title_s (string, 2–79 chars) | title_dl (string, 0–200 chars) | source_url (string, 13–64 chars) | authors (list, 0–10 items) | snippet_s (string, 0–291 chars) | text (string, 21–100k chars) | date (timestamp[ns], 1926-02-14 to 2030-07-14) | publish_date_dl (string, 0–10 chars) | url (string, 15–590 chars) | matches (list, 1–278 items)
---|---|---|---|---|---|---|---|---|---|
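The rows below follow this schema. As an illustration only, here is a minimal sketch of how such a table could be loaded and filtered by the `matches` field with pandas; the file name `articles.parquet` is a placeholder assumption, not part of the source.

```python
import pandas as pd

# Load the table (the file name is a placeholder assumption).
df = pd.read_parquet("articles.parquet")

# Each row carries the article text plus a list of search matches,
# e.g. [{"date": "2023/02/19", "position": 25, "query": "AI journalism"}].
def has_query(matches, query):
    """Return True if any match entry used the given search query."""
    return any(m.get("query") == query for m in matches)

ai_journalism = df[df["matches"].apply(lambda m: has_query(m, "AI journalism"))]
print(ai_journalism[["title_s", "source_url", "date"]].head())
```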
Artificial Healthcare Renaissance
|
Artificial Healthcare Renaissance
|
https://juliampuaschunder.com
|
[] |
Artificial Intelligence (AI) – measured by Global Connectivity – is ... Union capital and labor freedom. The New School for Social Research Economic ...
|
Photo credit: Martin von den Driesch
Artificial Intelligence (AI) – measured by Global Connectivity – is significantly positively correlated with freedom from corruption. Graph 1 presents the first Corruption Freedom AI Transition Index, which marks AI hubs perceived as corruption-free in green, less connected areas of the world perceived as corrupt in red, and medium AI-connectivity and corruption hubs in yellow.
Graph 1: Corruption Freedom AI Transition Index
The ongoing market transition of outsourced decision-making to AI hence starts from relatively corruption-free territories. AI therefore offers a comparatively corruption-free leadership decision-making tool that could be used to enhance global public services such as healthcare.
Artificial global governance should therefore come from the countries with high global connectivity and low corruption that are exhibited in the lower right quadrant in Graph 2.
Graph 2: Global connectivity (GCI) and Corruption Perception Index (CPI)
An AI-anti-corruption-health (AAH) index is calculated as the product of a country’s global connectivity (GCI) in 2019, its Corruption Perception Index (CPI) in 2019, and its Health Quality and Access Index in 2016.
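To make the arithmetic concrete, here is a minimal sketch of that calculation; the country names and component values below are hypothetical placeholders, and treating the index as a plain product of the three components is an assumption drawn from the description above.

```python
# Illustrative sketch of the AAH index described above.
# All values are hypothetical placeholders, not figures from the study.
countries = {
    # country: (GCI 2019, CPI 2019, Health Quality and Access Index 2016)
    "CountryA": (75.0, 80.0, 90.0),
    "CountryB": (40.0, 35.0, 55.0),
}

def aah_index(gci: float, cpi: float, haq: float) -> float:
    """AAH index as the product of connectivity, corruption perception and health access/quality."""
    return gci * cpi * haq

for name, (gci, cpi, haq) in countries.items():
    print(f"{name}: AAH = {aah_index(gci, cpi, haq):,.0f}")
```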
Graph 3 displays this index for countries around the world. The higher the index, the greener a country is colored: green countries feature high internet connectivity, are perceived by their citizens as relatively free of corruption, and offer good access to and quality of general healthcare. The lower the index, the redder a country is colored: red countries are less connected, are perceived as more corrupt, and offer their citizens worse access to and quality of general healthcare. Medium AI-connectivity and corruption hubs with medium access to and quality of healthcare appear in yellow.
Graph 3: AI-anti-corruption-health (AAH) index for 79 countries of the world
Josef Mantl Moving Forward Digital Round Table
Session 2: Moving Forward Digital Round Table
Session 3: Moving Forward Digital Round Table
Session 4: Moving Forward Digital Round Table, Daten und Fakten zur ‘Intergenerationellen Fairness’ und welche soziale und psychosoziale Trends zu beobachten sind [Data and facts on ‘intergenerational fairness’ and which social and psychosocial trends can be observed]
Session 5: The Impact of COVID-19 on Business & Society
TUtheTOP alumni club
Puaschunder, J.M. (2020). Bad news are selling? The pandemic of fear and the responsibility of media to build culture of speech to shape our contemporary society. Invited Video Speech on ‘Healthcare in a Post-COVID-19 World.’ Conference Ecowellness on System Change: The Chances and Transformations of Healthcare after the Coronacrisis, July 14-15, 2020.
The Project is hosted at Yale University in cooperation with the Parsons School of Design, The New School, New York.
Financial support is most gratefully acknowledged from The New School’s School of Public Engagement.
Coverage:
Media: Puaschunder, J.M. (2021). Panel ‘No time to think of what you don’t have: Digitalisation & AI in Europe ,’ European Liberal Forum in cooperation with NEOS Lab , Billrothhaus, Vienna, Austria, European Union, November 16, 2021.
Puaschunder, J.M. (2020). Corona-Update: Wie Big Data im Kampf gegen die Pandemie helfen kann [Corona update: How big data can help in the fight against the pandemic], The New Austria and Liberal Forum Lab Talk, May 8.
Podcast:
2019: Artificial Intelligence Ethics
Upcoming: Artificial Intelligence Ethics Podcast second edition featuring Guest speakers on Artificial Intelligence Ethics in Medicine on behalf of the European Society of Medicine
Output:
Publications:
Books
Work-in-progress
Puaschunder, J.M. (under review). The Future of Medicine: Artificial Intelligence, Algorithms, Big Data and Robotics in Healthcare.
Puaschunder, J.M. (under review). Human Rights Online .
Puaschunder, J.M. (under review). Digital inequality . Oxford University Press.
Puaschunder, J.M. (under review). Artificial Intelligence Ethics . Routledge.
Puaschunder, J.M. (forthcoming). Nudgitize yourself! Behavioral Economics for Everyone . Kindle Direct Publishing.
Published books
Puaschunder, J.M. (2022, 2nd edition). Advances in Behavioral Economics and Finance Leadership: Strategic Leadership, Wise Followership and Conscientious Usership in the Digital Century . Cham: Springer Nature.
Puaschunder, J.M. (2021). Verhaltensökonomie und Verhaltensfinanzökonomie: Ein Vergleich europäischer und nordamerikanischer Modelle. [ Behavioral Economics and Finance Leadership : Nudging and Winking to make Better Choices ] Springer Gabler.
Articles
Work-in-progress
Puaschunder, J.M. (forthcoming). The future of resilient finance: The relation of cryptocurrencies and sustainability ethics. In W. Leal, Climate Change Policies: Science and Technology in Support of Policy-Making in Climate Change Mitigation and Adaptation , Springer Nature.
Published articles
Puaschunder, J.M. & Gelter, M. (2022). The law, economics and governance of Generation COVID-19 long-haul . Indiana Health Law Review , 19 , 1, 47-126.
Gelter, M. & Puaschunder, J.M. (2021). COVID-19 and Comparative Corporate Governance . Journal of Corporation Law , 46 , 3, 557-629.
Coverage: Harvard Law School Forum on Corporate Governance , February 16, 2021.
Puaschunder, J.M. (2020). Revising growth theory in the Artificial Age: Putty and clay labor . Archives in Business Research, 8 , 3, 65-107.
Puaschunder, J.M. & Feierabend, D. (2020). Ancient legal codes as basis for Artificial Intelligence regulation in the 21st century. Scientia Moralitas – International Journal of Multidisciplinary Research, 5, 1, 1-15.
Puaschunder, J.M., Mantl, J. & Plank, B. (2020). Medicine of the future: The power of Artificial Intelligence (AI) and big data in healthcare. Research Association for Interdisciplinary Studies Journal for Social Sciences , 4 , 1, 1-8.
Beerbaum, D., Piechocki, M. & Puaschunder, J.M. (2019). Measuring accounting reporting complexity with customized extensions XBRL: A behavioral economics approach. Journal of Applied Research in the Digital Economy , 1 , 2-38.
Puaschunder, J.M. (2019). Value at looking back: Towards an Empirical validation of the role of reflexivity in econo-historic backtesting: Economic market prediction corrections correlate with future market performance , Oxford Journal of Finance and Risk Perspectives , 8 , 223-239.
Beerbaum, D., Piechocki, M. & Puaschunder, J.M. (2019). Accounting reporting complexity measured behaviorally . Internal Auditing & Risk Management , 4 , 56, 35-47 .
Beerbaum, D. & Puaschunder, J.M. (2019). A behavioral approach to irrational exuberances: An Artificial Intelligence roboethics taxonomy . Scientia Moralitas: International Journal of Multidisciplinary Research , 4 , 1, 1-30.
Puaschunder, J.M. (2019). Artificial Intelligence in the healthcare sector . Scientia moralitas: International Journal of Multidisciplinary Research , 4 , 2, 1-14.
Puaschunder, J.M. (2019). Dignity and utility of privacy and information sharing in the digital big data age . International Journal of Commerce and Management Research , 5 , 4, 62-70.
Puaschunder, J.M. (2019). Organizational Artificial Intelligence behavior. Journal of Applied Research in the Digital Economy , 2 , 1, 1-14.
Puaschunder, J.M. (2019). Towards a utility theory of privacy and information sharing: The introduction of hyper-hyperbolic discounting . International Journal of Strategic Information Technology and Applications (IJSITA) , 10 , 1, 1-22.
Puaschunder, J.M. (2019). Towards a utility theory of privacy and information sharing and the introduction of hyper-hyperbolic discounting in the digital big data age , Research Association for Interdisciplinary Studies Collective Volume – Economic Science , 2019, pp. 4-46.
Puaschunder, J.M. (2019). Artificial Diplomacy: A guide for public officials to conduct Artificial Intelligence, Journal of Applied Research in the Digital Economy , 1 , 39-54, translated by Run Wang into Chinese.
Puaschunder, J.M. (2019). Big data ethics. Journal of Applied Research in the Digital Economy , 1 , 55-75.
Puaschunder, J.M. (2019). On Artificial Intelligence’s razor’s edge: On the future of democracy and society in the artificial age . Journal of Economics and Business , 2, 1, 100-119.
Puaschunder, J.M. (2017). Nudging in the digital big data era . European Journal of Economics, Law and Politics, 4, 4, 18-23 .
Puaschunder, J.M. (2017). Nudgital: Critique of a behavioral political economy . Archives of Business Research , 5 , 9, 54-76.
Puaschunder, J.M. (2017). The nudging divide in the digital big data era. International Journal of Research in Business, Economics and Management, 1 , 3, 49-53. Coverage: The Living Library
Puaschunder, J.M. (2016). Putty capital’s shadow of the invisible hand on clay labor: On the emergent risk of differing speeds of European Union capital and labor freedom . The New School for Social Research Economic Review: A Journal of Critical Economics at The New School, 8, 3, 147-168 .
Chapters
Proceedings
Work-in-progress
Puaschunder, J.M. (forthcoming). Prospect theory incentives in ‘pay for sustainability’ remuneration schemes. Proceedings of the Scientia Moralitas conference . February 19, 2023.
Published proceedings
Puaschunder, J.M. (2021). Unequal world in healthcare, digitalization and corruption. Proceedings of the 3rd Unequal World Conference of the United Nations , United Nations New York, New York, United States.
Tomek, R.S., Richter, R., Michalsen, A., Puaschunder, J.M., Quarch, Ch., Rohrbach, W., Vormann, J., Rehak, W. & Grimm, J. (2021). System change transformation of healthcare. Proceedings of the 3rd Unequal World Conference of the United Nations, United Nations New York, New York, United States.
Tomek, R.St., Richter, R., Lakonig, St., Bonelli, R.M., Hosang, M., Michalsen, A., Probst, T., Puaschunder, J.M., Rohrbach, W., Schuchardt, E., Vormann, J., Stordalen, G., Lehrach, H., Piketty, Th., Heinrichs, J. (2020). Salzburg Declaration. Press Release of the Conference Proceedings on ‘System Change?! Die Chance der Transformation des Gesundheitswesens: Analyse und Chancen des Gesundheitswesens,’ July 14-15.
Puaschunder, J.M. (2019). Dignity and utility of privacy and information sharing in the digital big data age. Oxford Academic Research Network Journal of Finance and Risk Perspectives, Special Issue 17th FRAP Finance, Risk and Accounting Perspectives Conference Proceedings (2019) , pp. 26-54.
Puaschunder, J.M. (2018). Nudgital: Critique of a behavioral political economy and towards a utility theory of privacy and information sharing. Research and Management Learning Unconference 2018 Proceedings, University of St. Andrews, St. Andrews, Scotland, European Union, July 2-3.
Puaschunder, J.M. (2018). Artificial Intelligence applications and innovations. Proceedings of the 15th International Conference on Artificial Intelligence Applications and Innovations and 20th International Conference on Engineering Applications of Neural Networks at Crete, Greece, European Union, May 24-26.
Blogs
Puaschunder, J.M. (2022). Artificial Intelligence hubs: How to catch the next pandemic early . De Gruyter Blog , June 27, 2022.
Puaschunder, J.M. (2019). Big Data, algorithms and health data . Vienna, Austria: The New Austria and Liberal Forum Lab Blog.
Proceedings editor
Tomek, R.S., Richter, R., Michalsen, A., Puaschunder, J.M., Quarch, Ch., Rohrbach, W., Vormann, J., Rehak, W. & Grimm, J. (2021). System change transformation of healthcare. Proceedings of the 3rd Unequal World Conference of the United Nations , United Nations New York, New York, United States.
Tomek, R.St., Richter, R., Lakonig, St., Bonelli, R.M., Hosang, M., Michalsen, A., Probst, T., Puaschunder, J.M., Rohrbach, W., Schuchardt, E., Vormann, J., Stordalen, G., Lehrach, H., Piketty, Th., Heinrichs, J. (2020). Salzburg Declaration. Press Release of the Conference Proceedings on ‘System Change?! Die Chance der Transformation des Gesundheitswesens: Analyse und Chancen des Gesundheitswesens,’ July 14-15.
Consultancy reports
Reports on behalf of the European Parliament European Liberal Forum in cooperation with The New Austria and Liberal Forum:
Puaschunder, J.M. (2019). Stakeholder perspectives on Artificial Intelligence (AI), robotics and big data in healthcare: An empirical study . Report on behalf of a European Parliament Agency. New York, New York.
Puaschunder, J.M. (2019). The legal and international situation of AI, robotics and big data with attention to healthcare . Report on behalf of a European Parliament Agency. New York, New York.
Puaschunder, J.M. (2019). Big data, Artificial Intelligence and healthcare: Developing a legal, policy and ethical framework for using AI, big data, robotics and algorithms in healthcare . Report on behalf of the European Parliament European Liberal Forum in cooperation with The New Austria and Liberal Forum Lab. Vienna, Austria, European Union.
Expert conference calls:
Puaschunder, J.M. (2023). Agenda for Global Innovation and Digital Ethics (AGIDE), Austrian Academy of Sciences.
Puaschunder, J.M. (2019). Big Data, Algorithms and Health Data. The New Austrian and Liberal Forum.
Puaschunder, J.M. (2019). On Artificial Intelligence Ethics. Austrian Consulate San Francisco.
Invited presentations:
Puaschunder, J.M. (upcoming). Business Breakfast Keynote address for the American Chamber of Commerce in Austria in the Hilton Vienna Plaza, Austria, European Union.
Puaschunder, J.M. (2023). Resilience leadership . Scientia Moralitas conference . February 19, 2023.
Puaschunder, J.M. (2022). Ökonomie der Diversität im Zeitalter der Unsicherheit [Economics of Diversity in the Age of Uncertainty] . Hostettler & Company 10th HCM Board Conference , Hotel Park Hyatt Zurich, Zurich, Switzerland, November 8, 2022.
Puaschunder, J.M. (2022). Resilience Finance: The role of diversity, hope, and science diplomacy . The New School for Social Research (NSSR) Economics Research Workshop: Climate Change and Dynamic Macroeconomics: New Perspectives on Climate Economics, October 25, 2022.
Invited Presentation on The Law, Economics and Governance of COVID-19 and its Long-Term Implications , European Society of Medicine General Assembly , Madrid, Spain, European Union, August 4-6, 2022.
Puaschunder, J.M. (2022). The Society of Government Economists opening of ‘New Developments in Health Economics in the Pandemic Era‘ with ‘COVID-19 Long Haulers: Democratization of health and healthcare information and the economics of prevention, e-health and rest,’ Society of Government Economists, January 28.
Puaschunder, J.M. (2021). Panel ‘No time to think of what you don’t have: Digitalisation & AI in Europe ,’ European Liberal Forum in cooperation with NEOS Lab , Billrothhaus, Vienna, Austria, European Union, November 16, 2021.
Puaschunder, J.M. (2021). Invited Panel Member of an Intercontinental Plenary of Public Administration in the Fourth Industrial Revolution, International Institute of Administrative Sciences (IASIA) Conference , Bela Bela, Limpopo, South Africa, July 27, 2021.
Puaschunder, J.M. (2021). Invited lecture on economic, psychological and political developments in the U.S. and Europe on System Change. Followed by moderating a Panel on the Salzburg European Declaration from the Gasteinertal, Ecowellness Group Annual Meeting , Grand Hotel de l’Europe, Bad Gastein, Austria, European Union, July 15, 2021.
Puaschunder, J.M. (2021). Our world after COVID-19: Generation COVID Long Haulers and the economics of health, minimalism and rest. Invited presentation to the European Commission DG Health and Food Safety Unit C2 – Health information and integration in all policies, May 6, 2021.
Gelter, M. & Puaschunder, J.M. (2021). COVID-19 and Comparative Corporate Governance . Invited Comparative Private Law Lecture, Universität Innsbruck & Università degli Studi di Padova Scuola di Giurisprudenza, May 25, 2021.
Puaschunder, J.M. (2021). University of Ohio, Moritz College of Law, Ohio State Business Law Journal Symposium Confronting Crisis: Preparing for the Unexpected , Columbus, Ohio, United States, invited guest lecture, March 10.
Puaschunder, J.M. (2021). Ethical Dilemmas Carthage Symposium, Carthage College, Kenosha, Wisconsin, Madison, United States, invited guest lecture, February 23.
Puaschunder, J.M. (2021). University of Pennsylvania Journal of Business Law Volume 23 Symposium, ‘Business Law Developments in the COVID Era’ invited panelist, February 6th.
Puaschunder, J.M. (2021). Kommunalkredit Bank, Austria AG, Ökonomie-Stunde mit Julia Puaschunder [Economics hour with Julia Puaschunder], COVID-19 crisis economics and financial market performance versus real economy gap, January 19.
Puaschunder, J.M. (2020). Die Pandemie des Geschäftes mit der Angst und die Verantwortung der Medien [The pandemic of the business of fear and the responsibility of the media]. Institut für Religiosität in Psychiatrie und Psychotherapie, Vienna, Austria, European Union, September 18.
Puaschunder, J.M. (2020). Corona-Update: Wie Big Data im Kampf gegen die Pandemie helfen kann [Corona update: How big data can help in the fight against the pandemic], The New Austria and Liberal Forum Lab Talk, May 8.
Puaschunder, J.M. (2020). Artificial Society, Economy and Democracy: Contemporary legal, economic and societal trends in the introduction of Artificial Intelligence (AI), robotics and big data insights. Harvard University, Cabot Division Room, Harvard Chemistry, Mallinckrodt Building, Cambridge, USA, March 4.
Puaschunder, J.M. (2020). Künstliche Intelligenz und Gesundheitswesen [Artificial intelligence and healthcare]. Josef Mantl Communications, Moving Forward: Shaping the Future, Le Méridien, Vienna, Austria, European Union, January 9.
Puaschunder, J.M. (2019). Revising growth theory in the artificial age: Putty and clay labor . Nationalökonomische Gesellschaft Österreichs Austrian Economic Association (NOeG) Winter Workshop, Vienna University of Economics and Business, Vienna, Austria, European Union, December 17-20.
Puaschunder, J.M. (2019). Big data, algorithms, and health data , Study Report Launch, The New Austria and Liberal Forum, Vienna, Austria, European Union, November 12, 2019.
Puaschunder, J.M. (2019). On Artificial Intelligence’s razor’s edge: On the future of democracy and society in the artificial age. 17th Finance, Risk and Accounting Perspectives Conference , Hanken School of Economics, Helsinki, Finland, European Union, September 23-25.
Puaschunder, J.M. (2019). Dignity and utility of privacy and information sharing in the digital big data age , 17th Finance, Risk and Accounting Perspectives Conference , Hanken School of Economics, Helsinki, Finland, European Union, September 23-25.
Puaschunder, J.M. (2019). Value at looking back: Towards an empirical validation of the role of reflexivity in econo-historic backtesting: Economic market prediction corrections correlate with future market performance. 17th Finance, Risk and Accounting Perspectives Conference , Hanken School of Economics, Helsinki, Finland, European Union, September 23-25.
Puaschunder, J.M. (2019). Revising growth theory in the artificial age: Putty and clay labor . Academy of Behavioral Economics and Finance 12th Annual Meeting, The New School, New York, New York , USA, September 19.
Invited participant, Workshop on ‘Science, Technology and Economic Policy ,’ National Academy of Sciences, Engineering, Medicine, invited by the Harvard Laboratory for Innovation Sciences at The Keck Center for the National Academies, Washington DC, USA, May 29, 2019.
Puaschunder, J.M. (2019). Thinking out of the growth theory box: Putty and clay labor . Princeton University, Princeton, New Jersey, May 2.
Puaschunder, J.M. (2019). Artificial Intelligence Ethics . Diplomatic Academy of Austria, Vienna, Austria, European Union, April 23.
Keynote moderator for a panel on Europe in a Changing World Order. European Horizons Conference 2019, Yale University , Yale, New Haven, USA, April 6.
Invited Expert Technology and Internet Governance Workshop. European Horizons Conference 2019, Yale University , Yale, New Haven, USA, April 6.
Puaschunder, J.M. (2019). Dignity and utility of privacy and information sharing in the digital big data age . The Society of Government Economists 2019 Annual Conference, Janet Norwood Conference and Training Center, U.S. Bureau of Labor Statistics, Washington, DC, USA, April 5.
Puaschunder, J.M. (2018). Artificial Intelligence Ethics , Nominee, Austrian Research and Innovation Talks 2018, Embassy of Austria to the United States, International Court, Washington D.C., December 7.
Puaschunder, J.M. (2018). Research for a Sustainable Future . Invited Keynote Address, Fourth American Academic Research Conference on Global Business, Economics, Finance and Social Sciences , Wagner College, New York, New York, United States, April 13.
Puaschunder, J.M. (2016). Globalization and geopolitics: The new economy is flat! Strategic Design and Management in New Economies, Guest Lecture, Parsons School of Design, The New School , New York, NY, USA, September 27.
Conference presentations:
Puaschunder, J.M. (2023). Human Rights Online: Towards a New Generation of Human Rights in the Virtual World . 17th Western Economic Association International Annual Conference , Melbourne, Australia, April 17-19, 2023.
Puaschunder, J.M. (2023). Resilience leadership . Scientia Moralitas conference . February 19, 2023.
Puaschunder, J.M. (2022). Human Rights Online: Towards a new generation of human rights in the virtual world. 3rd Unequal World Conference of the United Nations , United Nations New York, New York, United States, December 8, 2022.
Puaschunder, J.M. (2022). Extension of Endogenous Growth Theory: Artificial Intelligence as Self-Learning Entity . 30th International RAIS Conference on Social Sciences and Humanities organized by the Research Association for Interdisciplinary Studies (RAIS) , American University, in Washington DC, USA, October 23, 2022.
Puaschunder, J.M. (2022). Health Capital and Health Wealth . 29th International RAIS Conference on Social Sciences and Humanities organized by Research Association for Interdisciplinary Studies (RAIS) , online, August 14-15, 2022.
Puaschunder, J.M. (2022). Digital Inequality: A Research Agenda . 28th International RAIS Conference on Social Sciences and Humanities organized by Research Association for Interdisciplinary Studies (RAIS) , online, June 26-27, 2022.
Puaschunder, J.M. (2022). Advances in Socially Responsible Investments in Responsible Finance . 26th International RAIS Conference on Social Sciences and Humanities organized by Research Association for Interdisciplinary Studies (RAIS) , February 27-28, 2022.
Puaschunder, J.M. (2022). Inclusive Leadership and the Economics of Diversity . 26th International RAIS Conference on Social Sciences and Humanities organized by Research Association for Interdisciplinary Studies (RAIS) , February 27-28, 2022.
Puaschunder, J.M. (2021). Advances in Behavioral Economics and Finance Leadership. 25th International RAIS Conference on Social Sciences and Humanities organized by Research Association for Interdisciplinary Studies (RAIS) at The Erdman Center at Princeton University , Princeton, New Jersey, United States, December 5, 2021.
Puaschunder, J.M. (2021). Sustainable Lifestyle Revolution: Agrohoods, Ecowellness and Biophilia Trends. 25th International RAIS Conference on Social Sciences and Humanities organized by Research Association for Interdisciplinary Studies (RAIS) at The Erdman Center at Princeton University , Princeton, New Jersey, United States, December 5, 2021.
Golofast, A., Kiyashchenko, L.P. & Puaschunder, J.M. (2021). Humanness in the COVID-19 era . 24th Research Association for Interdisciplinary Studies (RAIS) conference , October 17, 2021.
Bariffi, F. & Puaschunder, J.M. (2021). Artificial Intelligence and Big Data in the age of COVID-19 . 24th Research Association for Interdisciplinary Studies (RAIS) conference , October 17, 2021.
Puaschunder, J.M. (2021). Law & Economics . 24th Research Association for Interdisciplinary Studies (RAIS) conference , October 17, 2021.
Puaschunder, J.M. (2021). Healthcare dependent multiplier . 23rd Research Association for Interdisciplinary Studies (RAIS) conference , August 15, 2021.
Puaschunder, J.M. (2021). Ecowellness Conference on System Change: The transformation of the healthcare sector. “System Change 3?! Die Transformation des Gesundheitswesens,” System Change Transformation of the Healthcare Sector, Grand Hôtel de l’Europe, Bad Gastein, Ecowellness-Valley, July 16, 2021.
Puaschunder, J.M. (2021). COVID-19 Crisis Economics. 11th Portuguese Finance Network Conference , July 6, 2021.
Gelter, M. & Puaschunder, J.M. (2021). COVID-19 and comparative corporate governance . Annual Conference of the Council for European Studies , June 23, 2021.
Puaschunder, J.M. (2021). Our world after COVID-19: Generation COVID-Long Haulers and the economics of health, minimalism and rest . Ecowellness Conference on System Change of the Health System as part of a new holistic approach to help to improve both the pandemic and the climate problems, May 4, 2021.
Puaschunder, J.M. (2021). Value of COVID-19: Digitalized healthcare, luxury consumption and global education. Association for Integrity and Responsible Leadership in Economics and Associated Professions ( AIRLEAP) Sessions for the Western Economic Association International (WEAI) Conference, March 17-19.
Puaschunder, J.M. (2021). The future of the city after COVID-19: Digitalization, preventionism, and environmentalism. Association for Integrity and Responsible Leadership in Economics and Associated Professions ( AIRLEAP) Sessions for the Western Economic Association International (WEAI) Conference, March 17-19.
Puaschunder, J.M. (2021). Access to affordable preventive medicine: Common Wealth in Common Health. 2nd Unequal World Conference of the United Nations , United Nations New York, New York, United States, January 24-25.
Puaschunder, J.M., Gelter, M. & Mörtl, D. (2021). Inequality in COVID-19: Legal, economic, trade and governance aspects. 2nd Unequal World Conference of the United Nations , United Nations New York, New York, United States, January 24-25.
Gelter, M. & Puaschunder, J.M. (2021). COVID-19 and comparative corporate governance. The Association of American Law Schools (AALS), Annual Meeting on The Power of Words, January 5-9.
Gelter, M. & Puaschunder, J.M. (2020). COVID-19 and comparative corporate governance . 16th Annual Conference of the Italian Society of Law and Economics , December 16-19.
Puaschunder, J.M. (2020). Artificial Intelligence and healthcare. Artificial Intelligence and human enhancement: Affirmative and critical approaches in the humanities from both sides of the Atlantic conference, Austrian Academy of Sciences, Austria, European Union, October 29.
Gelter, M. & Puaschunder, J.M. (2020). COVID-19 and comparative corporate governance . 5th Annual conference of the French Association of Law and Economics , October 15.
Puaschunder, J.M. (2020). Economic growth in times of pandemics . ConScienS conference on Science & Society : Pandemics and their impact on society, September 6-7.
Puaschunder, J.M. (2020). Value at COVID-19: Digitalized Healthcare, Luxury Consumption and Global Education . ConScienS conference on Science & Society: Pandemics and their impact on society, September 6-7.
Puaschunder, J.M. (2020). The future of the city after COVID-19: Digitalization, preventionism and environmentalism. ConScienS conference on Science & Society: Pandemics and their impact on society, September 6-7.
Puaschunder, J.M., Gelter, M. & Sharma, S. (2020). COVID-19 Shock: Considerations on socio-technological, legal, corporate, economic and governance changes and trends. 18th Research Association for Interdisciplinary Studies (RAIS) Conference at Princeton University, Princeton, New Jersey, United States, August 17-18.
Puaschunder, J.M. & Beerbaum, D. (2020). The future of healthcare around the world: Four indices integrating technology, productivity, anti-corruption, healthcare and market financialization. 18th Research Association for Interdisciplinary Studies (RAIS) Conference at Princeton University, Princeton, New Jersey, United States, August 17-18.
Puaschunder, J.M. (2020).The future of Artificial Intelligence in international healthcare: An index. 17th International RAIS Conference on Social Sciences and Humanities organized by the Research Association for Interdisciplinary Studies (RAIS) , United States, June 1.
Puaschunder, J.M. (2020). Healthcare responses to COVID-19 around the world and digital advancements to prevent pandemics. 17th International RAIS Conference on Social Sciences and Humanities organized by the Research Association for Interdisciplinary Studies (RAIS) , United States, June 1.
Puaschunder, J.M. & Feierabend, D. (2020). Ancient legal codes as basis for Artificial Intelligence regulations in the 21st century. 17th International RAIS Conference on Social Sciences and Humanities organized by the Research Association for Interdisciplinary Studies (RAIS) , United States, June 1.
Beerbaum, D., Ikäheimo, S. & Puaschunder, J.M. (2020). Behavioral approaches to voluntary change and innovation in external financial reporting: A case study of the restructuring of the Annual Report on Form 20-F and the financial report. 43rd European Accounting Association Conference, Bucharest, Romania, European Union, May 27-29.
Puaschunder, J.M., Beerbaum, D. & Ikäheimo, S. (2020). Digital accounting trends of the future: A behavioral analysis. 43rd European Accounting Association Conference, Bucharest, Romania, European Union, May 27-29.
Puaschunder, J.M., Mantl, J. & Plank, B. (2020). Medicine of the future: The power of Artificial Intelligence (AI) and big data in healthcare. 16th Research Association for Interdisciplinary Studies (RAIS) Conference at Johns Hopkins University School of Medicine, Baltimore, Maryland, United States, March 31.
Puaschunder, J.M. (2020). On freedom in the artificial age. 16th Research Association for Interdisciplinary Studies (RAIS) Conference at Johns Hopkins University School of Medicine, Baltimore, Maryland, United States, March 30.
Puaschunder, J.M. (2020). Revising growth theory in the Artificial Age: Putty and clay labor. 46th Eastern Economic Association Conference, Boston Sheraton, Boston, Massachusetts, United States, February 29.
Puaschunder, J.M. (2020). Data fiduciary in order to alleviate principal-agent problems in the artificial big data age, 46th Eastern Economic Association Conference, Boston Sheraton, Boston, Massachusetts, United States, February 29.
Puaschunder, J.M. (2020). The potential for Artificial Intelligence in healthcare . Josef Mantl Communications, Moving Forward Roundtable, Vienna, Austria, January 9.
Puaschunder, J.M. (2019). Value at looking back: Towards an empirical validation of the role of reflexivity in econo-historic backtesting: Economic market prediction corrections correlate with future market performance . World Finance Conference & World Finance Banking Symposium , University of Delhi, New Delhi, India, December 19-21.
Puaschunder, J.M. (2019). Revising growth theory in the artificial age: Putty and clay labor . Nationalökonomische Gesellschaft Österreichs Winter Workshop, Vienna University of Economics and Business, Vienna, Austria, European Union, December 17-20.
Puaschunder, J.M. & Gelter, M. (2019). On the political economy of the European Union. 15th Research Association for Interdisciplinary Studies (RAIS) Conference at Johns Hopkins University School of Medicine, Baltimore, Maryland, United States, November 6-7.
Puaschunder, J.M. (2019). Dignity and utility of privacy and information sharing in the digital big data age . 17th Finance, Risk and Accounting Perspectives Conference, Hanken School of Economics , Helsinki, Finland, European Union, September 23-25.
Puaschunder, J.M. (2019). On Artificial Intelligence’s razor’s edge: On the future of democracy and society in the artificial age . 17th Finance, Risk and Accounting Perspectives Conference, Hanken School of Economics , Helsinki, Finland, European Union, September 23-25.
Puaschunder, J.M. (2019). Value at looking back: Towards an empirical validation of the role of reflexivity in econo-historic backtesting: Economic market prediction corrections correlate with future market performance . 17th Finance, Risk and Accounting Perspectives Conference, Hanken School of Economics , Helsinki, Finland, European Union, September 23-25.
Puaschunder, J.M. (2019). Towards legal empirical macrodynamics: A research agenda . 14th International RAIS Conference on Social Sciences and Humanities organized by Research Association for Interdisciplinary Studies (RAIS) at The Erdman Center at Princeton University , Princeton, New Jersey, United States, August 19-20.
Puaschunder, J.M. (2019). On artificial intelligence’s razor’s edge: On the future of democracy and society in the artificial age . International Association of Schools and Institutes of Administration 2019 Conference, Instituto Universitário de Lisboa-Instituto Universitário de Lisboa (ISCTE-IUL) , Lisbon, Portugal, European Union, July 22-26.
Puaschunder, J.M. (2019). Value at looking back: Towards an empirical validation of the role of reflexivity in econo-historic backtesting: Economic market prediction corrections correlate with future market performance . International Association of Schools and Institutes of Administration 2019 Conference, Instituto Universitário de Lisboa-Instituto Universitário de Lisboa (ISCTE-IUL) , Lisbon, Portugal, European Union, July 22-26.
Puaschunder, J.M. (2019). Artificial Intelligence: On the virtue of killing in the artificial age. IOP 4.0, Ahead of Our Time: Meaningful and Connected Life: Ethics, Human, Machine, Interface: Reinventing the World at Work: Reactive…Proactiveness, Pretoria, South Africa, The 21st Annual Society for Industrial and Organizational Psychology Conference , CSIR International Convention Centre, Pretoria, South Africa, July 22-26.
Puaschunder, J.M. (2019). On artificial intelligence’s razor’s edge: On the future of democracy and society in the artificial age , IOP 4.0, Ahead of Our Time: Meaningful and Connected Life: Ethics, Human, Machine, Interface: Reinventing the World at Work: Reactive…Proactiveness, Pretoria, South Africa, The 21st Annual Society for Industrial and Organizational Psychology Conference , CSIR International Convention Centre, Pretoria, South Africa, July 22-26.
Puaschunder, J.M. (2019). An inquiry into the nature and causes of Climate Wealth of Nations: What temperature finance gravitates towards? Sketching a climate-finance nexus and outlook on climate change-induced finance prospects , 10th International Research Meeting in Business and Management (IRMBAM-2019) , July 8-10, Nice, France, European Union.
Puaschunder, J.M. (2019). Dignity and utility of privacy and information sharing in the digital big data age , 10th International Research Meeting in Business and Management (IRMBAM-2019) , July 8-10, Nice, France, European Union.
Beerbaum, D. & Puaschunder, J.M. (2019). A behavioral economics approach to sustainability: Digital nudging. World Academy of Science, Engineering and Technology, Singapore , Asia, July 4-5.
Puaschunder, J.M. (2019). Value at looking back: Towards an Empirical validation of the role of reflexivity in econo-historic backtesting: Economic market prediction corrections correlate with future market performance . Finance and Society panel, Conference of the Society for the Advancement of Socio-Economics (SASE), The New School, New York City, United States, June 28.
Puaschunder, J.M. (2019). Artificial intelligence ethics . Big Data, Algorithms, and Health Data, The New Austria and Liberal Forum Science Lab, Vienna, Austria, European Union, June 24.
Puaschunder, J.M. (2019). Artificial intelligence evolution: On the virtue of killing in the artificial age . International Council for Small Business World Congress , Cairo, Egypt, Africa, June 20.
Puaschunder, J.M. (2019). Value at looking back: Towards an empirical validation of the role of reflexivity in econo-historic backtesting: Economic market prediction corrections correlate with future market performance . International Council for Small Business World Congress , Cairo, Egypt, Africa, June 19.
Puaschunder, J.M. (2019). On Artificial intelligence’s razor’s edge: On the future of democracy and society . International Council for Small Business World Congress , Cairo, Egypt, June 19.
Puaschunder, J.M. (2019). Artificial Intelligence market disruption . 13th International RAIS Conference on Social Sciences and Humanities organized by Research Association for Interdisciplinary Studies (RAIS) at Johns Hopkins University , Montgomery County Campus, Rockville, MD, United States, June 10-11.
Puaschunder, J.M. (2019). Value at looking back: Towards an empirical validation of the role of reflexivity in econo-historic backtesting: Economic market prediction corrections correlate with future market performance. NeuroPsychoEconomics Conference, LUISS University , Rome, Italy, June 7.
Beerbaum, D. & Puaschunder, J.M. (2019). A behavioral economics approach to sustainable finance architecture . Behavioural Finance Working Group (BFWG) conference, Queen Mary University of London , London, United Kingdom, June 6.
Puaschunder, J.M. (2019). Thinking out of the growth theory box: Putty and clay labor . Princeton University, Princeton, New Jersey, May 2.
Puaschunder, J.M. (2019). Dignity and utility of privacy and information sharing in the digital big data age . Western Social Science Association Annual Conference, San Diego , California, USA, April 27.
Puaschunder, J.M. (2019). Dignity and utility of privacy and information sharing in the digital big data age . The Society of Government Economists 2019 Annual Conference, Janet Norwood Conference and Training Center, U.S. Bureau of Labor Statistics , Washington, DC, USA, April 5.
Puaschunder, J.M. (2019). On Artificial intelligence’s razor’s edge: On the future of democracy and society . 12th International RAIS Conference on Social Sciences and Humanities organized by Research Association for Interdisciplinary Studies (RAIS) at The Erdman Center at Princeton University , Princeton, New Jersey, United States, April 3-4.
Puaschunder, J.M. (2019). Artificial Intelligence Ethics . International Atlantic Economic Society Annual Conference , Athens, Greece, European Union, March 27-30, 2019.
Puaschunder, J.M. (2019). Value at looking back: Towards an empirical validation of the role of reflexivity in econo-historic backtesting: Economic market prediction corrections correlate with future market performance . 45th Annual Eastern Economic Association Conference , New York Sheraton, New York, New York, United States, March 3.
Puaschunder, J.M. (2018). Towards a utility theory of privacy and information sharing , Austrian Economic Association Winter Workshop, Vienna University of Economics and Business, Vienna, Austria, Europe, December 18-20.
Puaschunder, J.M. (2018). Towards a utility theory of privacy and information sharing and the introduction of hyper-hyperbolic discounting in the digital big data age. Ethical Issues for the Economics Profession. 88th Annual Meeting of the Southern Economic Association, Marriott Marquis Washington, Washington D.C., USA, November 20.
Puaschunder, J.M. (2018). Towards a utility theory of privacy and information sharing and the introduction of hyper-hyperbolic discounting in the digital big data age . International Vincentian Business Ethics Conference , Ethics in the Digital Age Session, Marriott Downtown New York, New York, United States, October 27.
Puaschunder, J.M. (2018). Nudgitize me! A behavioral finance approach to minimize losses and maximize profits from heuristics and biases. International Leadership Association 20th Annual Global Conference , West Palm Beach, Florida, United States, October 26.
Puaschunder, J.M. (2018). Towards a utility theory of privacy and information sharing and the introduction of hyper-hyperbolic discounting in the digital big data age . The 11th Annual Meeting of the Academy of Behavioral Finance & Economics , DePaul University, Chicago, Illinois, United States, October 19.
Puaschunder, J.M. (2018). Econo-historic backtesting: Economic market prediction corrections correlate with future market performance. 86th International Atlantic Economic Conference , Westin Times Square Hotel New York, New York, United States, October 12. [International Atlantic Economic Society’s Best Undergraduate Paper Award Competition Judge]
Puaschunder, J. M. (2018). Nudgital: Critique of a Behavioral Political Economy . European Operational Researchers Conference EURO 2018 , Valencia, Spain, European Union, July 8-11.
Puaschunder, J.M. (2018). Nudgital: Critique of a behavioral political economy and towards a utility theory of privacy and information sharing. Research and Management Learning Unconference 2018, University of St. Andrews, St. Andrews, Scotland, European Union, July 2-3.
Puaschunder, J.M. (2018). Towards a utility theory of privacy and information sharing and the introduction of hyper-hyperbolic discounting in the digital big data age . 2018 Western Economic Association International (WEAI) Annual Conference , Sheraton Vancouver Wall Centre, Canada, June 26-30.
Puaschunder, J.M. (2018). Nudgitize me! A behavioral finance approach to minimize losses and maximize profits from heuristics and biases . 2018 International Council for Small Business World Congress , Reshaping the world through innovative SMEs, Taipei, Taiwan, Asia, June 26-29.
Puaschunder, J.M. (2018). Nudgitize me! A behavioral finance approach to minimize losses and maximize profits from heuristics and biases . 2018 New York International Academic Conference on Business, Economics, Finance and Accounting: Promoting Global Progress and Excellence in Academia . Hyatt Place Long Island City, New York, New York, United States, April 28.
Puaschunder, J.M. (2018). Nudgitize me! A behavioral finance approach to minimize losses and maximize profits from heuristics and biases . Fourth American Academic Research Conference on Global Business, Economics, Finance and Social Sciences , Wagner College, New York, New York, United States, April 13-15.
Puaschunder, J.M. (2018). Towards a utility theory of privacy and information sharing and the introduction of hyper-hyperbolic discounting in the digital big data age . 9th International RAIS Conference on Social Sciences and Humanities organized by Research Association for Interdisciplinary Studies (RAIS) at The Erdman Center at Princeton University , Princeton, New Jersey, United States, April 4-5.
Puaschunder, J.M. (2018). The nudging divide in the digital big data era . 7th LAEMOS Colloquium, Organizing for Resilience: Scholarship in Unsettled Times , Buenos Aires, Argentina, South America, March 22-24.
Puaschunder, J.M. (2018). Nudgital: Critique of Behavioral Political Economy . Economics and Behavior Panel, Eastern Economic Association Annual Conference , Boston Sheraton, Boston, MA, United States, March 4.
Puaschunder, J.M. (2018). ‘Nudging .’ Reformatting the world: An interdisciplinary conference on technology and the humanities. Vanier College, York University, Toronto, Canada, February 23.
Puaschunder, J.M. (2017). Nudgitize me! A behavioral finance approach to minimize losses and maximize profits from heuristics and biases . Academy of Behavioral Finance & Economics, 2017 Annual Meeting , University of California Los Angeles, Los Angeles, California, USA, October 19.
Puaschunder, J.M. (2017). Nudgital: Critique of Behavioral Political Economy . Behavioral Economics, The New School for Social Research, March 30.
Puaschunder, J.M. (2016). Global Alliance for Information and Communication Technologies and Development (GAID) as an International Platform for Cross-Sectoral Solution Finding and Policy Dialogue on Information and Communication Technologies for Development . 4th Annual International Conference on Sustainable Development hosted by the Earth Institute and the Sustainable Development Solutions Network, Alfred Lerner Hall, Columbia University, New York, New York, USA, September 22.
Invited events:
‘Effective Altruism Global Annual Conference ,’ Palace of Fine Arts Innovation Hangar, San Francisco, California, USA, August 11-13, 2017.
‘4th Annual Center for the Economic Analysis of Risk-Huebner Summer Risk Institute 2017 ,’ Georgia State University, Atlanta, Georgia, USA, July 25-26, 2017.
‘Fictitious Capital: Is Finance Appropriating Our Future? ,’ French General Consulate New York, New York, USA, June 27, 2017.
‘Disrupted: Speech and Democracy in the Digital Age ,’ Italian Academy of Columbia University, Columbia University, New York, New York, USA, May 1, 2017.
‘Robotics: Law & Ethics ,’ The Italian Academy for Advanced Studies in America, Columbia University, New York, New York, USA, March 4, 2016.
Community service:
Scientific Advisory Board, Finance, Risk and Accounting Perspectives (FRAP) Conference on ‘The Fintech Revolution – Sustainability meets Technology in Finance, Risk and Accounting’ of the Oxford Academic Research Network (ACRN), Hanken School of Economics, Helsinki, Finland, European Union, September 23-29, 2019.
Member of the International Program Committee , Associate Editor, Conference Minisymposium on ‘Sustainable Development and Climate Stabilization Modelling’ Chair, 9th Vienna Conference on Mathematical Modelling , Vienna University of Technology, Vienna, Austria, European Union, February 21-23, 2018.
Session Chair organizer, Artificial Intelligence Ethics. International Atlantic Economic Society Annual Conference . Athens, Greece, European Union, March 27-30, 2019.
Chair, Risk panel, 16th Finance, Risk and Accountability Perspectives Conference: Sustainability and Risk: Environmental, Social and Governance Perspectives , University of Cambridge, Cambridge UK, Great Britain, September 26, 2017.
Editorial Board for the ‘International Robotics & Automation Journal‘
Media coverage:
Bookforum
Leadersnet plus Event
Medicine of the future: A healthy mix
Puaschunder on Dignity and Utility of Privacy and Information Sharing in the Digital Big Data Age & The New School , Media Law Prof Blog
Sound of Business: The Advantage Austria US Podcast on ‘Innovation, Trade and Diplomacy in the Age of COVID-19 ‘
The nudging divide in the digital big data era . GovLab at the New York University Tandon School of Engineering, New York, New York, USA.
A Behavioral Economics Approach to Digitalisation . The Living Library, GovLab at the New York University Tandon School of Engineering, New York, New York, USA
Invited events:
Harvard Business School Healthcare conference, invited since 2016, Harvard Business School, Harvard University, Boston, MA, USA, January 29, 2016.
| 2020-02-27T00:00:00 |
2020/02/27
|
https://juliampuaschunder.com/artificial-healthcare/
|
[
{
"date": "2023/02/19",
"position": 71,
"query": "AI labor union"
}
] |
How chatbots can change journalism. Or not.
|
How chatbots can change journalism. Or not.
|
https://www.semafor.com
|
[
"Https",
"Www.Semafor.Com Author Gina-Chua",
"Executive Editor"
] |
ChatGPT and other AI systems don't do journalism well because, well, they weren't built to. They're language models, meaning they're really good at tasks ...
|
ChatGPT and other AI systems don’t do journalism well because, well, they weren’t built to. They’re language models, meaning they’re really good at tasks involving language. But they’re not fact models, or verification models, or math models, which makes them terrible at figuring out what’s true or adding two numbers together — both of which are pretty basic journalism skills. But complaining that they’re bad at journalism is like being angry at Excel because it doesn’t draw pictures well.
What they are good at is language. I’ve been playing with various AI-powered chatbots for the last week or so, and two things are absolutely clear to me:
There are useful, here-and-now real world applications that could materially improve how journalism is practiced and created;
The statement above might no longer be true.
I asked Claude, a chatbot created by Anthropic (which Google recently invested $300 million in) to copy edit stories that I had deliberately introduced factual (Ron DeSantis is governor of Miami), spelling, and grammar errors into. For good measure, I threw in some internal inconsistencies (a fall in birth rates in China will lead to an increase in population.)
It aced the test. It fixed DeSantis’ title, it corrected the inconsistencies, it made the stories read more smoothly. It even gave an explanation of all the changes it made, and why.
To be sure, this wasn’t top-of-the-line, prize-winning editing of an 8,000-word investigative project. It was acceptable, middling, competent copy editing — and let’s face it, that’s what 80% of what copy editing in most newsrooms is. (Don’t kill me, copy deskers! I love you all!)
I’m not suggesting that Claude should be unleashed on stories unsupervised; but if it could do a first edit on most of the copy in a newsroom — especially those where the staff are writing in a language which isn’t their mother tongue — it could offer material improvements in quality and efficiency.
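The copy-editing test described above can be reproduced against a chat API. Below is a minimal sketch, assuming the Anthropic Python SDK with an API key in the environment; the model name and prompt wording are illustrative assumptions rather than details from the article.

```python
import anthropic

# The client reads ANTHROPIC_API_KEY from the environment.
client = anthropic.Anthropic()

# A draft seeded with the article's example errors: a wrong title and an
# internal inconsistency.
draft = (
    "Ron DeSantis, the governor of Miami, said a fall in birth rates in China "
    "will lead to an increase in population."
)

response = client.messages.create(
    model="claude-3-haiku-20240307",  # placeholder model name (assumption)
    max_tokens=1024,
    messages=[{
        "role": "user",
        "content": (
            "Copy edit the following story. Fix factual, spelling and grammar errors "
            "and any internal inconsistencies, then list the changes you made and why:\n\n"
            + draft
        ),
    }],
)

print(response.content[0].text)
```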
| 2023-02-19T00:00:00 |
https://www.semafor.com/article/02/17/2023/how-chatbots-can-change-journalism-or-not
|
[
{
"date": "2023/02/19",
"position": 25,
"query": "AI journalism"
}
] |
|
Red AI: Victories and Warnings From China's Rise In ...
|
Amazon.com
|
https://www.amazon.com
|
[] |
Author Nina Xiang, a veteran China-based tech journalist who has tracked the Chinese AI industry for years, interviewed hundreds of people and traveled to ...
|
| 2023-02-19T00:00:00 |
https://www.amazon.com/Red-AI-Victories-Artificial-Intelligence/dp/1076774865
|
[
{
"date": "2023/02/19",
"position": 73,
"query": "AI journalism"
}
] |
|
Trust in Artificial Intelligence: A global study
|
Trust in Artificial Intelligence: A global study
|
https://ai.uq.edu.au
|
[] |
... AI, the use of AI at work, and understanding of AI ...
|
The University of Queensland and KPMG partnered on a global study to understand trust and attitudes towards Artificial Intelligence.
This research examines the public’s trust and attitudes towards AI use, and expectations of AI management and governance, across 17 countries. The report provides timely, comprehensive global insights into the public’s trust and acceptance of AI systems, including: who is trusted to develop, use, and govern AI, the perceived benefits and risks of AI use, community expectations of the development, regulation, and governance of AI, and how organisations can support trust in their AI use. It also sheds light on how people feel about the use of AI at work, public understanding and awareness of AI, the key drivers of trust in AI systems, and how trust and attitudes to AI have changed over time.
Collectively, the survey insights provide evidence-based pathways for strengthening the trustworthy and responsible use of AI systems, and the trusted adoption of AI in society. These insights are relevant for informing responsible AI strategy, practice and policy within business, government, and NGOs, as well as informing AI guidelines, standards and policy at the international and pan-governmental level.
A clear pattern across the data is the stark difference across countries in people’s trust, attitudes and reported use of AI: people in western countries are more wary of AI, and less convinced that the benefits outweigh the risks, than those in the emerging economies (i.e. Brazil, India, China, and South Africa). Younger generations, the university educated, and those in managerial roles are also more trusting and embracing of AI.
The extensive findings are available in the Full Report with highlights presented below and in the Executive Summary.
Individual Country Insights are also available in a standalone report, summarising the highlights for each of the 17 countries included in the full report.
Key Findings
To what extent do people trust AI systems?
Three out of five people (61%) are either ambivalent or unwilling to trust AI. However, trust and acceptance depend on the AI application. For example, AI use in healthcare is more trusted than AI use for Human Resource purposes. People tend to have faith in the capability and helpfulness of AI systems, but are more sceptical of their safety, security, and fairness. Many people feel ambivalent about the use of AI, reporting optimism and excitement, coupled with fear and worry.
How do people perceive the benefits and risks of AI?
Most people (85%) believe AI will deliver a range of benefits, but only half believe the benefits of AI outweigh the risks. Three out of four people (73%) are concerned about the risks associated with AI, with cyber security rated as the top risk globally. Other risks of concern to the majority include loss of privacy, manipulation and harmful use, job loss and deskilling (especially in India and South Africa), system failure (particularly in Japan), erosion of human rights, inaccurate outcomes and bias.
Who is trusted to develop, use, and govern AI?
People have the most confidence in their national universities, research institutions and defence organisations to develop, use and govern AI in the best interests of the public (76-82%). People have the least confidence in governments and commercial organisations, with a third reporting low or no confidence in these entities to develop, use or govern AI. This is problematic given the increasing use of AI by government and business.
What do people expect of AI management, governance, and regulation?
There is strong global endorsement for the principles of trustworthy AI: 97% of people globally view these principles and the practices that underpin them as important for trust. These principles and practices provide a blueprint to organisations on what is required to secure trust in their use of AI. Most people (71%) believe AI regulation is necessary, with a majority believing this to be the case in all countries except India. People expect some form of external, independent oversight, yet only 39% believe current governance, regulations and laws are sufficient to protect people and make AI use safe.
How do people feel about AI at work?
Most people (55%) are comfortable with the use of AI at work to augment and automate tasks and inform managerial decision-making, as long as it is not used for human resource and people management purposes. People actually prefer AI involvement to sole human decision-making, but they want humans to retain control. Except in China and India, most people believe AI will remove more jobs than it creates.
How well do people understand AI?
Most people (82%) have heard of AI, yet about half (49%) are unclear about how and when it is being used. However, most (82%) want to learn more. What’s more, 68% of people report using common AI applications, but 41% are unaware AI is a key component in those applications.
What are the key drivers of trust?
Our modelling demonstrates that trust is central to the acceptance of AI and highlights four pathways to strengthen public trust in AI:
1. An institutional pathway consisting of safeguards, regulations, and laws to make AI use safe, and confidence in government and commercial organisations to develop, use and govern AI.
2. A motivational pathway reflecting the perceived benefits of AI use.
3. An uncertainty reduction pathway reflecting the need to address concerns and risks associated with AI.
4. A knowledge pathway reflecting people’s understanding of AI use and efficacy in using digital technologies.
Of these drivers, the institutional pathway has the strongest influence on trust, followed by the motivational pathway. These pathways hold for all countries surveyed.
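As a purely illustrative aside, the relative weight of pathways like these can be sketched with a toy multiple regression on survey-style data. The variable names, weights and synthetic data below are invented for illustration and are not the study's actual model or dataset.

import numpy as np

# Illustrative only: synthetic survey-style data with four hypothetical
# pathway scores per respondent and a trust score built from them.
rng = np.random.default_rng(0)
n = 1_000  # hypothetical number of respondents

institutional = rng.normal(size=n)
motivational = rng.normal(size=n)
uncertainty = rng.normal(size=n)
knowledge = rng.normal(size=n)

# Assumed weights, chosen so the institutional pathway dominates,
# mirroring the qualitative finding above.
trust = (0.45 * institutional + 0.30 * motivational
         + 0.15 * uncertainty + 0.10 * knowledge
         + rng.normal(scale=0.5, size=n))

# Ordinary least squares with an intercept column.
X = np.column_stack([institutional, motivational, uncertainty, knowledge,
                     np.ones(n)])
coef, *_ = np.linalg.lstsq(X, trust, rcond=None)

for name, b in zip(["institutional", "motivational",
                    "uncertainty", "knowledge"], coef[:4]):
    print(f"{name:<13} estimated weight: {b:+.2f}")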
How have attitudes changed over time?
We examined how attitudes towards AI have changed since 2020 in Australia, the UK, USA, Canada, and Germany. Trust in AI, as well as awareness of AI and its use in common applications, increased in each of these countries. However, there has been no change in the perceived adequacy of regulations, laws and safeguards to protect people from the risks of AI, nor in people’s confidence in entities to develop, use and govern AI.
How does Australia compare with other countries?
Australian attitudes towards AI generally mirrored other Western countries like the UK, Canada, and France, where fear and worry about AI are dominant emotions. Less than half of Australians are comfortable with, and trust the use of, AI at work, and only a minority of Australians believe the benefits of AI outweigh the risks.
There is a gap in perceptions across age and education in Australia, with 42% of Gen X and Millennials trusting AI compared to 25% of older generation Australians. We see similar numbers when comparing the university educated (42%) with those who don’t have a degree (27%).
Finally, people in Australia and Japan had notably lower interest in learning about AI compared to other countries.
See the Individual Country Highlights for the Australian highlights.
How we conducted the research
We surveyed over 17,000 people using nationally representative samples from 17 countries: Australia, Brazil, Canada, China, Estonia, Finland, France, Germany, India, Israel, Japan, the Netherlands, Singapore, South Africa, South Korea, the United Kingdom, and the United States of America. These countries are leaders in AI activity and readiness within each global region.
We asked survey respondents about trust and attitudes towards AI systems in general, as well as AI use in the context of four domains where AI is rapidly being deployed and likely to impact many people: in healthcare, public safety and security, human resources, and consumer recommender applications.
Download Full Report
Download Global Executive Summary
Download Country Insights Report
Cite the report
Gillespie, N., Lockey, S., Curtis, C., Pool, J., & Akbari, A. (2023). Trust in Artificial Intelligence: A Global Study. The University of Queensland and KPMG Australia. 10.14264/00d3c94
Acknowledgements
This research was conducted by The University of Queensland (UQ), in collaboration with KPMG Australia. The UQ team led the design, conduct, analysis, and reporting of this research.
University of Queensland Researchers
Professor Nicole Gillespie, Dr Steve Lockey, Dr Caitlin Curtis and Dr Javad Pool
KPMG Advisors
Dr Ali Akbari, James Mabbott, Rita Fentener van Vlissingen, Jessica Wyndham, and Richard Boele
Funding
This research was supported by an Australian Government Research Support Package grant provided to the UQ AI Collaboratory, and by the KPMG Chair in Organisational Trust grant (ID 2018001776).
| 2021-07-22T00:00:00 |
2021/07/22
|
https://ai.uq.edu.au/project/trust-artificial-intelligence-global-study
|
[
{
"date": "2023/02/19",
"position": 87,
"query": "AI journalism"
}
] |
Removing Bias from Hiring with Artificial Intelligence
|
Removing Bias from Hiring with Artificial Intelligence
|
https://www.pymetrics.ai
|
[] |
It's time to look to powerful tools such as AI to de-bias our technological designs, and allow time-restrained recruiters to assess a broader, more diverse ...
|
Why is the status quo flawed and how can we fix it?
We tend to rely on processes, such as resume reviews, employee-referral programs, and outdated assessments that optimize bias instead of minimizing it.
It's time to look to powerful tools such as AI to de-bias our technological designs, and allow time-restrained recruiters to assess a broader, more diverse pool of applicants.
Employers are legally allowed to use traditional and biased assessments that discriminate against women and/or minorities. If we are going to call for unbiased AI — which we undoubtedly should — we also ought to call for the elimination of all biased traditional assessments for good. Download this white paper to learn more.
| 2023-02-19T00:00:00 |
https://www.pymetrics.ai/resource/removing-bias-from-hiring-with-artificial-intelligence
|
[
{
"date": "2023/02/19",
"position": 22,
"query": "artificial intelligence hiring"
}
] |
|
Multimodal Generative Visual AI Workspace: Visualize ...
|
Multimodal Generative Visual AI Workspace: Visualize, Collaborate, Innovate — Jeda.ai
|
https://www.jeda.ai
|
[] |
All-in-one Multimodal Generative Visual AI Workspace. Transform ideas into stunning visual strategic analysis, data into powerful insights, and documents ...
|
From Wireframes to Codes: AI-Generated Web Design
Venture into the enchanting realm of AI Web Design inside Jeda.ai's Visual AI Workspace and marvel at how it shapes the future of web making. Deftly translate wireframes into functional code, such as HTML, CSS, and JavaScript, effectively turning abstract ideas into tangible, usable reality with GPT-4.1, o1, o3-Mini, Claude 3.5 Sonnet, DeepSeek R1, as well as other AI models. It's a new dawn in the world of web development, where even complex structures are decoded with seamless precision. Experience an effortless coding journey that only accentuates your creative and technical prowess with Jeda.ai Alchemy.
| 2023-02-19T00:00:00 |
https://www.jeda.ai/
|
[
{
"date": "2023/02/19",
"position": 45,
"query": "artificial intelligence graphic design"
}
] |
|
ChatGPT and Generative A.I's impact on our Jobs - AI Supremacy
|
ChatGPT and Generative A.I's impact on our Jobs
|
https://www.ai-supremacy.com
|
[
"Michael Spencer"
] |
ChatGPT is only a few months old and already causing waves in the business world · Media and content jobs: here they include advertising, content ...
|
MirageC/Getty Images
Hey Everyone,
Many of us are beginning to question whether ChatGPT will make our jobs obsolete. I think it’s a valid concern.
Are our own skills replaceable by an A.I., robots or smarter software?
That is, will ChatGPT and Generative A.I. innovations impact our jobs, or even displace some of our skills and disrupt our positions? The reality is that in the decades to come, software, robotics and A.I. will only get smarter.
We used to think of automation as something that displaced lower tiers of manual labor, the grind of repetitive work better left to machines.
But what happens when it comes for knowledge workers across industries in a wave of technological automation that we’ve never seen before for services jobs, clerical positions and even white collar workers - people like accountants, teachers and lawyers?
If you would like access to my best work consider upgrading to support my full-time A.I. coverage.
In this piece I explore the topic of automation, Generative A.I. and robotics and how the future of work may this time really be different for many of us in the coming years and why it matters.
It’s generally accepted that the 21st century will bring some of the most radical technological automation that human civilization has ever witnessed.
Share AI Supremacy
Let’s get into it:
| 2023-02-20T00:00:00 |
https://www.ai-supremacy.com/p/chatgpt-and-generative-ais-impact
|
[
{
"date": "2023/02/20",
"position": 81,
"query": "ChatGPT employment impact"
},
{
"date": "2023/02/20",
"position": 76,
"query": "ChatGPT employment impact"
},
{
"date": "2023/02/20",
"position": 79,
"query": "ChatGPT employment impact"
},
{
"date": "2023/02/20",
"position": 79,
"query": "ChatGPT employment impact"
},
{
"date": "2023/02/20",
"position": 82,
"query": "ChatGPT employment impact"
},
{
"date": "2023/02/20",
"position": 81,
"query": "ChatGPT employment impact"
},
{
"date": "2023/02/20",
"position": 79,
"query": "ChatGPT employment impact"
},
{
"date": "2023/02/20",
"position": 81,
"query": "ChatGPT employment impact"
},
{
"date": "2023/02/20",
"position": 80,
"query": "ChatGPT employment impact"
},
{
"date": "2023/02/20",
"position": 64,
"query": "ChatGPT employment impact"
},
{
"date": "2023/02/20",
"position": 64,
"query": "ChatGPT employment impact"
}
] |
|
Navigating the AI Job Market: Insights into the Future of Work
|
Navigating the AI Job Market: Insights into the Future of Work
|
https://www.fisclouds.com
|
[] |
While AI is affecting almost every industry and function, some roles and specializations are more in-demand than others.
|
Artificial intelligence (AI) promises to deliver some of this century’s most significant and disruptive innovations. Currently, conversational AI helps answer questions more precisely by understanding intent and context. In the near future, self-driving cars, robotic assistants and automated disease diagnosis, all products of an emerging AI revolution, will reshape how we live and work.
Research says that the demand for talented engineers has more than doubled in the last few years, and there are limitless opportunities for professionals who want to work on the cutting edge of AI research and development.
While jobs designing and improving AI applications are growing, some analysts predict these efforts will significantly disrupt economic activity. This is because AI systems can process volumes of data far beyond what human beings can handle on their own.
However, even though many new technologies are predicted to replace human roles, there will always be new jobs and professions needed to support them. While AI is affecting almost every industry and function, some roles and specializations are more in demand than others. Computers were once predicted to replace workers, yet they have given rise to many new jobs; the same is proving true of AI. Job seekers should research and identify the high-demand AI roles and specializations that match their interests and strengths.
AI consists of several overlapping disciplines. Understanding statistical methods, for example, is just as important as a background in computer science. In addition to the subjects listed here, it can be helpful to take interdisciplinary courses in areas like cognitive science to provide a conceptual framework for AI applications.
With such a wide range of skills and courses to acquire, AI jobs look promising for the future. While technical skills are essential for AI jobs, employers also value soft skills such as communication, teamwork, creativity, and critical thinking. Moreover, as AI is increasingly being designed to interact with and serve humans, having a human-centered mindset and ethical awareness is becoming more important. Here are some future jobs that are influenced by AI/ML:
Augmented Copywriter
A copywriter who uses AI to generate content ideas, find information and write copy. Key skills include prompting (talking to the AI), extracting the information the AI provides, editing to improve the copy’s tone of voice, copywriting, and psychological principles.
Chatbot Product Manager
A product manager in charge of building, improving, and overseeing the work of the company’s chatbot. The role requires customer knowledge, AI product comprehension, and the ability to transform insights into AI messages. Although a chatbot might replace customer support agents, businesses still need to hire a product manager to take care of it.
Big Data Architect
A big data engineer is a professional in charge of developing, constructing, evaluating, and maintaining complex data processing systems and algorithms that work with large databases. Qualifications to become a Big Data Engineer include expertise in data mining, data integration, and data visualization, along with practiced skills in Java, Python, and C++. Having in-depth knowledge of machine learning algorithms, neural networks, and deep learning is extremely valuable.
Algorithm Engineer
Algorithm engineers are the ones who design, analyze, implement, optimize, and integrate existing algorithms that offer real-time solutions when installed into software or hardware. You need to hold a master’s or doctorate in computer science-related fields and be fluent in programming languages such as C++ and Java.
AI-Marketing Ops
A hybrid marketer and engineer in charge of implementing the organization’s new AI projects, with skills in AI engineering, project management, and marketing. Top executives will need experts to implement AI ideas and projects in real life, and key people to plan and organize the systems that rely on AI. This is more than a job; it’s a leading role that will future-proof your career.
NLP Engineer
Natural Language Processing (NLP) engineers are AI professionals who specialize in human language, including spoken and written information. The engineers who work on voice assistants, speech recognition, document processing, etc., use NLP technology. For the role of an NLP engineer, organizations expect a specialized degree in computational linguistics. They might also be willing to consider applicants with a qualification in computer science, mathematics, or statistics.
IoT Engineer
A professional who designs, develops and maintains the hardware and software systems that enable devices to connect and communicate with each other through the internet. They work on various aspects of IoT technology, such as embedded systems, wireless communication, cloud computing, and data analytics, to create efficient and secure IoT solutions. The IoT engineer’s responsibilities may include programming, testing, troubleshooting, and integrating different components of the IoT system.
Hybrid Manager
A professional who oversees the collaboration and interaction between AI systems and human workers in a workplace setting. This role involves identifying opportunities where AI can be effectively integrated into human workflows and managing the implementation and deployment of AI systems. They also are responsible for ensuring that AI systems comply with ethical and legal standards, such as privacy, data protection, and fairness.
As AI is increasingly applied to knowledge work, a significant shift will likely take place in the workplace, affecting many jobs in the Western middle class. Contrary to recent dire predictions about AI’s effect on employment, our survey suggests cautious optimism. Most respondents, for example, do not expect that AI will lead to a reduction of jobs at their organization within the next five years. Nearly 70% also said they are not fearful that AI will automate their own jobs. By a similar margin, respondents hope that AI will take over some of their presumably boring and unpleasant current tasks. However, respondents overwhelmingly agree that AI will require employees to learn new skills within the next five years and augment their existing skills. (See Figure 12.) Taken together, these portend adjustment, not annihilation. “Even with rapid advances,” says Erik Brynjolfsson, Schussel Family Professor at the MIT Sloan School of Management, “AI won’t be able to replace most jobs anytime soon. But in almost every industry, people using AI are starting to replace people who don’t use AI, and that trend will only accelerate.”
| 2023-02-20T00:00:00 |
2023/02/20
|
https://www.fisclouds.com/navigating-the-ai-job-market-insights-into-the-future-of-work-9570/
|
[
{
"date": "2023/02/20",
"position": 6,
"query": "machine learning job market"
}
] |
AI in Manufacturing: Applications & Impact | ATS
|
AI in Manufacturing: Applications & Impact
|
https://www.advancedtech.com
|
[] |
Workplace safety: AI boosts workplace safety in several ways. With collaborative robotic technology, automated equipment and human workers can work side by ...
|
The role of artificial intelligence in manufacturing is continually growing today, based on a number of factors, including:
Increased availability of technology
Ever tighter profit margins
The need for manufacturers to remain competitive in their space
AI in manufacturing yields a broad range of benefits, which we will discuss throughout this article in greater depth. More manufacturers than ever are drawing upon AI in production and other areas of the manufacturing process, taking advantage of technology that is more accessible than ever while understanding the need to keep pace with competitors and innovators who are setting today’s bar with the adoption of AI tools.
Read on to see how the impact of AI in manufacturing industry applications is changing the face of the sector and yielding vast productivity and bottom-line benefits for manufacturing organizations.
Trends with AI in the manufacturing industry
AI in manufacturing sector applications is growing at a rapid pace, bolstered by the dropping cost of AI technology implementation and the increasing realization among manufacturers that artificial intelligence is one of the most effective ways to increase productivity and retain a competitive edge. In fact, according to an MIT study conducted by the Sloan School of Management, 26% of companies are using AI in widespread production — more than doubling the previous year’s 12% figure. And perhaps most importantly, 92% of large manufacturers have seen a positive ROI on their data and AI investments.
While it is always difficult to be an “early adopter” for technology, these figures prove that AI in manufacturing is likely here to stay, thanks to the significant cost and productivity benefits that it is providing for the businesses that have made the investment.
Uses of AI in manufacturing
The role of AI in manufacturing extends to numerous areas and applications. These include:
Machine maintenance: AI can make a major impact on maintenance processes throughout the organization, and AI in predictive maintenance is likely the area that can yield the greatest benefit. Condition Monitoring — the practice of drawing upon data from maintenance sensors in order to identify potential problem states for equipment as early as possible — has seen immense impacts from AI, analyzing data in innovative ways to not only detect potential issues even earlier, but also to help identify new troubleshooting scenarios and metrics. A minimal illustration of this idea appears after this list.
Quality control: AI in factories is not limited to the production process itself. Artificial intelligence can also help during the quality control process, casting a consistent and untiring virtual “eye” over every piece of output and increasing the rate of defect detection.
Workplace safety: AI boosts workplace safety in several ways. With collaborative robotic technology, automated equipment and human workers can work side by side safely, with AI able to detect and address potential safety risks. AI also aids in safety by improving training processes and, through augmented reality, allowing new hires to gain the equivalent of hands-on experience without risking either personal safety or production efficiency.
Machine vision: With AI, machine vision processes — sorting, inspection, security and QC — can be increasingly automated and can become increasingly accurate. AI can support human workers in increasing the speed of these processes and helping to call out potential issues for further investigation, all while improving throughput and quality.
Inventory management: Inventory management is one of the areas of manufacturing that is most ripe for digital transformation, and artificial intelligence is making a major impact here. AI can aid in industrial asset management, applying automation to sourcing, supply chain management, storeroom management and other key areas of inventory that can make a significant impact on spending.
Cybersecurity: With the advent of the connected facility, cybersecurity has become a major concern. Artificial intelligence helps maintain a secure network implementation by identifying potential vulnerabilities or attacks before they can bloom into more significant issues, drawing on historical data and real-time network monitoring.
Robotics: AI helps power today’s advanced automation systems. Artificial intelligence can help robotic systems become even more efficient by identifying potential bottlenecks and adjusting automatically to prevent slowdowns. AI also can collect information from robotic equipment in real time and make suggestions for improving performance.
Factory Automation: By taking a close look at available data, AI can develop plans for improvements throughout a production facility. This can save a significant amount of time because AI can process information much faster and more efficiently, giving people more time to concentrate on other areas.
Product Design: The use of AI in developing new products means manufacturers can test their designs in a virtual environment before creating a prototype. AI simulations can provide valuable information about the feasibility of a design without taking up as much time as a real-world test. It also may give design teams the opportunity to discover numerous design options they may not have been able to think of otherwise. In more-advanced applications, designers may be able to input materials, measurements and other basic parameters for a product and return multiple iterations of that product to improve upon the original concept.
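To make the condition monitoring idea above concrete, here is a deliberately simple sketch that flags sensor readings drifting away from their recent rolling baseline. The sensor data, window size and threshold are hypothetical; production systems typically combine many signals and use far richer models than this.

import numpy as np

# Illustrative only: flag vibration readings that drift well above their
# recent rolling baseline, a crude stand-in for condition monitoring.
rng = np.random.default_rng(1)
vibration = rng.normal(loc=1.0, scale=0.05, size=500)  # healthy baseline
vibration[450:] += np.linspace(0.0, 0.6, 50)           # simulated bearing wear

window, threshold = 50, 3.0  # assumed tuning parameters
alerts = []
for t in range(window, len(vibration)):
    recent = vibration[t - window:t]
    z_score = (vibration[t] - recent.mean()) / (recent.std() + 1e-9)
    if z_score > threshold:
        alerts.append(t)

print(f"first anomaly flagged at sample {alerts[0]}" if alerts
      else "no anomalies flagged")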
Benefits of AI in manufacturing
AI in manufacturing offers numerous benefits, including:
Data collection & data-informed decisions
Connected facilities collect vast amounts of data throughout every cycle, every day. AI in factories helps to make sense of this data and facilitate informed, data-driven decisions that can make a major impact on productivity, uptime and the bottom line. In conjunction with data scientists and other personnel who are equipped to understand the data provided by industrial sensors, AI can help make quicker and more effective decisions.
Increased production
Some of the primary benefits of AI in manufacturing include more effective maintenance, improved decision-making and, ultimately, more uptime. With equipment running at full capacity more often, production throughput will naturally increase, yielding higher output and better quality.
Optimize process effectiveness
Supported by the data collected from industrial sensors, AI helps to eliminate unplanned downtime and optimize process effectiveness. With AI examining equipment performance data, not only are impending maintenance issues detected, but potential inefficiencies are identified as well. This helps to fine-tune equipment to ensure optimal operation and maximum output at peak quality.
Reduce operational costs
AI in manufacturing and maintenance boosts efficiency and reduces costs in numerous ways. With AI-assisted data analysis, maintenance technicians can make more accurate assessments more quickly than before, helping to reduce or eliminate downtime, keep equipment running at peak performance, and get equipment back online more quickly if a shutdown does occur. AI can also ease QC resource burdens and other areas.
Safer facilities
Aside from the obvious impacts on employee health and well-being, safety issues can also negatively impact workplace morale. AI helps to keep employees out of harm’s way, enabling automation and robotics to take over the most hazardous of the processes in the manufacturing facility. AI and augmented reality can also positively impact safety by creating more effective training processes.
AI implementation process
Before moving forward with AI implementation, there are considerations and steps that are required along the way in order to have AI work in harmony with the production side of the facility. The following basic steps will help you understand how to get started with an AI strategy:
Diagnosis: While AI has an extremely extensive and diverse list of applications, it is far from a one-size-fits-all solution. Furthermore, while AI can certainly impact nearly every area of production and operations, in an initial implementation, you should target a few — or even just one — high-impact areas. This stage involves identifying the areas that could most benefit from an AI implementation.
Create a working proof of concept: In this stage, AI technicians will design an AI system and create a working proof of concept for implementation.
Pilot system: Piloting the system involves preliminary testing, ensuring that the system operates as expected and does not create any unexpected problems with production. Once these high-level tests are completed, you are ready to move to the next stage.
Co-create: In this step, technicians are making improvements in tandem with the AI, optimizing and tweaking parameters as needed.
Self-create: This stage enables the AI to work independently, drawing on data and parameters to improve and fine-tune on its own. The intent at this stage is that technicians will monitor the progress of the AI but will have little to no input as it continues to develop and improve.
Challenges of AI implementation
AI implementation does present several challenges before realizing the benefits described above. These include:
Cost of adoption: While technology costs continue to lower, AI systems still entail a significant investment to get up and running. Even in the face of proven positive ROI, the cost of entry can prove to be a barrier for some organizations.
Need for skilled experts: In addition to hardware and infrastructure, AI also requires skilled technology experts to build and operate systems, analyze data and ensure that the implementation is operating as intended. While AI can introduce massive efficiencies in operations, it still needs skilled people to run it.
Data quality and system obsolescence: In order to operate effectively, AI needs up-to-date sensors and system software. Even if organizations have begun to implement industrial sensors, they may need to update hardware or software for compatibility with AI technology.
AI expectations
It is important to establish proper expectations for an AI implementation, especially in light of the significant time and resource investment required to get a system up and running. Most important is to view AI as a long-term solution, where the proper foundation can be iterated upon to yield massive benefits over forthcoming years. While AI adoption is on the rise, it is still a relatively new technology, and one for which the full potential is still being developed.
ATS is a leading maintenance technology implementation partner, delivering decades of experience in helping manufacturers stay on the cutting edge of cost-saving maintenance tactics. Our services and expertise cover industrial maintenance solutions, along with industrial technology focused on condition monitoring for reliability excellence. To learn how we can help apply our results-focused strategy to your operations, contact us today.
| 2023-02-20T00:00:00 |
https://www.advancedtech.com/blog/ai-in-manufacturing/
|
[
{
"date": "2023/02/20",
"position": 48,
"query": "workplace AI adoption"
}
] |
|
Market Analysis, Sustainable Growth Insights 2024-2031
|
Market Analysis, Sustainable Growth Insights 2024-2031
|
https://www.datamintelligence.com
|
[
"Mansi",
"Datam Intelligence",
"Https",
"Www.Datamintelligence.Com"
] |
The global artificial intelligence market is estimated to reach at a Significant CAGR during the forecast period (2024-2031).
|
Market Overview
The Global Artificial Intelligence Market is estimated to reach at a Significant CAGR during the forecast period (2024-2031).
Artificial intelligence (AI) is the technology that simulates human intelligence in programmed machines so that they can think like humans and mimic their actions. Researchers and developers are making surprisingly rapid strides in mimicking activities such as reasoning, learning and perception, to the extent that these can be concretely defined. The term is also applied to machine programs associated with the human mind, such as learning and problem-solving. The ideal characteristic of AI is the ability to rationalize and take actions that have the best chance of achieving a specific goal.
Market Summary
Metrics and Details:
CAGR: Prominent Growth
Size Available for Years: 2022-2031
Forecast Period: 2024-2031
Data Availability: Value (US$)
Segments Covered: Component, Technology, End-User, and Region
Regions Covered: North America, Europe, Asia-Pacific, South America and Middle East & Africa
Fastest Growing Region: Asia-Pacific
Largest Region: Asia-Pacific
Report Insights Covered: Competitive Landscape Analysis, Company Profile Analysis, Market Size, Share, Growth, Demand, Recent Developments, Mergers and Acquisitions, New Product Launches, Growth Strategies, Revenue Analysis, Porter’s Analysis, Pricing Analysis, Regulatory Analysis, Supply-Chain Analysis and Other Key Insights.
To Know More Insights - Download Sample
Market Growth and Trends
Artificial intelligence (AI) is the concept and development of computer systems that act as human assistants, performing tasks such as language translation, speech recognition, decision-making and visual perception. AI uses algorithms to understand speech, process information and visually recognize objects. The algorithms are used for data processing, automated reasoning and calculation. Demand for artificial intelligence is increasing because of the remarkable transformations it enables across these verticals through reductions in overall costs and enhancements in productivity. The continued focus on improving energy monitoring and management and gaining better control of the energy generation and distribution networks is expected to boost market growth over the forecast period.
The rising demand for personalization and growing need for automation supports the growth of the global artificial intelligence market
The software sector includes companies that manufacture hardware components. The rising demand for personalization and the growing need for automation are two key factors that support the growth of the global artificial intelligence (AI) market. AI can be used to personalize the customer experience by providing personalized recommendations and real-time marketing campaigns. By analyzing customer data and behavior, AI can help businesses to understand their customers better and create personalized products, services and marketing campaigns that are more likely to resonate with them. The respective factor is particularly important in industries such as e-commerce and retail, where personalization can help to increase sales and customer loyalty.
The need for automation is increasing in various industries, such as manufacturing, logistics and transportation, as it can help to reduce costs, improve efficiency and increase production. AI can automate repetitive and complex tasks, such as data entry, scheduling and inventory management. By automating these tasks, AI can help businesses to reduce costs, increase efficiency and improve overall productivity.
High initial investment hampers the market growth
The initial investment can be a major obstacle to the growth of the global artificial intelligence (AI) market, particularly for small and medium-sized businesses. The high costs associated with developing AI systems, including hardware and software, can make it difficult for these businesses to adopt AI technology. Developing AI systems requires significant investment in hardware and software, including servers, storage and software development tools. These costs can be high, particularly for small and medium-sized businesses that may not have the resources to invest in expensive hardware and software.
After the initial investment, AI systems need regular maintenance and upgrades to keep them working optimally and to ensure that they can handle the growing volume of data. This requires additional financial resources and can be a major obstacle for small and medium-sized businesses.
Therefore, the high initial investment required for developing and maintaining AI systems, along with the costs associated with data collection and management and skilled personnel, can make it difficult for small and medium-sized businesses to adopt AI technology, which can hinder the growth of the global AI market.
COVID-19 Impact on Market
The global artificial intelligence market was deeply impacted by the COVID-19 pandemic. The increase in internet usage and application across major industries influenced the market for artificial intelligence. Applications of artificial intelligence grew during the pandemic, including face mask detection systems to identify people not wearing masks, human presence systems to monitor patients remotely, and tools for tracking and early prediction, data dashboards, therapies and cures, diagnosis or prognosis, and maintaining social control. Further, computer solutions and platforms powered by artificial intelligence were deployed to monitor and forecast coronavirus mutation during the pandemic. Therefore, the COVID-19 pandemic positively impacted the artificial intelligence technology market.
Market Segment Analysis
The global artificial intelligence market is segmented into components, technology, end-user and region.
Growing demand for AI-powered cybersecurity and customer service supports the dominance of IT & Telecom in the global AI market
The IT & Telecom end-user segment dominates the global artificial intelligence market due to the high demand for AI-powered solutions in customer service, network management and cybersecurity. These industries rely heavily on technology and have vast data that can be analyzed and used to improve operations and decision-making. IT & Telecom companies also have the resources and expertise to invest in and implement AI solutions.
Market Geographical Share
The global artificial intelligence market is segmented into components, technology, end-user and region.
Growing development related to AI technology in the region supports the Asia-Pacific AI market expansion
The Asia Pacific region dominates the global artificial intelligence market due to several factors. Firstly, the region has a large population and a rapidly growing economy, leading to an increase in the adoption of AI technology in various industries such as healthcare, finance and retail. Government initiatives and investments in AI also fuel market growth. For instance, countries such as China and Japan strongly focus on investing in AI research and development, leading to advanced technologies and solutions development. Furthermore, the presence of major players in the AI industry, such as China's Baidu, Alibaba and Tencent, also contributes to the region's dominance in the market.
Market Companies and Competitive Landscape
The global artificial intelligence market is competitive due to the availability of software that caters to different end-use applications. Some major players contributing to growth in the market are AT&T Inc., Google Inc, Facebook Inc, IBM Corporation, Apple Inc, Intel Corporation, Salesforce.com Inc, Saudi Telecom Company, Ayasdi Inc, Nuance Communication and Digital Reasoning Systems Inc. The major players in the market adopt several growth strategies such as connected vehicle technology, enhanced video services, collaborations and joint ventures to contribute to growth in the industry.
AT&T Inc.
Overview: AT&T Inc. provides technology services, telecommunications and media. AT&T is the successor to the American Bell Telephone Company, and it acquired Time Warner and rebranded it as WarnerMedia. The company offers wireless communications, data, broadband and internet services. AT&T also develops, produces and distributes feature films, television, gaming and content in physical and digital formats. The business offered wholesale services, managed networking, local and long-distance telephone services and telecommunications equipment. The AT&T company was established in October 1983. AT&T is headquartered in Texas, U.S.
Product Portfolio
Artificial Intelligence & Data Science: AI is used to boost productivity and provide consumers with better results through three interconnected missions:
Changing and upgrading AT&T's data platforms, data supply chain and data science ecosystem while utilizing, sharing and catalyzing insights from the company's enormous data repositories.
Democratizing data and AI across AT&T, encouraging wider adoption of data-driven solutions by giving all AT&T business leaders and employees access to powerful data analytics tools.
Through open source and industry groups, academic involvement and workforce investments, one can give back to, help lead and build the larger data and AI community.
Key Developments:
In October 2021, H2O.ai and AT&T agreed to co-develop an Artificial Intelligence Feature Store with industry-first features. The H2O AI Feature Store, already in use at AT&T, offers a repository for collaborating, sharing, reusing and discovering machine learning features to speed up AI project deployments and increase ROI. It is now accessible to any business or organization.
Why Purchase the Report?
Visualize the composition of the global artificial intelligence market segmentation by component, technology, end-user and region, highlighting the critical commercial assets and players.
Identify commercial opportunities in the global artificial intelligence market by analyzing trends and co-development deals.
Excel data sheet with thousands of global artificial intelligence market-level 4/5 segmentation data points.
PDF report with the most relevant analysis cogently put together after exhaustive qualitative interviews and in-depth market study.
Product mapping in Excel for the critical products of all major market players.
The global artificial intelligence market report would provide approximately 61 tables, 62 figures and 190 pages.
Target Audience 2024
| 2023-02-20T00:00:00 |
https://www.datamintelligence.com/research-report/artificial-intelligence-market
|
[
{
"date": "2023/02/20",
"position": 45,
"query": "AI labor market trends"
}
] |
|
List of advocates of universal basic income
|
List of advocates of universal basic income
|
https://en.wikipedia.org
|
[] |
The following is a list of notable individuals who have publicly expressed support or are working for the introduction of a universal basic income (UBI).
|
However, note that Bill Gates said that it is too early to implement a universal basic income in 2017. He said that "even the US isn't rich enough [yet] to allow people not to work", but "over time countries will be [...]". Overall, there is "[...] still [...] a lot of work that should be done [...]". Source:
| 2023-02-20T00:00:00 |
https://en.wikipedia.org/wiki/List_of_advocates_of_universal_basic_income
|
[
{
"date": "2023/02/20",
"position": 6,
"query": "universal basic income AI"
}
] |
|
1 in 4 companies have already replaced workers with ...
|
1 in 4 companies have already replaced workers with ChatGPT
|
https://www.resumebuilder.com
|
[] |
It's possible having ChatGPT experience could help workers retain their jobs. When assessing candidates to hire, 92% of business leaders say having AI/chatbot ...
|
With the emergence of ChatGPT in November 2022, the question of how AI may affect jobs is top of mind for many. The publicly accessible AI chatbot has many capabilities, including the ability to answer questions, create content, write code, and more.
In February, ResumeBuilder.com surveyed 1,000 U.S. business leaders to see how many companies currently use or plan to use ChatGPT.
Key findings:
49% of companies currently use ChatGPT; 30% plan to
48% of companies using ChatGPT say it’s replaced workers
25% of companies using ChatGPT have already saved $75k+
93% of current users say they plan to expand their use of ChatGPT
90% of business leaders say ChatGPT experience is a beneficial skill for job seekers
Half of companies are using ChatGPT
Based on our survey, currently, 49% of companies say they are using ChatGPT, 93% of whom say they plan to expand their use of the chatbot.
Additionally, 30% of companies say they plan to start using it, 85% of whom say they will start doing so within the next 6 months.
Companies use ChatGPT to assist in hiring, writing code, and more
Business leaders say their companies have a variety of uses for ChatGPT.
Of companies that currently use ChatGPT, 66% use it for writing code, while 58% use it for copywriting/content creation, 57% for customer support, and 52% for creating summaries of meetings or documents.
The majority of companies also use ChatGPT to facilitate hiring; 77% say it helps them write job descriptions, 66% draft interview requisitions, and 65% respond to applicants.
Overall, most business leaders are impressed by ChatGPT’s work. Fifty-five percent say the quality of work produced by ChatGPT is ‘excellent,’ while 34% say it’s ‘very good.’
“There is a lot of excitement regarding the use of ChatGPT,” says Chief Career Advisor Stacie Haller.
“In talent acquisition, the more mundane tasks like writing job descriptions, interview questions, and following up with candidates are already being replaced by ChatGPT. Writing code is another area where this technology can provide output, while employees focus on more strategic initiatives”
“Just as technology has evolved and replaced workers over the last several decades, ChatGPT may impact the way we work. As with all new technologies, companies’ use of ChatGPT will be continuously evolving, and we are only at the onset.”
1 in 2 companies have replaced workers with ChatGPT since November
Forty-eight percent of companies have replaced workers with ChatGPT since it became available in November of last year.
In the future, ChatGPT may lead to more layoffs according to business leaders. When asked if ChatGPT will lead to any workers being laid off by the end of 2023, 33% of business leaders say ‘definitely,’ while 26% say ‘probably.’
Within 5 years, 63% of business leaders say ChatGPT will ‘definitely’ (32%) or ‘probably’ (31%) lead to workers being laid off.
For business leaders whose companies haven’t started using ChatGPT but plan to, fewer think layoffs will result. Only 9% say the company’s use of ChatGPT will ‘definitely’ lead to workers being laid off, while 19% say ‘probably.’
“Since this new technology is just ramping up in the workplace, workers surely need to be thinking about how it may affect the responsibilities of their current job,” says Haller. “The results of this survey show that employers are looking to streamline some job responsibilities using ChatGPT.”
1 in 4 companies have saved $75k+ already with ChatGPT
Companies are saving money by using ChatGPT. Overall, 99% of companies using ChatGPT say they’ve saved money.
Forty-eight percent of companies have saved more than $50,000, while 11% have saved over $100,000.
“The economic model for using ChatGPT is also evolving,” says Haller. “It will be interesting to see how this plays out in terms of savings as well as the reorg of certain jobs within the companies.”
9 in 10 business leaders say it’s beneficial for job applicants to have ChatGPT experience
It’s possible having ChatGPT experience could help workers retain their jobs.
When assessing candidates to hire, 92% of business leaders say having AI/chatbot experience is a plus, and 90% say it’s beneficial if the candidate has ChatGPT-specific experience.
“Companies are already expecting candidates to have experience with ChatGPT,” says Haller.
“The use of this technology is certainly something employees and candidates will want to stay current on. Job seekers certainly should add this skill if they have it to their resume, and employers should add it to their list of required or preferred skills if it’s something they expect from candidates. As seen throughout history, as technology evolves, workers’ skills need to evolve and change as well.”
Methodology
All data found within this report derives from a survey commissioned by ResumeBuilder.com and conducted online by survey platform Pollfish on February 15, 2023. In total, 1,000 U.S. business leaders were surveyed.
Appropriate respondents were found through a screening question. They had to answer that their company currently uses or plans to use ChatGPT. Additionally, respondents had to meet demographic criteria, including age (25+), income (50k+), number of employees (2+), employment status, and organizational role.
This survey utilizes a convenience sampling method. For more information contact [email protected].
| 2023-02-20T00:00:00 |
https://www.resumebuilder.com/1-in-4-companies-have-already-replaced-workers-with-chatgpt/
|
[
{
"date": "2023/02/20",
"position": 4,
"query": "ChatGPT employment impact"
},
{
"date": "2023/02/20",
"position": 22,
"query": "AI workers"
},
{
"date": "2023/02/20",
"position": 87,
"query": "artificial intelligence layoffs"
}
] |
|
What role could ChatGPT play in HR and people ...
|
What role could ChatGPT play in HR and people management?
|
https://employmenthero.com
|
[] |
In this blog post, we'll explore the many ways that ChatGPT could help HR professionals improve the employee experience and drive business results.
|
The rise of ChatGPT has been nothing short of phenomenal. OpenAI’s launch of ChatGPT set records for the fastest-growing user base in history, making it the fastest-growing app in the world. It has impressed experts with its writing ability, proficiency at complex tasks and ease of use.
Should we be worried? Are robots coming for our jobs?
As technology continues to evolve, we’re seeing more and more organisations embrace AI and machine learning to transform every aspect of their business. It’s also got immense potential for assisting HR processes. Perhaps unsurprisingly – ChatGPT has the potential to revolutionise how we hire, train, and manage our employees.
In this blog post, we’ll explore the many ways that ChatGPT could help HR professionals improve the employee experience and drive business results. From automating routine tasks to providing personalised support and guidance, we’ll see how ChatGPT could help HR professionals work smarter, not harder.
So, whether you’re an HR professional looking to embrace the future of work, or you’re simply interested in the role that AI could play in HR and people management, let’s dive into the world of ChatGPT.
What is ChatGPT?
ChatGPT is an AI language model developed by OpenAI. It is a pre-trained artificial intelligence model that uses deep learning algorithms to generate human-like text based on a given prompt.
ChatGPT has been trained on a massive amount of diverse text data from the internet and is capable of answering questions, generating creative writing, translating languages, and much more.
ChatGPT is part of OpenAI’s GPT (Generative Pre-trained Transformer) family of language models and is designed to generate text in a conversational style. The model has been fine-tuned to generate human-like responses to a wide range of prompts and can provide information on a variety of topics, including history, science, current events, and more.
In essence, ChatGPT is an AI tool that allows users to have natural language conversations with an AI-powered agent, providing quick and accurate information in real-time.
How does ChatGPT work?
ChatGPT is best described as a language machine – it’s a technique based on machine learning and masses of training data to build sentences word by word, using its knowledge and experience to predict what word should come next to generate human-like responses.
It doesn’t scour the internet for answers like Siri or Alexa – instead it uses a type of deep learning called transformer architecture. The model has been trained on a massive amount of text data, which allows it to understand the patterns and relationships between words, phrases, and sentences. When a user inputs a prompt or question, ChatGPT uses this understanding to generate a response.
ChatGPT is able to respond to a wide range of prompts and questions, and the model’s ability to understand and generate text makes it a powerful tool for natural language processing and conversational AI applications.
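To make the "building sentences word by word" idea tangible, here is a deliberately tiny toy predictor that chooses each next word from counts in a small, made-up corpus. ChatGPT itself uses a transformer network with billions of learned parameters rather than a lookup table, but the word-by-word generation loop is the same basic idea.

# Illustrative only: a toy bigram model that "predicts the next word" from
# counts in a tiny, invented corpus. Not how ChatGPT is implemented, just a
# minimal picture of word-by-word generation.
from collections import Counter, defaultdict

corpus = ("employees ask questions and chatgpt answers questions quickly "
          "and chatgpt answers politely").split()

# Count which word tends to follow which.
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word: str) -> str:
    """Return the most frequent follower of `word`, or a fallback token."""
    candidates = following.get(word)
    return candidates.most_common(1)[0][0] if candidates else "<end>"

# Generate a short sentence word by word.
word, sentence = "chatgpt", ["chatgpt"]
for _ in range(4):
    word = predict_next(word)
    sentence.append(word)
print(" ".join(sentence))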
You can also think of ChatGPT as an engine, and with 175 billion parameters, what it can do is pretty impressive. From passing MBA business exams, to writing poetry and stand-up comedy, to answering complex questions on the spot.
But remember, ChatGPT is a programme trained using a transformer algorithm and masses of data. That means it is notoriously rife with bias from anything that exists within the data and was programmed into the algorithm, even if unintentional.
It only finished its training in 2021, so it might not be as up to date as you think. It also hasn’t been programmed to verify its own accuracy or reliability, which can lead to wildly inaccurate results, or just nonsense.
While Google’s Bard perhaps came off worse after providing a wrong answer at its launch event, inaccuracy is still a risk in these tools. They aren’t intelligent, they can’t think for themselves, and they are only as good as their input data.
Even still, we are obsessed. And like many, we are curious about how it will impact the business world.
The impact of ChatGPT on businesses around the world
As a language model developed by OpenAI, ChatGPT has the potential to have a significant impact on businesses around the world. With such impressive functionality and speed in processing complex data, here are just a few ways in which ChatGPT could transform various sectors and industries…
For a comprehensive look at AI’s role in transforming HR, discover the latest AI use cases and trends.
Customer service
Chatbot anybody? This is where ChatGPT is extremely strong as it can understand intent in a question and provide very astute answers. This means it can be used to automate customer service by providing 24/7 support and instant human-like responses to customer questions, which can speed up the resolution of customer complaints, improve customer satisfaction and reduce costs for businesses.
It can even hold a conversation, entertaining and amusing your customers. Because of ChatGPT’s large language model, some reports estimate that it may be possible to improve customer service by about 70% because of its ability to answer complex questions that aren’t straightforward.
Sales and marketing
ChatGPT also has a lot to offer sales and marketing teams. It can be used to generate personalised content tailored to a business, such as product descriptions and email marketing campaigns, to improve engagement and conversions. You can also use it in lead generation to generate outbound templates, or text for inbound lead generation.
Thought that was all? It also excels at creating sales scripts, email outreach templates and follow up templates. Have we mentioned you can also use it to qualify leads by asking specific, targeted questions to potential customers?
It’s fair to say that ChatGPT is powerful at all levels of the sales funnel – there really is no limit.
Training and learning
Because of its ability to break down complex information into more digestible, easy to understand snippets, ChatGPT can be used to provide students and teachers alike with answers to their questions on the spot.
ChatGPT can assist with generating ideas and text for writing assignments, which may be particularly helpful for individuals with dyslexia, dysgraphia, or other conditions that affect writing abilities. ChatGPT could also have a significant role in Learning Management Systems, helping to configure and deliver hyper-personalised tailored learning pathways for employees.
HR
The rise of generative AI like ChatGPT can provide seemingly infinite support.
ChatGPT can be used to automate HR processes, such as streamlining recruitment and employee onboarding, to improve efficiency and save time. From screening candidates, to onboarding candidates, answering HR queries, and providing objective guidance and advice on all sorts of topics, ChatGPT will help to settle disputes, solve problems and make decisions.
For a deeper understanding of how to integrate AI while maintaining the human touch, read about how to keep the human in Human Resources and payroll with AI.
It’s still early days so it is hard to conceive just how much ChatGPT will transform a wide range of industries by automating tasks, improving efficiency, and providing instant answers to common questions. However, before we get carried away, we all need to remember that while ChatGPT is a powerful tool, it’s only one part of the puzzle.
What are the benefits of using ChatGPT in HR?
ChatGPT has the potential to offer several benefits when used to support HR. Here are the top three benefits we can see straight away.
Saves time
ChatGPT can automate repetitive HR tasks, such as answering employee questions, processing leave requests and scheduling interviews, freeing up HR staff to focus on more strategic tasks. Of course, good HR software also makes light work of these tasks while keeping a helpful audit trail, making ChatGPT an excellent complement to your HRIS.
Improved efficiency
ChatGPT can provide instant answers to common HR-related questions, reducing the time and effort it takes for employees to get the information they need. Can’t find what you’re looking for? Ask ChatGPT to locate it. Can’t think of an answer to a tricky question from a co-worker? Get ChatGPT on the case.
Hyper personalisation
ChatGPT can be used to provide personalised HR-related recommendations and information, such as job opportunities and career development advice, to employees based on their individual needs and interests.
How could HR professionals use ChatGPT in recruitment?
Recruitment is a time-consuming and costly process that requires a lot from your HR team, from extensive research, to thorough screening, and tremendous people skills in order to bring in high-quality hires. When you add everything up, a new hire’s average cost can be between $3,500 and $5,000.
So when it comes to recruitment, ChatGPT could help streamline and improve recruitment in a number of interesting ways. But we reckon Employment Hero still has an edge in the competition.
1. Chatbots for job applications
ChatGPT can be used to create chatbots that can answer common questions from job applicants such as information about the company, the role, and the application process. This can save HR professionals time as well as improve the candidate experience, which is crucial to your employee value proposition.
2. Screening candidates
ChatGPT could also be used to automate initial candidate screening by asking pre-screening questions and providing instant feedback to both the hiring team and to applicants. This can improve the efficiency of the recruitment process and help HR professionals identify the most suitable candidates more quickly. That being said, a hiring professional, human of course, should always oversee HR processes and should always have the final word! A small sketch of this kind of pre-screening appears after this list.
For more insights into AI’s role in recruitment, read about the strategies, ethics, and limitations of AI in recruitment.
3. Scheduling interviews
Can ChatGPT handle a calendar? Yes it can. There is nothing this tool can’t do. ChatGPT can be used to manage calendars, schedule interviews with candidates and provide reminders and follow-up communications. This can improve the efficiency of the interview process and reduce the likelihood of scheduling conflicts.
4. Providing interview questions and feedback
ChatGPT can be used to develop insightful interview questions, analyse answers, and provide interview feedback to candidates in real-time. This can improve the candidate experience and help HR professionals identify areas for improvement in the recruitment process.
5. Lead interviews and make hiring decisions
Ok, here is where we draw the line! We still want humans to have the final say in our interview process. While ChatGPT’s functionality and talents are seemingly endless, we’re still a little uncomfortable about the idea of handing the reins over to ChatGPT, even if it reckons it can sift out top talent and make the best hire.
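As a concrete illustration of the pre-screening idea in point 2, the sketch below scores candidates against a few hypothetical knockout questions and routes the rest to a human. The questions, candidate data and criteria are invented for illustration; as noted above, a person should always make the final call.

# Illustrative only: apply hypothetical knockout criteria to candidate
# answers and flag who goes forward to a human recruiter.
prescreen_criteria = {
    "requires_work_rights": True,  # assumed knockout criteria
    "min_years_experience": 2,
}

candidates = [
    {"name": "Candidate A", "has_work_rights": True, "years_experience": 4},
    {"name": "Candidate B", "has_work_rights": True, "years_experience": 1},
    {"name": "Candidate C", "has_work_rights": False, "years_experience": 6},
]

def passes_prescreen(candidate: dict) -> bool:
    """Return True only if every knockout criterion is met."""
    return (candidate["has_work_rights"] == prescreen_criteria["requires_work_rights"]
            and candidate["years_experience"]
            >= prescreen_criteria["min_years_experience"])

for c in candidates:
    status = "forward to recruiter" if passes_prescreen(c) else "send polite decline"
    print(f"{c['name']}: {status}")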
What about using ChatGPT to improve other HR processes?
ChatGPT can be used to improve several other HR processes beyond recruitment. Here are a few examples worth mentioning.
Employee onboarding
Employee onboarding has come a long way with cloud-based tools, but ChatGPT could take it to the next level. The onboarding process could become even more dynamic, creating a tangible onboarding experience that the new hire can meaningfully interact and converse with to create a personalised onboarding journey.
Employee Assistance Programmes
ChatGPT could be used to take employee assistance programmes (EAPs) to a whole new realm. Rather than being static, hard to reach resources, ChatGPT could create accessible EAPs, providing employees with instant access to HR support and information and connecting employees with tailored mental health resources, work-life balance programmes, and information on relevant employee benefits.
Performance management
If you’ve ever felt stuck or anxious trying to navigate a performance management conversation, those days could be behind you. We know how important it is to make performance management stress-free and painless, but this could get even slicker. With ChatGPT, employers could automate parts of the performance management process by providing employees with feedback and performance-tracking information in real time.
Learning and development (L&D)
It all comes back to hyper-personalisation. ChatGPT can be used to provide employees with personalised learning and development recommendations and information, such as training opportunities, career development advice and ideas for career progression.
Because ChatGPT automates more menial tasks, what once took hours or days now takes a matter of seconds. This frees employees up to spend more time learning and growing, doing more of the things that ChatGPT cannot – deep thinking, developing future-facing ideas, and highly creative processes. Yes, AI will be integrated into L&D, but it will also free up your employees to do more L&D themselves, resulting in stronger, more innovative teams. If you want to take your learning and development game to the next level, check out our learning management system (LMS).
Are there any potential challenges to using ChatGPT in HR?
Of course, relying on AI can make a lot of people uncomfortable – and with good reason. Though it’s an excellent tool, there are definitely some challenges to consider when using ChatGPT in HR.
Data Privacy
ChatGPT works by processing large amounts of data, and HR professionals will need to ensure that this data is stored and processed securely to avoid data breaches and privacy violations. Blockchain technology is one safeguard that is on the horizon to help with this.
Bias and fairness
One challenge that immediately springs to mind is the risk of unconscious bias in the algorithms matching job seekers to positions. ChatGPT is only as good as the data it has been trained on, and if this data contains biases, then ChatGPT could propagate these biases in its outputs. HR professionals will need to be mindful of this and steps should be taken to reduce bias in the recruitment process, as well as data that developers use to train ChatGPT.
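One practical, if crude, way to keep an eye on this is to monitor selection rates across candidate groups, in the spirit of the 'four-fifths' rule of thumb used in some hiring audits. The sketch below is purely illustrative, uses invented data, and is no substitute for proper fairness testing or legal advice.

```python
# Illustrative only: compare screening pass rates across (lawfully collected) groups.
from collections import defaultdict

# (self-reported group, passed_screening) - invented records
records = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

totals, passes = defaultdict(int), defaultdict(int)
for group, passed in records:
    totals[group] += 1
    passes[group] += int(passed)

rates = {g: passes[g] / totals[g] for g in totals}
highest = max(rates.values())

for group, rate in sorted(rates.items()):
    ratio = rate / highest if highest else 0.0
    flag = "REVIEW" if ratio < 0.8 else "ok"  # four-fifths rule of thumb
    print(f"{group}: pass rate {rate:.0%}, ratio to highest {ratio:.2f} -> {flag}")
```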
Technical challenges
Implementing ChatGPT in HR processes can be technically complex and may require specialised skills and expertise. HR professionals will need to work closely with IT professionals to ensure that ChatGPT is properly integrated into their systems and processes. And now that AI has made its way into the workplace, it might be wise to update your policies. Employees might need to be trained on how to use ChatGPT in ways that align with company policy to ensure everyone is on the same page.
Ethical considerations
ChatGPT has the potential to automate many HR processes, but it’s important to consider the ethical implications of this automation. One common concern is that ChatGPT could potentially lead to job loss and wage stagnation. The risk of job displacement may already be worrying employees – McKinsey estimates that automation will displace between 400 and 800 million jobs by 2030, requiring as many as 375 million people to switch job categories entirely. HR professionals will need to consider these implications and take steps to mitigate them.
Even aside from job losses, if not implemented properly, the rise of ChatGPT in the workplace may reveal – or create – a skills gap, and potentially deepen generational divides. It’s never been more important to invest in upskilling your workforce and manage awareness of generational differences to create inclusive, not exclusive, workplaces.
Limitations of AI
Finally, it bears repeating that ChatGPT is a powerful tool, but it is not a panacea, and there are certain tasks and processes that are still best handled by human beings.
HR professionals will need to consider the limitations of ChatGPT and make sure it is used in a way that complements, rather than replaces, human decision-making. After all, HR is about humans.
For an in-depth analysis of how AI is impacting HR, check out our SuperHuman report.
What are the future trends in AI and HR?
There are several future trends in AI and HR that are expected to have a significant impact on the field.
Hyper-personalisation
We’re already seeing a shift from a one-size-fits-all approach to what experts call a ‘workplace hyper-personalisation’ mindset. AI is expected to play an increasingly important role in providing personalised HR experiences to employees, such as personalised learning and development recommendations, performance feedback, and tailored wellbeing support.
Predictive analytics
As AI develops, we can expect that it will increasingly be used to help HR professionals make better, data-driven decisions by using predictive analytics to identify trends, forecast future workforce needs, and optimise HR processes.
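As a toy example of what 'forecasting workforce needs' can mean in code, the sketch below fits a simple least-squares trend to monthly headcount and projects it forward. Real workforce-planning models are far richer, and the figures here are invented.

```python
# Toy sketch: projecting headcount with a hand-rolled least-squares trend line.
headcount = [112, 115, 117, 121, 124, 126, 131, 134]  # invented monthly figures

n = len(headcount)
xs = list(range(n))
x_mean = sum(xs) / n
y_mean = sum(headcount) / n

# slope (b) and intercept (a) of the least-squares line y = a + b * x
b = sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, headcount)) / sum(
    (x - x_mean) ** 2 for x in xs
)
a = y_mean - b * x_mean

for months_ahead in (3, 6):
    forecast = a + b * (n - 1 + months_ahead)
    print(f"Projected headcount in {months_ahead} months: {forecast:.0f}")
```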
Automation
From employee self service to the use of smart contracts, many HR and administrative tasks are just asking to be automated. AI will speed up and pave the way for automating many routine HR tasks, with the potential to further revolutionise manually intensive processes like recruitment, freeing up HR professionals to focus on more strategic initiatives and improving the overall employee experience.
Employee engagement
Wondering if AI will replace jobs? Actually, rather than replacing people outright, AI tends to deliver the most value when employees work together with it.
AI is expected to play a growing role in employee engagement, by using machine learning algorithms to gather and analyse real-time feedback from employees, and providing personalised engagement recommendations to managers and team leaders. In the meantime, you can learn more about next level employee engagement here.
Diversity, Equity and Inclusion (DEI)
While there have been big headlines about bias in algorithms, as it develops, AI is likely to be a significant help to HR professionals in improving diversity and inclusion in the workplace. In future we can expect AI to help identify and address unconscious biases in HR processes, and provide personalised support and guidance to employees and employers alike. But we still have a role to play to guide it away from seeing people as statistics.
Employee wellbeing
Since the pandemic, there has been a growing demand for AI that can sense, read, and evaluate workers’ emotions.
As it continues to develop, AI is expected to play a growing role in promoting employee wellbeing, by providing instant access to mental health resources, work-life balance programmes, and personalised health recommendations.
It will also have a role in sentiment analysis, continuous listening and feedback models, helping leaders get the best out of their employees and teams.
Streamline your business with human resources technology
The field of HR has evolved significantly over the past 20 years. On the one hand, HR professionals today have access to a wide range of tools and technologies that can help them improve the employee experience and drive business results. From sophisticated HR management software to AI-powered recruitment tools, HR professionals today have more resources at their disposal than ever before.
On the other hand, HR is facing a number of new challenges, such as the increasing demand for work-life balance and wellbeing support, the need to address unconscious biases in HR processes, and the challenge of attracting and retaining top talent in an increasingly competitive global economy.
HR professionals will need to be prepared to adapt to these changes and embrace the opportunities presented by generative AI to improve the overall employee experience and drive business results.
By embracing new technologies and innovative approaches in the workplace, HR professionals can ensure that they are well-positioned to meet the challenges of the future and continue to drive value for their organisations.
Need a hand to find the right solution? We’ve got you covered.
Source: https://employmenthero.com/blog/what-role-could-chatgpt-play-in-hr-and-people-management/ (published 2023/02/20; retrieved for the query "ChatGPT employment impact", position 17)

Artificial Intelligence is booming – but how will it impact your career?
By Sophie Kiderlin, CNBC (https://www.cnbc.com)
Are concerns about AI taking over people's jobs warranted?
Photo: Wang Yukun | Moment | Getty Images
Artificial Intelligence is the latest technology buzz topic thanks to the boom of ChatGPT. The AI-powered chatbot, created by Microsoft-backed OpenAI, has both impressed and shocked with its ability to answer questions, write essays and even argue legal cases. Its capabilities have also put another topic at the forefront of people's minds: if, how and when artificial intelligence might impact their jobs and careers. And while concerns about AI-based technology taking over people's jobs have spiraled, experts say it's not quite that simple.
Replacing or creating jobs?
The short answer to the question of whether AI will replace some jobs is a big, fat "yes." Developments in artificial intelligence mean that technology can achieve more and more, and that will, of course, impact jobs, Steven Miller, professor emeritus of information systems at Singapore Management University, says. "As physical machines, software systems, and combinations of hardware and software get more capable as a result of AI-enablement, it is increasingly possible as well as economically viable to replace a greater share of the portions of the human work of today with machines," he told CNBC Make It.
Certain roles are more vulnerable to this than others, Miller adds — especially ones that are highly repetitive or based on very specific instructions or rules that lay out what needs to be done. On the flip side, tasks that change often and therefore require things like adaptability and flexibility are harder for technology to replace.
Jobs with a strong human element — such as being a therapist — are especially unlikely to be taken over by technology, according to Dimitris Papanikolaou, a professor of finance at the Kellogg School of Management at Northwestern University. "Jobs that emphasize interpersonal skills are much harder to be replaced by an AI," he says.
New jobs on the horizon?
The concerns aren't surprising, says Steve Chase, consulting leader at KPMG U.S. "As with most technological advancements, an initial fear of job loss and displacement among workers is natural." However, it is crucial to keep several points in mind, he adds. First off, similar disruption has happened before: for example, the spread of computers, or more advanced and specialized machines in factories. These changed the way people work and what type of work they do — and they did make some jobs redundant — but today, we can't imagine life without them. This is a centuries-old process, Miller says, adding that history shows if jobs are lost to new technology, other roles have been created in their place. "The creation of new jobs resulting from the ability to create and deliver of new types of goods and services … have far outpaced the number of jobs displaced," he explains.
Working with AI, not against it
Source: https://www.cnbc.com/2023/02/20/artificial-intelligence-is-booming-but-will-it-impact-your-career-and-job.html (published 2023/02/20; retrieved for the query "ChatGPT employment impact", position 22)

Will ChatGPT steal your job? Let’s put it to the test
WoodWing (https://www.woodwing.com)
Whether or not you work in the content industry, every single person and their mother has an opinion on ChatGPT.
If you listen to the hyperbole, copywriters will lose their contracts, journalists will be replaced by bots, and marketing agencies will become obsolete. Since we’re not unfamiliar with writing, publishing and creating our own content, we thought let’s put it to the test. Will ChatGPT be the end of human content creation?
But first...
What is ChatGPT?
ChatGPT is a conversational AI model developed by OpenAI. It is based on transformer architecture and has been trained on a massive dataset of text from the internet, allowing it to generate human-like text. ChatGPT can be used for a variety of natural language processing tasks, including text generation, text classification, and question answering. The "GPT" in ChatGPT stands for "Generative Pre-trained Transformer".
In a conversational setting, ChatGPT can be used to build chatbots that can interact with users and answer questions. The model has been designed to understand context and maintain coherence in long conversations, making it well-suited for building conversational AI applications.
How well suited? The above two paragraphs are written by ChatGPT. You may criticize the writing — a human writer might wonder why the adjective "massive" is used for something non-physical, and the sentence structure tends toward dreadful — but it does answer the question. And in the end, this is just an opinion.
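For the technically curious: the conversational coherence mentioned above largely comes from resending the running message history with every request. A minimal sketch, assuming the pre-1.0 openai Python client, an API key in the environment and an assumed model name, looks like this:

```python
# Sketch: a chatbot keeps context by resending the whole message history each turn.
import openai

history = [{"role": "system", "content": "You are a helpful assistant."}]

def chat(user_message: str) -> str:
    history.append({"role": "user", "content": user_message})
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",   # assumed model name
        messages=history,        # the conversation so far
    )
    reply = response["choices"][0]["message"]["content"]
    history.append({"role": "assistant", "content": reply})
    return reply

print(chat("What is Digital Asset Management?"))
print(chat("And how does that differ from Dropbox?"))  # "that" is resolved from context
```

Because the full history is resent on every turn, very long conversations eventually run into the model's context limit, which is one reason coherence can degrade in marathon sessions.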
GPT falls under a subcategory of machine learning known as "Generative AI". This type of AI examines large amounts of data and trains models that have the ability to produce new items.
For instance, when provided with millions of works of art, a generative-AI program can learn to produce its own art, much like OpenAI's popular creation, DALL-E.
Similarly, when given a substantial amount of computer code, a generative-AI program is capable of generating code of its own.
The Hype of ChatGPT
As seen around any new technology, it is all very exciting. AI has sparked imagination in everyone, eager to learn how it can make things better, faster, and cheaper. Our customers are very interested in the topic, as we discovered during one of our monthly partner webinars.
To help us understand the current hype around AI, it helps to look at this graph from consulting firm Forrester. It lays out the stages for any emerging technology.
A lot of the big tech breakthroughs over the past three decades, like the internet, the smartphone, and cloud computing, have been tracked using this cycle.
Source: Forrester
Let's take the internet, for example. The "technology trigger" was when the World Wide Web was first proposed in '89 - it was interesting, but not many people paid attention to it, and even fewer saw its potential.
Then in the '90s, things got wild - the internet exploded with all sorts of crazy and sometimes far-out ideas, fueled by the dot-com bubble. That's when we hit the top of the awareness hump. Everyone knew about the Internet, was caught up in the buzz, and had inflated expectations of what it could deliver.
But what goes up must come down. The bubble burst, and we found ourselves in the acceptance stage, where development slowly took off. People started to think the internet was just hype and some companies were wiped out.
But here's the thing - the internet didn't go away. It just needed time to mature.
And that's what happened during that last part of the graph. Developers and investors started focusing on practical applications and improving the user experience. Mobile web launched, and before we knew it, the modern internet - which is basically the backbone of our economy - was in full bloom.
Forrester:
Visionaries will dominate dawning phase trends as they drive point inventions to address specific business organizations’ opportunities.
Fast followers will discover the limits of point solutions in the awareness phase and begin to work through them.
Enterprises will shift investment toward integrating capabilities across the customer life cycle in the acceptance phase.
So where's ChatGPT now on this cycle?
Since ChatGPT hit the scene, more people are jumping on the generative-AI bandwagon, and checking out other tools like Lensa AI. And according to the Harvard Business Review, this latest ChatGPT version is a "tipping point" for AI - like, it's finally ready for a whole bunch of new uses.
But before we get too excited. This also means we're inching closer to the top of that ‘investment appetite hump’ full of inflated expectations. And that means - people are starting to see all the ways that ChatGPT falls short.
It’s biased, spreads misinformation and gives harmful instructions. OpenAI is open about these shortcomings itself.
For long-form copy - it lacks all but the most basic understanding of human search intent. So whilst it may be able to answer individual questions adequately, it isn't equipped to create engaging, informative articles at present. Longer AI-generated content feels like a student padding out an essay to get to the minimum word count. There's quantity but no quality.
The process of content creation. Does ChatGPT save time?
Eager as we are, we couldn’t wait to try new ways to streamline our workflows and produce engaging content. We started feeding ChatGPT a few prompts and were amazed by how quickly it was able to generate responses. It was like having a virtual writing assistant at our fingertips.
"Great, let's save on the costs of copywriting, freelancers, journalists, and photographers all together and fully invest in AI," said no qualified leader. But many might think ChatGPT is the answer to a tight budget. Is this true?
Below you will find a number of tests to see if you can spot the AI over human writing and compare quality. Go through the six examples, and count how many you can get right:
TEST 1: What is Digital Asset Management? If you are already aware of what DAM is and how DAM solutions differ from file sharing services, please skip to the next section. If not, read on. There are a number of definitions of "digital asset", but for our purposes, it means anything that exists with a digital format that is of (financial) value to a business or organization, and has usage rights attached to it. This includes rich media such as stock library images, photos, design files, brand logos, presentations, spreadsheets, digital notes, documents and more. As you may guess, Digital Asset Management is a business process for organizing, storing and retrieving these rich media, and managing digital rights and permissions. As mentioned, digital assets can be stored on file storage solutions such as Dropbox, Google Drive, and others. But DAM software has a much broader function. Below is a list of areas where DAM software typically exceeds the functionality of file sharing or network server storage. (ChatGPT or human?)
TEST 2: What is Digital Asset Management? Digital Asset Management (DAM) is a system or process that allows organizations to manage, store, organize, and distribute digital assets, such as images, videos, audio files, and other types of media. The goal of DAM is to make it easier for organizations to find and use their digital assets, while also ensuring their security, accuracy, and consistency.
One of the key features of DAM is centralized storage. Digital assets are stored in a central location, where they can be easily accessed by authorized users. This makes it easier to manage and organize large collections of digital assets, and reduces the risk of loss or duplication.
Another important feature of DAM is metadata management. Metadata is additional information about a digital asset, such as its title, description, keywords, and copyright information. DAM systems allow users to add, edit, and search for metadata, making it easier to find and use digital assets. (ChatGPT or human?)
TEST 3: What is rebranding and what advantages does it bring? You can make rebranding as grand or as small as you’d like. From changing just your logo to completely starting from scratch with your product or service because of a merger or acquisition.
Advantages of a new brand range from improved brand awareness and Google ranking, to increased relevance to existing audiences and of course attracting new ones. What the reason may be, you are left with a considerable amount of digital assets. Both old and new.
How do you manage to stay on top of the rebranding process as efficiently as possible and make the investment worthwhile? (ChatGPT or human?)
TEST 4: What is rebranding and what advantages does it bring? Rebranding is a strategic process that involves altering the brand identity of a company or product. This includes modifying elements such as the name, logo, messaging, and visual elements. Rebranding is usually done for a variety of reasons such as to better reflect the company's mission or values, update an outdated image, or appeal to a new target audience. A successful rebranding effort can bring several advantages to a company.
One of the main benefits of rebranding is an increase in brand recognition. By updating the company's identity, it becomes more memorable and recognizable to consumers. This can lead to increased customer loyalty and positive brand associations. Rebranding can also improve the customer perception of a company. By changing the look and feel of a brand, it can become more appealing, modern, or relevant. This can lead to increased trust and improved engagement with customers. (ChatGPT or human?)
TEST 5: Content pausing power Content has pausing power when it is (what Google calls) “thumb friendly”. The problem is that in order to create high-quality, thumb-friendly content you need time. Especially when thinking about time invested in research, production and possible visual imagery that usually accompanies a good content piece. The result is that big media companies with the right ratio of manpower and paid media budget might well make the cut through the 8 hour daily clutter of content, whilst other, smaller, media companies might not. To reach the “thumb generation”, Google advises media companies to:
Be centered around the consumer
Be bold with quick turnaround times
Be mobile-friendly
But this alone is not enough. (ChatGPT or human?)
TEST 6: Content pausing power "Content pausing power" is a term used to describe the effectiveness of temporarily halting the delivery of content in marketing campaigns. This can refer to pausing or stopping ads, email newsletters, social media posts, or other forms of content. The idea behind content pausing power is that taking a break from a particular piece of content or campaign can actually increase its impact and effectiveness in the long run.
There are several reasons why content pausing power can be effective. First, pausing content can create a sense of scarcity or urgency, making it more appealing to the audience. When content is available all the time, it can become less valuable or less attention-grabbing. By pausing content, companies can create a sense of anticipation and excitement, leading to increased engagement and interest. (ChatGPT or human?)
Not all the steps in content creation can be automated or executed by a bot. Especially during the research phase and the creation phase, ChatGPT might come in as a helpful addition.
Content creation, written blog posts in these examples, is an open-ended cycle that repeats itself. Each step takes time depending on the topic.
Research - SEO, compare, use tools, aim for uniqueness
Ideation - Find a unique point of view
Placement - Decide which channel is used and do external outreach
Creation - Take time to produce quality content
Publication - Decide on layout, supporting images and uploading
Promotion - Use all available and fitting channels and plan ahead
QA - Do not ignore published content; optimize, build backlinks and reuse.
Personally I don't use ChatGPT. It's an interesting development but I still like the human touch and creativity, in every sense of the word. I don't just write text for my customers, but also help them figure out where they stand, what their audiences are looking for and how they can optimize the use of created content to get the desired response.
Aranka Aarsman, Content creator
Now let’s sum up and look at why ChatGPT still can’t replace your content team
As mentioned, OpenAI openly acknowledges ChatGPT's shortcomings. Looking at the test results, and at what it takes to use the bot as an extra tool, our own content creators consider the technology a gadget at most, one that is likely to cost you more money rather than save it.
As Tom Pijsel (VP of product management at WoodWing) advises customers in the company’s Help Center for WoodWing Assets and WoodWing Studio:
“It doesn’t mean these solutions cannot be of great value for a content writer, it can for example assist you with summarizing text, creating social snippets and headlines, help to create interview questions, translate content, gather content, set up FAQ’s or make a draft plan.”
In our experience, this adds time, and thus money, to the quality assurance and research phases of content production instead of saving any. Extra steps are even taken, with extra tools, to check that written content is not ChatGPT-generated. We’ll need a human to take those actions. And we get paid.
The technology behind ChatGPT is nonetheless very helpful and interesting, and it sparks the imagination in all of us. WoodWing will definitely keep an eye on the possibilities these new techniques bring for our own solutions.
This content is written by a human being. Some parts of sentences may be generated with the help of ChatGPT.
Source: https://www.woodwing.com/blog/will-chat-gpt-steal-your-job-test (published 2023/02/20; retrieved for the query "ChatGPT employment impact", position 64)

The Must-Have Skills in the Era of Artificial Intelligence: How AI’s Democratization Will Impact Workers
By Mark McNeilly, UNC Kenan-Flagler Business School, via the Kenan Institute (https://kenaninstitute.unc.edu)
With the advent of ChatGPT, the democratization of AI has begun. By that, I mean AI has moved into a new domain; until now, it has been embedded in mostly unseen applications like Google Search, Facebook and Twitter (all of whom use it to increase engagement), as well as a basis for recommendations on sites such as Amazon and Netflix. Now, however, artificial intelligence will be interacting much more directly with humans—and doing so in a more human manner. Of course, we’ve had Alexa and Siri for a while now, but their abilities are limited to short answers and simple tasks. ChatGPT can carry on deep conversations with you on an unlimited variety of topics.
More importantly, ChatGPT and other programs like it (Google’s Bard, for example) can replicate the work performed across the knowledge worker class, as well as the domain-related tasks within specific knowledge worker professions. This gives us a glimpse of what ever-more-powerful AI tools might be able to do, which is both exciting and to say the least, unsettling.
As illustrated here, ChatGPT can help you do many office tasks, such as writing emails, outlining slide presentations and articles, brainstorming ideas on a variety of subjects, helping with several teaching tasks (e.g., crafting learning outcomes, writing test questions, and grading), and learning about a myriad of topics.[1] Related “generative AI” tools can go even further and write full articles, generate amazing images, write computer code, create videos, offer medical advice, provide therapy, assist in legal talks, and compose music. These abilities impact specific professions such as programmers, graphic designers, doctors, and lawyers.
Will AI Replace Me?
Given these capabilities, AI will definitely change how professional work is done. Will AI replace humans in these cognitive professions, much as machines have replaced humans performing manual labor? When answering this question, there are a few possible scenarios to consider.
AI will not replace all cognitive jobs, but it will substantially change many of them.
Given what AI tools like ChatGPT and others can do, this is going to happen. It will just impact different professions at different rates. Those most impacted will be jobs where generative AI is growing: creative jobs in journalism, ad agencies and design firms; the video and music industries; and IT, medicine and law. AI tools will replace some of the work done, either within those organizations or by consumers outside those companies performing the work themselves (e.g., creating their own images and logos for personal use).
You won’t be replaced by AI. You will be replaced by a person with AI skills.
The more that AI tools can be used to be productive in your job, the more this will be true for you. People who are fluent in generative and other types of AI will be able to get more done, be more creative, and learn faster than their counterparts who are not. It behooves you to become proficient in these tools. Some interesting examples of how professionals are using ChatGPT to be more productive include real estate agents using it to create listings, marketers using it to create video scripts, a designer using it to create a branding guide and a teacher using it to better serve students.
AI will eliminate some jobs in specific professions.
At this point it is too early in the evolution of generative AI to say, but it would seem logical that it will replace some percentage of jobs in the specific professions mentioned in the first scenario above.
AI will create new jobs.
It is unclear how many new jobs generative AI will create, however, we are seeing some already appear, mostly around the ability to write good prompts for the AI. We are seeing this especially for ChatGPT text entry to text output and tools like DALL-E which is text entry to image output. For example, BuzzFeed is asking its employees to use ChatGPT to “write inspired prompts” to create content while a market has been created for people who can write prompts that create the highest quality generated artwork.
What Skills Will I Need in the AI Future?
Knowing that AI will play an ever increasing role in our workplace and economy, some fundamental principles for surviving and thriving in this new world apply.
Building skills that are not easily replicated by AI. This is a little tougher than one might think because AI is being applied in so many arenas. Columnist David Brooks, in an article titled In the Age of A.I., Major in Being Human, believes capabilities such as empathy, creativity, having a unique worldview, and situational awareness will be the key to success. This is consistent with a study done by McKinsey on the impact of AI. Based on research in 15 countries involving 18,000 people, it recommended individuals focus on adding value beyond what AI can offer. The McKinsey study also recommended two other approaches:
Ability to operate in an AI environment. Essentially, this is digital fluency in AI and other digital technologies. Those who know how to be more productive using these tools will surpass their peers.
Resilience and Adaptability. AI and other technologies will not only disrupt the economy and workplace, they will do so at an ever increasing pace. Only those who are flexible will be able to navigate this dynamic environment.
McKinsey’s report states there are 56 “DELTAs” (Distinct Elements of Talent) that are a combination of both skills and attitudes. These fall into four buckets: Self-Leadership, Digital, Interpersonal and Cognitive, as seen in the image below.
Dondi, M., Klier, F., & Schubert, J. (2021, June 25). Defining the skills citizens will need in the future world of work. McKinsey & Company. https://www.mckinsey.com/industries/public-and-social-sector/our-insights/defining-the-skills-citizens-will-need-in-the-future-world-of-work
Out of those 56 DELTAs, they found nine that most highly correlated with increased employment, income, and job satisfaction. These were: self-confidence, coping with uncertainty, ability to synthesize information, synthesizing messages, adaptability, self-motivation and wellness, work-plan development and organizational awareness.
Conclusion
In sum, in the short term, new AI tools will substantially change and in some cases may eliminate many knowledge worker jobs. To be successful in the coming era one must not only become fluent in these tools, but also be a self-confident, self-motivated, mentally strong individual who can deal with uncertainty by being adaptable and resilient, synthesize messages and information, plan and navigate large organizations.
The democratization of AI is here – are you ready?
[1] However, one must be careful. ChatGPT and other tools like it often “hallucinate” 15%-20% of the time, giving incorrect answers or making up sources. One must check the AI output carefully before relying on it.
Source: https://kenaninstitute.unc.edu/commentary/the-must-have-skills-in-the-era-of-artificial-intelligence-how-ais-democratization-will-impact-workers/ (published 2023/02/20; retrieved for the queries "AI workers", position 2, and "artificial intelligence graphic design", position 49)

Empower AI: Company Leadership Team
Empower AI (https://www.empower.ai/leadership-team; retrieved 2023/02/20 for the query "AI business leaders", position 4). A corporate leadership page rather than an article: "Empower AI is powered by a forward-thinking team of builders, leaders, and technology professionals... We are here to improve how the government conducts business by helping its leaders realize the promise of AI."
AI is starting to pick who gets laid off
By Pranshu Verma, The Washington Post (https://www.washingtonpost.com)
Days after mass layoffs trimmed 12,000 jobs at Google, hundreds of former employees flocked to an online chatroom to commiserate about the seemingly erratic way they had suddenly been made redundant. They swapped theories on how management had decided who got cut. Could a “mindless algorithm carefully designed not to violate any laws” have chosen who got the ax, one person wondered in a Discord post The Washington Post could not independently verify.
Google says there was “no algorithm involved” in its job-cut decisions. But former employees are not wrong to wonder, as a fleet of artificial intelligence tools become ingrained in office life. Human resources managers use machine learning software to analyze millions of employment-related data points, churning out recommendations of whom to interview, hire, promote or help retain.
But as Silicon Valley’s fortunes turn, that software is probably dealing with a more daunting task: helping decide who gets cut, according to human resources analysts and workforce experts.
A January survey of 300 human resources leaders at U.S. companies revealed that 98 percent of them say software and algorithms will help them make layoff decisions this year. And as companies lay off large swaths of people — with cuts creeping into the five digits — it’s hard for humans to execute alone.
Big firms, from technology titans to companies that make household goods often use software to find the “right person” for the “right project,” according to Joseph Fuller, a professor at Harvard’s business school who co-leads its Managing the Future of Work initiative.
These products build a “skills inventory,” a powerful database on employees that helps managers identify what kinds of work experiences, certifications and skill sets are associated with high performers for various job titles.
These same tools can help in layoffs. “They suddenly are just being used differently,” Fuller added, “because that’s the place where people have … a real … inventory of skills.”
Human resource companies have taken advantage of the artificial intelligence boom. Companies, such as Eightfold AI, use algorithms to analyze billions of data points scraped from online career profiles and other skills databases, helping recruiters find candidates whose applications might not otherwise surface.
Since the 2008 recession, human resources departments have become “incredibly data driven,” said Brian Westfall, a senior HR analyst at Capterra, a software review site. Turning to algorithms can be particularly comforting for some managers while making tricky decisions such as layoffs, he added.
Many people use software that analyzes performance data. Seventy percent of HR managers in Capterra’s survey said performance was the most important factor when assessing whom to lay off.
Other metrics used to lay people off might be less clear-cut, Westfall said. For instance, HR algorithms can calculate what factors make someone a “flight risk,” and more likely to quit the company.
This raises numerous issues, he said. If an organization has a problem with discrimination, for instance, people of color may leave the company at higher rates, but if the algorithm is not trained to know that, it could consider non-White workers a higher “flight risk,” and suggest more of them for cuts, he added.
“You can kind of see where the snowball gets rolling,” he said, “and all of a sudden, these data points where you don’t know how that data was created or how that data was influenced suddenly lead to poor decisions.”
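To see how that snowball can form, consider a deliberately oversimplified, entirely hypothetical "flight risk" score that leans on one historical feature. If that feature already reflects discrimination, the score quietly reproduces it; the numbers below are invented for illustration.

```python
# Hypothetical illustration of proxy bias in a naive "flight risk" score.
employees = [
    # (group, historical attrition rate observed among similar colleagues) - invented
    ("group_a", 0.10), ("group_a", 0.12), ("group_a", 0.09),
    ("group_b", 0.25), ("group_b", 0.28), ("group_b", 0.24),
]

def flight_risk(peer_attrition: float) -> float:
    # Naive score: simply scale the historical attrition of "similar" employees.
    return min(1.0, peer_attrition * 3)

by_group = {}
for group, attrition in employees:
    by_group.setdefault(group, []).append(flight_risk(attrition))

for group, scores in sorted(by_group.items()):
    print(f"{group}: average flight-risk score {sum(scores) / len(scores):.2f}")
# group_b inherits a much higher score purely from past attrition patterns, which may
# themselves reflect discrimination rather than any individual's intent to leave.
```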
Jeff Schwartz, vice president at Gloat, an HR software company that uses AI, says his company’s software operates like a recommendation engine, similar to how Amazon suggests products, which helps clients figure out whom to interview for open roles.
He doesn’t think Gloat’s clients are using the company’s software to create lists to lay people off. But he acknowledged that HR leaders must be transparent in how they make such decisions, including how extensively algorithms were used.
“It’s a learning moment for us,” he said. “We need to uncover the black boxes. We need to understand which algorithms are working and in which ways, and we need to figure out how the people and algorithms are working together.”
The reliance on software has ignited a debate about the role algorithms should play in stripping people of jobs, and how transparent the employers should be about the reasons behind job loss, labor experts said.
“The danger here is using bad data,” said Westfall, “[and] coming to a decision based on something an algorithm says and just following it blindly.”
But HR organizations have been “overwhelmed since the pandemic” and they’ll continue using software to help ease their workload, said Zack Bombatch, a labor and employment attorney and member of Disrupt HR, an organization that tracks advances in human resources.
Given that, leaders can’t let algorithms solely decide whom to cut, and should review suggestions to ensure they aren’t biased against people based on color, gender or age — which would bring lawsuits, he said.
Source: https://www.washingtonpost.com/technology/2023/02/20/layoff-algorithms/ (published 2023/02/20; retrieved for the queries "AI layoffs" and "artificial intelligence layoffs", both at position 1)
The Future of Writing in a World of Artificial Intelligence #ChatGPT
By John Spencer (https://spencereducation.com)
When writing about local issues for a school newspaper or blog, the AI won't have the local knowledge that a student journalism team demonstrates.
Back in December, I showed ChatGPT to a friend of mine who is also a professor.
“I’m not worried about AI in my humanities courses,” she said.
“Not at all?”
She shook her head. “I know of colleagues who are going back to the blue books and banning devices. Or they’re looking into programs that can detect ChatGPT in an essay. But I’m just wondering how we might need to transform the essay.”
We then talked about Socrates and his concerns about writing.
One of the main reasons was that Socrates believed that writing would cause people to rely too much on the written word, rather than their own memories and understanding. He believed that people who read a text would only be able to interpret it in the way that the author intended, rather than engaging in a dialogue with the ideas presented and coming to their own conclusions. Moreover, Socrates was concerned that writing could be used to spread false ideas and opinions, and that it could be used to manipulate people.
Sound familiar? These are many of the same concerns people have with AI.
“I’ve been through this before,” she adds. “When I realized students could just download whole essays, I started requiring students to do pre-writing that they turned in. I changed to high-interest prompts that you couldn’t find online. Now I see that ChatGPT can generate responses to those high-interest prompts and I’m going to think hard about how to treat AI as a tool.”
Together, we planned out a solution that would include blending together AI-generated and student-generated text. It was similar to what I describe later in this article. The essay isn’t dead but it is changing. It will continue to evolve in the upcoming years. For now, the use of AI is forcing us to ask, “When is AI a learning tool and when is it cheating?”
When Is It Cheating?
When I was a new middle school teacher, I had several teachers warn me not to have my students use spellcheck. If we let students use spellcheck, students would grow dependent on the tool and they would become awful spellers. I had similar concerns as well. If we relied too heavily on technology to fix spelling mistakes, would students ever bother to use correct spelling?
That semester, I had students submit a writing sample. I then counted the words and the number of spelling errors to find the rate of spelling mistakes. I then had students do a handwritten assessment at the end of the semester. There was a significant decrease in the number of spelling mistakes when comparing the initial student samples to the samples at the close of the semester. It turned out this tool for cheating was actually providing students with immediate feedback on their spelling. Instead of mindlessly clicking on the spellcheck, they were internalizing the feedback.
We now use spell check all the time. What was once a tool for “cheating” is now a tool we use for writing.
The truth is students are already using AI in their writing. We don’t tend to think of spell check as AI. But it is a primitive example of a smart algorithm. While spell check software is not as advanced as the newer generations of AI, it still relies on machine learning and pattern recognition to improve its accuracy over time. Some spell check software may also use natural language processing techniques to detect contextual errors, such as correctly spelled but misused words. If it seems as though your spell check and grammar checks on Word and Google Docs have improved over the years, it’s because they have.
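For readers curious what "pattern recognition" looks like at the simplest end of that spectrum, a bare-bones spell checker can be reduced to comparing a word against a dictionary by edit distance. This is a toy sketch for illustration, nothing like a production checker:

```python
# Toy spell-check sketch: suggest the dictionary word with the smallest edit distance.
DICTIONARY = ["their", "there", "receive", "separate", "definitely", "because"]

def edit_distance(a: str, b: str) -> int:
    """Classic Levenshtein distance computed with dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            cost = 0 if ca == cb else 1
            curr.append(min(prev[j] + 1,          # deletion
                            curr[j - 1] + 1,      # insertion
                            prev[j - 1] + cost))  # substitution
        prev = curr
    return prev[-1]

def suggest(word: str) -> str:
    return min(DICTIONARY, key=lambda w: edit_distance(word.lower(), w))

print(suggest("recieve"))     # -> receive
print(suggest("definately"))  # -> definitely
```

Modern checkers in Word and Google Docs go far beyond this, using statistical and neural models of context, which is why they now catch correctly spelled but misused words.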
Students are already using more advanced AI in every phase of the writing process. When doing research, the auto-fill option in Google narrows down the search for students. When typing in a Google Document, the auto-fill option will often complete sentences for students. As students edit their work, the grammar check offers suggestions for what needs to change. Certain students might even use Grammarly to polish their writing in the editing phase. The AI here is so subtle that we sometimes miss it. But machine learning is already fueling aspects of the student writing process.
Note that all of these tools have been considered cheating at some point. The same is true for calculators in math and for spreadsheets in statistics. Every technological advancement has been considered a form of cheating at first. However, eventually, these tools become essential elements to the learning and creative processes.
Somehow, ChatGPT feels different. As a newer generation of AI, it is built on deep learning. This new generation of AI relies on algorithms designed to mirror the human brain. That’s part of why ChatGPT feels so human. Deep learning models learn from massive amounts of data sets and engage in pattern recognition in a way that’s not explicitly programmed. In other words, the algorithm is learning and can now make predictions and generate entirely new ideas. The term “deep” in deep learning refers to the use of multiple layers in a neural network, allowing the system to learn and represent increasingly complex features at each layer. If a spell check is one-layer deep, ChatGPT is multilayered.
So if it feels like ChatGPT is more akin to cheating than previous AI, it’s because it functions in a way that more closely mirrors human thinking. Clippy was cute and even acted a bit human in its tone, but current chatbots can feel as though you are actually talking to a person.
So where does that leave us with cheating? When is AI simply a tool to enhance learning and when is it co-opting and replacing a vital part of the learning process? It can help to think of it on a continuum. I love the way Matt Miller, from Ditch that Textbook conceptualizes it:
As Miller describes, “We’re going to have to draw a line — as educators, as schools, even as school districts — to determine what we’re going to allow and what we aren’t.” I love the last question about how students might use AI in the future because it might vary from task to task. In writing blog posts, I might consult ChatGPT for ideas or even use it to explain a definition (where I then modify and re-write it). However, I wouldn’t want ChatGPT to write this. I want it to be my own voice. On the other hand, I could see the appeal of AI to answer my emails or even create a first draft of technical writing after I generate an outline. The truth is we are all going to use AI in a blended way.
A Blended Approach to Essay Writing
This blended approach moves away from the either/or options of embracing Artificial Intelligence or blocking it entirely. Instead, it focuses on using AI wisely to enhance the learning while also embracing the human elements.
A blended approach might include a mix of hand-written and AI-generated writing. Students can create sketchnotes and blend together drawings and text in an interactive notebook or journal. These low-tech options focus on writing as a way of “making learning visible.” Here, students choose old-school tools because the simplicity provides more flexibility for deeper thinking.
But these same students might also use a chatbot to generate new ideas or use an AI-generated response that they then modify and amplify. They might use AI as the starting place and add their own unique voice to the finished product. This could happen in the same class period. Students in a blended mode can shift back and forth between the human and artificial intelligence.
Part of learning how to use AI wisely is finding the best contexts for using AI. This starts by recognizing the strengths of both the human and machine learning. Machine learning does well with synthesizing information from multiple sources and generating text that’s clear and readable. Machine learning can also help with editing and ideation. But if machine learning offers artificial intelligence, humans offer authentic intelligence. We do well with voice and tone (especially humor). We understand context and can write with empathy. While AI does well with creativity, we are better divergent thinkers.
The following is a blended approach that combines human thinking with artificial intelligence.
Initial Conceptual Understanding
I sometimes think about this as the pre-pre-writing. It’s the phase where students access their prior knowledge before asking questions and engaging in research. A blended approach might start with a K-W-L chart on a shared document. Students jot down everything they know about the topic. Next, they write down everything they want to know about the topic. At this point, instead of moving into research, students would use a chatbot (like ChatGPT) to ask questions and learn about the topic. The dynamic nature of a chatbot allows for clarifying questions that are tailored to a student’s understanding.
For this example, I’ll give a social studies essay asking, “How susceptible are we to having a World War III?” as a part of their unit on World War I.
So a student might ask, “What were the causes of World War I?”
The answer ChatGPT gives is:
Militarism: The arms race between the major powers in Europe, particularly Germany and Britain, had been escalating for many years before the war. The belief in the power of military might led to an increased emphasis on building up national armed forces.
Alliances: A system of alliances had developed between the major powers of Europe, which meant that if one country went to war, its allies would also be drawn in. The main alliances were the Triple Entente (France, Russia, and Britain) and the Triple Alliance (Germany, Austria-Hungary, and Italy).
Imperialism: The competition for colonies and territories around the world led to tensions between European powers, particularly Germany and Britain.
Nationalism: The belief in national superiority and the desire for self-determination led to tensions and conflicts between ethnic and national groups, particularly in the Balkans.
Assassination of Archduke Franz Ferdinand: The assassination of the heir to the Austro-Hungarian throne, Archduke Franz Ferdinand, by a Serbian nationalist in June 1914, provided the trigger for the outbreak of war.
The student might ask ChatGPT to simplify it with a prompt like, “Explain it to someone in the eighth grade.”
Militarism: This means that the countries in Europe were always trying to build up their armies and make them as strong as possible. It was kind of like a competition to see who could have the biggest, strongest military.
Alliances: Some countries in Europe had made agreements with each other that if one of them went to war, the others would help them out. It was like a team sport, where if one player gets into trouble, the others come to their aid.
Imperialism: This means that some countries wanted to take over other parts of the world, like colonies in Africa or Asia. This caused tension because different countries wanted the same territories, and they were willing to fight for them.
Nationalism: This is when people feel very strongly about their own country and think it’s better than any other. This led to conflict between different groups of people, especially in the Balkans, where there were many different nationalities and ethnic groups.
Assassination of Archduke Franz Ferdinand: This was the event that triggered the start of the war. Archduke Franz Ferdinand was the heir to the throne of Austria-Hungary, and he was assassinated by a group of Serbian nationalists. This set off a chain of events that eventually led to war.
At this point, the student might ask follow-up questions, like “Why would an assassination lead to a whole world at war?” Or perhaps this person is confused by militarism and asks, “Can you give me examples of militarism?” Or maybe they focus on the U.S. and say, “Did Americans care about World War I?” It could be a random question like, “What did they call World War I before World War II happened?”
At this point, students can add to their KWL chart. They might engage in some initial research and view videos or read articles. The process would be dynamic.
Research
In this phase, students would engage in online research by generating questions and finding answers. They might start with an initial set of questions but then amplify it by asking a chatbot to come up with research questions for them. During the online research, they would read articles and add to their research chart (with questions, answers, and citations) but also use the chatbot to clarify misunderstandings. Students might work in teams to compare their information and even go entirely human with a Socratic Seminar. In the end, students would create a hand-drawn sketchnote showing the connections between ideas. In other words, how did nationalism relate to imperialism?
Organizing Ideas and Outlining
In this phase, students would brainstorm ideas and organize them into a coherent outline. They might do a mind map or organize their ideas with sticky notes. At some point, students would create an initial outline for their essay. For sake of transparency, they would screenshot the initial outline and then ask for the chatbot to create an outline. Then, after comparing the outlines, they would modify their own outline. Students might even generate multiple outlines using the regenerate responses button on ChatGPT.
Writing
In this phase, students could take their initial outline and ask for the chatbot to generate the actual text. They would take an initial screenshot with a time stamp and then copy and paste the text into a shared document (Google Document). From here, students would modify the text to add their own voice. They would need to add additional sentences and perhaps even break up paragraphs. Using their research chart, students would add facts and citations that they then explain. The initial chatbot text would be black but the human text would be a color of the students’ choice.
Editing and Revision
As students move toward revision, they could engage in a 20-minute peer feedback process.
A key aspect of editing and revision is asking, “how is this being received?” or “how do actual humans respond to this piece?” Most of the feedback could be the type that humans do well, such as voice, engagement, tone, and clarity. But students could also ask for specific feedback from the chatbot. It might be something like, “How can I make my argumentation better?” or “What are some changes I could do to make the essay flow more smoothly.” Students might engage in a one-on-one writing conference with the teacher but then move back to the AI for additional targeted feedback.
Adding Multimedia
If students want to transform their essay, they could add a human touch by doing a video or audio essay. You can give students examples of video essays like those of the Nerdwriter YouTube channel. Here, they combine images, video, and text with their distinctly human voice. They might sketch a few slides to illustrate key points or even animate these in the style of Common Craft videos. Again, this approach blends together technology with the human touch. But students can use AI as a tool to generate images based on command prompts. They might also ask a chatbot to come up with ideas for images or videos to use alongside their voice.
What About Accountability?
Notice that this approach shifts accountability from surveillance and punishments and toward trust and transparency. Students use AI-generated text but then they screenshot it (which then has the time stamp) and copy and paste it into a Google Document. They then modify the AI-generated text with a color-coded process that makes it easy to visualize how much of the text is human-generated. In using this process, I’ve found that students have re-arranged paragraphs, added entirely new paragraphs, and amplified their writing far beyond the initial AI-generated text.
I mention this because I've already had several people reach out to me asking if I would test their AI-detection software. These programs promise to catch cheating by analyzing a piece of writing and determining whether or not it was human-generated. Within a minute, you receive a score describing how much of the work has been generated by AI. Think of it as Turnitin on steroids. Oddly enough, these programs are a form of AI. The complex algorithms look at a series of factors to determine if something was AI-generated.
It starts by examining semantic coherence. Human thought tends to be more logical but also contains random digressions. In other words, we tend to take random rabbit trails. It also looks at tone and style. Human writers tend to have distinct styles and tones that are shaped by their experiences, personality, and background, whereas AI-generated writing may be more generic and lacking in personality. We also use more colloquial language, like the aforementioned rabbit trails. We tend to change verb tenses more often as well. Finally, these detection programs look at text complexity. Human language tends to be more complex and varied than AI-generated language, which may be more formulaic or repetitive. An AI detector may analyze factors such as sentence length, vocabulary, and syntax to determine if the writing is consistent with human language.
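To make that concrete, here is an illustrative sketch of the kind of surface features such a detector might compute, such as sentence-length variation ("burstiness") and vocabulary richness. This is only a toy example under my own assumptions; real detection products use far more sophisticated statistical models, and a score built from these features alone would not be reliable.

```python
# Illustrative sketch of surface features a detector might examine.
# The features and the sample text are toy assumptions, not a real detector.
import re
import statistics


def style_features(text: str) -> dict:
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text.lower())
    lengths = [len(re.findall(r"[A-Za-z']+", s)) for s in sentences]
    return {
        # Average sentence length in words.
        "avg_sentence_length": statistics.mean(lengths) if lengths else 0.0,
        # "Burstiness": humans tend to mix short and long sentences.
        "sentence_length_stdev": statistics.stdev(lengths) if len(lengths) > 1 else 0.0,
        # Vocabulary richness: unique words as a share of all words.
        "type_token_ratio": len(set(words)) / max(len(words), 1),
    }


sample = "I tested three detectors. The results were abysmal. Honestly, I expected better."
print(style_features(sample))
```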
I've tested out three of these programs with abysmal results. I used unpublished writing of my own, a series of student pieces, and a batch of responses generated by ChatGPT. I then used some pieces that contained a hybrid of both. In each case, I found that these algorithms struggled to flag the AI-generated text when it was blended with human writing. But more alarming, there were many false positives. The AI kept identifying unpublished human work as AI-generated.
This is a disturbing trend as we think about “catching cheaters” in an age of AI. We are essentially entrusting advanced algorithms to judge the academic integrity of our students. Imagine being a student who wrote something entirely from scratch only to find that you failed a class and faced academic probation because the algorithm sucks at determining what is human. This approach relies on surveillance, detection, and punishment. Even as the algorithms improve in detecting AI-generated text, I’m not sure this is the direction schools should emphasize.
Fortunately, there’s a more human approach to accountability. It’s the trust and transparency approach that my professor friend brought up when she first heard about ChatGPT. Instead of panicking and moving into a lockdown approach, she asked, “How can we have students use the tools and make their thinking visible?”
Cautions for Students Using AI
If you log into ChatGPT, the home screen makes it clear what AI does well and what it does poorly. I love the fact that the technology makes it clear, from the start, what some of its limitations might be. However, there are a few more limitations about ChatGPT that students should consider.
ChatGPT is often dated. Its neural network relies on information that stops at 2021. This means ChatGPT lacks understanding of emerging knowledge. For example, when I asked a prompt about Russia and Ukraine, the response lacked any information about the current Russian invasion of Ukraine.
ChatGPT can be inaccurate. It will make things up to fill in the gaps. I was recently talking to someone who works at MIT and she described some of the inaccurate responses she's gotten from ChatGPT. This could be due to misinformation in the vast data set it pulls from. But it might also be an unintended consequence of the inherent creativity in A.I. When a tool has the potential to generate new content, there is always the potential that the new content might contain misinformation.
ChatGPT may contain biased content. Like all machine learning models, ChatGPT may reflect the biases in its training data. This means that it may give responses that reflect societal biases, such as gender or racial biases, even if unintentionally. Back in 2016, Microsoft introduced an AI bot named Tay. Within hours, Tay began posting sexist and racist rants on Twitter. So, what happened? It turns out the machine learning began to learn what it means to be human based on interactions with people on Twitter. As trolls and bots spammed Tay with offensive content, the AI learned to be racist and sexist. While this is an extreme example, deeper learning machines will always contain biases. There's no such thing as a "neutral" AI because it pulls its data from the larger culture. Many of the AI systems used the Enron data files for initial language training. The emails, which were in the public domain, contained a more authentic form of speech. But it was also a form of speech that skewed conservative and male because Enron was a Texas-based energy company.
ChatGPT lacks contextual knowledge. While ChatGPT can analyze the words in a given sentence or paragraph, it may not always understand the context in which those words are used. This can lead to responses that are technically correct but don't make sense in the larger conversation. If a student writes a personal narrative, they know the context better than any AI could possibly understand. When writing about local issues for a school newspaper or blog, the AI won't have the local knowledge that a student journalism team demonstrates. This is why it's critical that students learn how to contextualize knowledge.
ChatGPT requires an understanding of command prompts. This sounds simple but it's easy to miss. ChatGPT isn't a mind reader, so if students use it to answer questions, they need to become really good at designing their command prompts (see the sketch after this list for one way to structure a prompt).
ChatGPT lacks empathy. ChatGPT may not be able to understand or recognize the emotional context of a conversation. This can lead to inappropriate or insensitive responses. So, it might give insensitive feedback when a student uses it for the revision process. It might lack awareness and empathy when students ask questions and engage in research (consider a student with a…).
ChatGPT lacks common sense. I'm not sure how to describe this but some of the answers I've gotten on ChatGPT seem silly and nonsensical. ChatGPT's responses are based solely on the patterns and associations it has learned from text data. It may not always have the common sense or practical knowledge to understand the context of a conversation or provide accurate responses.
ChatGPT might not be eco-friendly. Deep learning requires an immense amount of processing power. As AI becomes more pervasive, there's the potential it could accelerate climate change. Wired Magazine described it this way, "deep learning inherently requires huge swathes of data, and though innovations in chips mean we can do that faster and more efficiently than ever, there's no question that AI research churns through energy." On the other hand, certain technologists have looked toward AI as a potential solution for making power grids more efficient and reducing the amount of energy we collectively consume.
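As promised above, here is a minimal sketch of one way students could structure a command prompt before pasting it into a chatbot. The template fields (role, task, constraints, output format) are simply one common-sense convention assumed for illustration, not an official ChatGPT feature.

```python
# Illustrative sketch of a structured "command prompt" a student could paste
# into a chatbot. The template fields are an assumption, not a ChatGPT feature.
def build_prompt(role: str, task: str, constraints: list[str], output_format: str) -> str:
    lines = [
        f"Act as {role}.",
        f"Task: {task}",
        "Constraints:",
        *[f"- {c}" for c in constraints],
        f"Respond in this format: {output_format}",
    ]
    return "\n".join(lines)


print(build_prompt(
    role="a history tutor for a ninth-grade student",
    task="Suggest five research questions linking nationalism and imperialism.",
    constraints=["Use plain language", "No more than one sentence per question"],
    output_format="a numbered list",
))
```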
We can’t predict what writing will look like in a world dominated by Artificial Intelligence. Deeper learning machines, such as ChatGPT, are still in their earliest phases of development. Machine learning will grow more advanced and complex in the upcoming decades. For now, many AI tools can’t be used in a K-12 setting. ChatGPT, for example, requires users to be 18 or older to sign up. But we do know that AI is growing rapidly and many of the tools currently used outside of school will have an educational equivalent that is both CIPA and COPPA compliant.
Get the FREE eBook!
Subscribe to my newsletter and get A Beginner's Guide to Artificial Intelligence in Education. You can also check out other articles, videos, and podcasts in my AI for Education Hub.
Fill out the form below to access the FREE eBook:
| 2023-02-20T00:00:00 |
2023/02/20
|
https://spencereducation.com/ai-essay/
|
[
{
"date": "2023/02/20",
"position": 97,
"query": "AI journalism"
}
] |
Can AI robots conduct interviews better than YOU? - ETZ
|
Can AI robots conduct interviews better than YOU?
|
https://getetz.com
|
[] |
Another key selling point is that careful programming means interview robots can be used to eliminate interviewer bias, making the hiring process fairer.
|
Rapid advances in AI speech and text recognition
For many, something that is guaranteed to cause at least a moment of discomfort is the suggestion that a man-made device of some sort may be able to do your job better than you. If the thought that you might be less smart than a machine doesn’t get you, then the prospect that you might lose your job and be replaced by a robot just might.
Artificial Intelligence Natural Language Processing (AI NLP) powered text and speech recognition robot interviewing devices have been around for a few years now. Current versions of these devices manifest themselves as a head and shoulders – a bust – sometimes with the features of a projected human face, male or female. Or they can be presented as on-screen avatars.
Whatever the form factor, these devices have been used as a way of reducing some of the recruiter's workload. Filtering unsuitable candidates based on keywords is a leading capability.
However, refinements include the capability of interpreting tone of voice to detect uncertainty and hesitancy. Essentially, it’s not just what someone says but also how it is said. Such data is used to identify a candidate’s confidence or lack of knowledge.
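As a rough illustration of the screening logic described above, the sketch below counts how many required keywords a transcribed answer contains and how many hesitation markers appear in it. The keyword list, the filler words, and the scoring are invented for the example and do not reflect any particular vendor's product.

```python
# Illustrative sketch of keyword filtering plus hesitancy detection on a
# transcribed answer. Keywords, fillers, and scoring are invented for the example.
import re

FILLERS = ["um", "uh", "erm", "maybe", "i think", "i guess"]


def screen_answer(transcript: str, required_keywords: list[str]) -> dict:
    text = transcript.lower()
    matched = [kw for kw in required_keywords if kw.lower() in text]
    hesitations = sum(len(re.findall(rf"\b{re.escape(f)}\b", text)) for f in FILLERS)
    return {
        "keywords_matched": matched,
        "keyword_score": len(matched) / max(len(required_keywords), 1),
        "hesitation_markers": hesitations,
    }


answer = "Um, I think I used Python and, uh, some SQL for reporting in my last role."
print(screen_answer(answer, ["python", "sql", "etl"]))
```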
Another key selling point is that careful programming means interview robots can be used to eliminate interviewer bias, making the hiring process fairer.
This level of sophistication may be nuanced, but it represents only a low-level threat to one of a recruiter's core activities. However, that may be about to change…
Next-gen interviewing robots landing soon?
ChatGPT is all the rage at the moment. Its advanced Natural Language Processing (NLP) text and speech recognition capabilities are impressing millions of people who have signed up for free testing. And the technology is getting even better, fast – it actually ‘learns’ as more people interact with it.
The disruptive power of technology is here for all to see. Microsoft has announced that similar technology it has been working on is to be integrated with Bing to create a next-gen intelligent search engine. The move pitches it into a race with Google to develop AI-powered browsers; Google saw its stock value take a hit of $163 billion following news of the Microsoft announcement.
Applying such technology may be a game-changer in many areas, including powering the next generation of RecTech. The potential for an advanced NLP powered interview robot is enormous.
As an example of its capabilities in completing complex communication tasks, a recruitment team unknowingly recommended ChatGPT for a job interview after the AI was used to complete a task as part of an application process. Less than 20% of applicants were shortlisted for consideration and ChatGPT was one of the few that made it!
Amongst many other examples, it is capable of emulating the styles of famous writers, taking exams and is even set to defend a human in court. Experts speculate that ChatGPT could replace 20% of the workforce in 5 years.
It should be noted, though, that ChatGPT's developers and other leading experts in advanced NLP technology have cautioned that it is only as good as the programming and the source data it works with.
However advanced RecTech in general and interview robots in particular become, humans should still have the last word, especially when it comes to analysing answers to open-ended questions. The best practice should be to let robots transcribe interviewees’ answers and let a professional human recruiter assess them.
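A minimal sketch of that "robots transcribe, humans assess" division of labour might look like the following, assuming the open-source openai-whisper package (and its ffmpeg dependency) and a local recording called interview.wav; both are assumptions for the example rather than details from the article.

```python
# Illustrative sketch: the robot transcribes, the human recruiter assesses.
# Assumes the open-source openai-whisper package and a local interview.wav file.
import whisper

model = whisper.load_model("base")          # small, general-purpose model
result = model.transcribe("interview.wav")  # returns a dict including the full text

# Hand the raw transcript to a professional human recruiter for assessment.
print(result["text"])
```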
Get the power of automation for your agency back office with ETZ
ETZ’s leading timesheet and invoicing solution streamlines the back office processing of your recruitment agency. Our complementary solutions, ETZ Comply for onboarding and document management and Caspian for business intelligence give agencies further capability to streamline and uncover opportunities. To find out more call us on 0800 311 2266 or book a demo.
| 2023-02-20T00:00:00 |
2023/02/20
|
https://getetz.com/recruitment-technology/can-ai-robots-conduct-interviews-better-than-you/
|
[
{
"date": "2023/02/20",
"position": 85,
"query": "artificial intelligence hiring"
}
] |
What's Next for Tech Hires?
|
What’s Next for Tech Hires?
|
https://www.ssipeople.com
|
[] |
Whether it was due to over-hiring during the post-pandemic boom or economic conditions, tech workers are suffering serious layoffs. ... artificial intelligence.
|
Traditionally, “non-tech” industries still have a high demand for tech skills
New geographies are seeing a rise in technology
More talent may be entering the market when organizations adjust their credential requirements
The first few weeks of 2023 left nearly 95,000 tech workers without a job. Whether it was due to over-hiring during the post-pandemic boom or economic conditions, tech workers are suffering serious layoffs. Tech professionals shouldn't be discouraged, though; despite recent workforce reductions at tech giants such as Amazon and Microsoft, the tech industry is bigger than it may seem, and there are opportunities across a vast space. Likewise, for organizations that struggled to find top tech hires, the talent pool may have just increased.
Tech Hires Cover Every Industry
The world runs on technology. Your next tech job or a company's next tech hire might not necessarily be in or from a tech company or industry. In 2023, it could be at the local bank, a retailer, or another company outside the traditional tech sector. All these options need people to fill tech-oriented roles. For example, retail giant Walmart is venturing into the tech hub arena. Although location plans for Walmart tech hubs recently took a different turn, the company is still actively pursuing tech professionals. Finance IT, according to Gartner, "is already one of the greatest areas of headcount growth in the function." CFOs of finance and banking organizations understand that their future is in 'Autonomous Finance,' which in turn needs deep technical skills to achieve.
Because industries need technology not only to survive but to thrive, job postings across the employment market are calling for tech-based skills in non-tech industries, according to a report by Bain & Company. Workers with Web3, data science, cyber security, and cloud computing skills were the highest in demand. Furthermore, job openings for workers with these skillsets grew by about 237% between the fourth quarter of 2020 and the third quarter of 2022. Since companies are utilizing technology to a greater extent to modernize, more tech hires are needed to make this happen.
Tech Hubs are Expanding
Automakers are zeroing in on newly available tech talent as well, which opens up job locations in areas of the U.S. beyond the traditional tech-saturated regions. In addition to tech-centered auto companies such as Arrival or British Volt, jobs in the northern US region at Ford or GM are open to fill as auto manufacturing becomes increasingly digitized. The Midwest region, host to many major agricultural organizations, is also eager for tech workers to transform farm equipment through artificial intelligence. For example, one of the world's largest tractor-making companies, Deere & Co., wants to advance its farm technology and is offering remote work and positions in cities such as Austin and Chicago. Tech employment is simply not bound by geography any longer.
Experience vs. Education for Tech Hires
Another creative way to hire tech workers can be through apprenticeships. These programs have boomed since 2011, with over 96,000 people completing apprentice training in 2021. This demonstrates that more avenues are available to attain a career in technology than the formalized education we have been accustomed to. In the same way, the tech industry is evolving, and the credentials needed to join it are progressing. According to a survey by the Society for Human Resource Management (SHRM), employers view alternative credentials as a way to diversify their workforce and highlight hidden talent. In fact, SSi posed this fundamental question of “degree or no degree” when hiring and found that organizations are already considering experience and non-formal training in their applicants for tech roles.
Tech hiring is still expected to continue, even as layoffs loom
From an IT staffing perspective, tech professionals who have honed specialized skills have a distinct advantage in working as consultants on a project or contractual basis. Organizations reluctant to add long-term headcount may be more inclined to bring in an expert who can accomplish their immediate digital transformation. At SSi People, our goal is to ensure that hiring companies will be matched with the tech consultant that has the expertise to evolve with their changing technology requirements to reach successful outcomes.
| 2023-02-20T00:00:00 |
https://www.ssipeople.com/whats-next-for-tech-hires-2/
|
[
{
"date": "2023/02/20",
"position": 76,
"query": "artificial intelligence layoffs"
}
] |
|
AI vs. Artist: The Future of Creativity - Cornell Tech
|
AI vs. Artist: The Future of Creativity
|
https://tech.cornell.edu
|
[
"Jess Campitiello",
".Wp-Block-Co-Authors-Plus-Coauthors.Is-Layout-Flow",
"Class",
"Wp-Block-Co-Authors-Plus",
"Display Inline",
".Wp-Block-Co-Authors-Plus-Avatar",
"Where Img",
"Height Auto Max-Width",
"Vertical-Align Bottom .Wp-Block-Co-Authors-Plus-Coauthors.Is-Layout-Flow .Wp-Block-Co-Authors-Plus-Avatar",
"Vertical-Align Middle .Wp-Block-Co-Authors-Plus-Avatar Is .Alignleft .Alignright"
] |
Generative AI allows artists to create countless visual concepts at the click of a button, making the process of fielding potential design ideas with clients ...
|
Photo Credit: Bloomberg / Provided
By Jess Campitiello
In the last year, artificial intelligence (AI) art generators such as Stable Diffusion, DALL-E 2, and Midjourney have been making a splash across the headlines. To the general public, what started as a fun, harmless, and novel way to pass some time quickly turned into professional discourse over artistic integrity.
Generative AI programs are trained by scraping large online public datasets to learn what words relate to corresponding images. Once this has been done, users can input a text prompt which causes the program to compile what it has learned and composite it into a single image output. Björn Ommer, who worked on Stable Diffusion, explains its workings in more detail here.
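For readers who want to see what that text-to-image flow looks like in practice, here is a minimal sketch assuming the Hugging Face diffusers library, the publicly released Stable Diffusion v1.5 weights, and access to a CUDA-capable GPU; none of these specifics come from the article itself.

```python
# Illustrative sketch of the text-to-image flow, assuming the Hugging Face
# diffusers library, the Stable Diffusion v1.5 weights, and a CUDA-capable GPU.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
)
pipe = pipe.to("cuda")  # a GPU makes generation practical

# The text prompt is compiled by the model into a single image output.
image = pipe("a cowboy in space, digital art").images[0]
image.save("cowboy_in_space.png")
```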
Accusations of art theft arose surrounding the use of creatives’ works to train these models, as image outputs emulated prominent artists’ styles without their consent. Backlash grew as an AI-generated piece won first place in an art competition. On Reddit, one user was banned for posting their own work because it too closely resembled AI art. Ultimately, the outrage has led to multiple lawsuits unfolding against these programs.
Simultaneously, the generative AI platform Scenario raised $6 million in seed funding and Microsoft made a ‘multiyear, multibillion-dollar investment’ in OpenAI, the company behind Dall-E 2 and ChatGPT.
“There’s a Cambrian explosion right now with this technology,” said Scott Belsky, Adobe’s Chief Product Officer and Executive Vice President of Creative Cloud, at the most recent Cornell Tech @ Bloomberg Speaker Series event. Alongside over 800 partners, Adobe has been working to add generative AI to its products while still maintaining user trust.
When asked how artificial intelligence and machine learning fit into the future of art, Belsky made three main points:
Creative confidence is only going to grow. Our creative confidence typically peaks as kids, Belsky explained. Starting off with simple crayon drawings, many children find a creative outlet within art. Crude approximations of houses, animals, and loved ones are held in high esteem as they are tacked up on the fridge. However, as we grow older and we meet critics, those who insist on colors within the lines and accurate representations on the page, our confidence goes down. And with that, so too do many people's efforts to pursue artistic endeavors. This technology not only makes creativity widely accessible, but also allows individual artistic confidence to grow since the barrier to entry is quite low.
Professionals can access endless creative possibilities. Generative AI allows artists to create countless visual concepts at the click of a button, making the process of fielding potential design ideas with clients exponentially more efficient. Giving clients a couple of quick AI 'thumbnail sketch' options will allow artists to finalize their work faster, as they will not be spending time creating preliminary visuals. The fabricated art is not meant to be used as a replacement but rather as a springboard for the artist's own creative output. AI can even offer design choices that the artist wouldn't have considered otherwise, mentioned Belsky. Integrating this technology into artists' tools has the potential to grant them an "edge to a breakthrough."
Outcome-oriented versus process-oriented people. Belsky described two different generative AI user types: those who are outcome-oriented, and those who are process-oriented. Outcome-oriented individuals are less interested in the details of the final product and more focused on receiving the product itself. For example, if an outcome-oriented person were to ask an AI program to generate images of a 'cowboy in space,' they would be happy to use the first image presented that suits their needs. The speed of the output is key. On the other hand, process-oriented individuals look at what is generated by the AI program and use it as a tool to create something unique for their needs. Inputting the same prompt, they may be drawn to and emulate the overall composition of one of the outputs but would move the cowboy's arms into a specific position or make any other number of edits to create a final piece that is precisely what they want. We are all on a spectrum of outcome-to-process-oriented, explained Belsky.
Do artists have a right to be scared? Belsky says that while AI will not be replacing creatives in their spaces as it cannot emulate a human eye for aesthetics, their concerns are justified and there are many important questions that need to be considered moving forward with these powerful technologies. “Has their content been trained on these models? If so, how are they being compensated? Can you generate things in the style of a specific person? People can be ethical and say ‘no,’ but other players can say ‘it’s a free for all.’”
Belsky concluded his thoughts on the subject by stating that nothing in Adobe’s Cloud has ever been stored for the use of training a generative AI, but that the company plans to be very specific about this in its future terms and policies so that there is no ambiguity on the matter.
Watch the full Cornell Tech @ Bloomberg interview:
The Cornell Tech @ Bloomberg Speaker Series is hosted by Scarlet Fu of Bloomberg Television and Bloomberg QuickTake in partnership with Bloomberg, Cornell Tech, and Tech:NYC.
Jess Campitiello is the Digital Communications Specialist at Cornell Tech.
| 2023-02-20T00:00:00 |
2023/02/20
|
https://tech.cornell.edu/news/ai-vs-artist-the-future-of-creativity/
|
[
{
"date": "2023/02/20",
"position": 15,
"query": "artificial intelligence graphic design"
}
] |
Chat GPT, Dall-E, and Other AI Tools You Need to Know
|
Chat GPT, Dall-E, and Other AI Tools You Need to Know
|
https://helloendless.com
|
[
"Will Curran"
] |
Artificial intelligence (AI) tools have dominated the headlines lately. And tools like ChatGPT, Midjourney, and DALL-E have artists and writers sounding ...
|
Artificial intelligence (AI) tools have dominated the headlines lately. And tools like ChatGPT, Midjourney, and DALL-E have artists and writers sounding alarms about job security. So, what does this have to do with the events industry? In this episode of the Event Tech podcast, Will and Brandt share their thoughts on visual and text-based AI tools and how you might use them for events in the future. Let’s dive in!
Visual AI
Will kicks this conversation off with a disclaimer. "If you're listening to this episode, Brandt and I are not hype-train people. We don't jump on the coolest newest trends just because," he explains. "But it may be worth discussing the AI tool that broke huge news in 2022, the DALL-E art-generating tool. Did you get a chance to play with it at all?"
“I have played with Midjourney and DALL-E,” says Brandt. “What’s fascinating to me is how accurate you can be. It’s advancing quickly. And what’s making the headlines right now is that these services allow you to upload photos and generate avatar images. Now there’s another road we can go down, where you can upload other people’s pictures. Or say, I want a picture of X politician doing some horrible thing. They’ve put some guardrails up to try and prevent those things, but it’s pretty easy to get around them because humans are very inventive creatures.”
Brandt continues: “These AI tools are holding up a mirror to us, as humanity, and all of our biases. This stuff is trained on publicly accessible images. It has no morality or judgments. So in one of these avatar-developing apps, what people are finding is that women who upload their photos are getting much more sexualized results compared to men.”
“That’s interesting,” responds Will. “One of the great things about these tools is if you’re not happy with the results, you just hit generate again. But they definitely need to fix these biases. Ironically, you would think a machine would be the most unbiased. People need to know that the way this gets all the information is: it’s as if you do a Google image search; it’s taking all the results and compiling them.”
While AI tools are fed data from the internet, Brandt also points out that the user can significantly impact the results AI spits out. “Most of these things depend on the prompt you’re giving it,” he says. “As the creator, you still have to give it a prompt. That’s where we can take back some of that control. So, to bring it around to events, we’re already starting to see this implemented because people are using it to generate images. Rather than going to stock footage archives, you could type that prompt into an AI generator. Some are photorealistic and good, but others look terrible.”
Brandt thinks AI tools are a great way to start things, even if you don’t use them to generate final images. “Say, I want an image that looks like this, and you describe it to the AI. Then you give the result to an actual graphic designer and say, ‘This is where my head’s at, make it look better.’ That’s where we’re going to see a recurring theme, the idea of AI getting you 80% of the way there, and then you refine it.”
Text-Based AI
In addition to art-generating AI tools, text-based tools are also making headlines. Examples of these include Copy.ai, Jasper, and ChatGPT. And over the years, they’ve been growing faster and easier to use.
For those unfamiliar with these tools, Will shares an example of how you can use them. “You give them a topic, say, ‘How to Save Money on AV.’ And it generates an outline for you based on all the articles across the entire web,” he says. “Many people said it would put blog writers out of business. Instead, writers said you can accomplish way more and faster. Instead of taking the time to write the outline, they can focus on the thing they’re best at, editing, adding story, and tweaking. Now you can go from doing one blog post to 20 blog posts a day, or something like that.”
“It’s the same concept. It’ll get you 80% of the way there,” says Brandt. “In preparation for this episode, I’ve been playing around with ChatGPT. I asked it to tell me ways to save money on AV. It says, ‘Think about renting audio-visual equipment instead of buying it.’ That makes sense. I can get behind that. ‘Compare quotes from different vendors to get the best price.’ And ‘Negotiate prices.’ And then I hit number four, ‘Use in-house audio-visual equipment instead of hiring outside vendors.’ I think it’s trying to say, ‘If you own it, you can save money.’ But I thought that was interesting.”
Even though AI-generated text can sound expertly written, Will raises a good point about its truthfulness. “Just be careful. Sometimes, even if it’s well written, it seems factual, but it isn’t. It’s only over time that these tools get smarter and smarter,” he says. “And what we recommend is to play around with it.”
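To make Will's outline example concrete, here is a rough sketch of the same "give it a topic, get an outline" workflow, assuming the pre-1.0 openai Python package and an API key stored in the OPENAI_API_KEY environment variable; the model name and prompt wording are assumptions for illustration, not the exact tool the hosts used.

```python
# Illustrative sketch of the "topic in, outline out" workflow, assuming the
# pre-1.0 openai package (e.g. openai==0.28) and an OPENAI_API_KEY variable.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{
        "role": "user",
        "content": "Write a blog post outline for the topic 'How to Save Money on AV'.",
    }],
)

draft_outline = response["choices"][0]["message"]["content"]
print(draft_outline)  # a starting point for a human writer to edit and expand
```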
AI Tools for Events: How can They be Used?
Already having provided an example of how event professionals can use art-generating AI, Will demonstrates how you can use them to write text for events. He gives an AI tool the task of writing a session description for “2023 event trends.” Here’s what he got.
“2023 event trends: what to expect in the coming year as we look ahead in the new year. It’s important to stay on top of the latest trends in the events industry. In the session, we’ll explore some of the key trends to expect to shape the event landscape in 2023, from virtual to hybrid events to sustainability to experiential marketing. We’ll delve into trends that will be driving the industry forward. This session will be perfect for event planners, marketers, or anyone staying ahead of the curve in a fast-paced world of events.”
Will is thrilled with those results. “I’m never writing a session description again,” he says. “That was way better than I could ever write. Session descriptions, bye! Session titles, bye! It’s a place to get kick-started in your writing or designing. I might have added some tweaks to that session description, but it’s still a huge time saver.”
“These tools aren’t meant to replace you,” continues Will. “They’re meant to help you. They’re meant to be tools, not robots taking over your job. So play around with them and see how they work. All of this is a playground meant for you to test things. This is definitely a tool you don’t want to put your head in the sand and say bah humbug about.”
AI Tools: Our Final Demonstration
For the tech duo’s final AI demonstration, Brandt asks ChatGPT to help him conclude the podcast. “I’ll prompt you with what it recommends, and you respond to it,” says Brandt.
Recap of the Main Points Discussed in the Episode
“First, DALL-E is a great tool for image generation. Check that out,” says Will. “ChatGPT-3 is a great one for chatting. It’s going to get you 80% of the way there. Be prepared to edit something potentially offensive or not factually correct. And have fun with it.”
Offer Additional Resources or Recommendations
“First, Google these things. And I’ll add another tool that lets me stay on top of this stuff: Product Hunt. It’s a great newsletter that shows you newer interesting products.”
Encourage Listener Feedback.
“You can always email us. We’d love to know how you see it. Feel free to write an AI-generated response. We’re happy to respond!”
Promote upcoming episodes or events.
“We do this every single week. Also, you and I are starting to kick up the speaking circuit again, So stay tuned to industry and marketing conferences where Brandt and I are speaking all the time.”
And if this topic interests you, we encourage you to check out an earlier Event Tech episode where we talk about how many AI tools use predictive analytics rather than true AI.
It’s an exciting time to be in the events industry, especially as more and more AI tools and technologies emerge. With advancements like chat GPT, Dall-E, and others, there are many opportunities to create better-written content, graphic designs, visuals, and text for event planning.
Understanding the ethical implications is essential. We must know how these powerful tools are changing how we plan events and the greater impacts of their usage, existence, and code. With the right attitude, knowledge, and principles guiding our decisions, utilizing AI tools for events can be an excellent addition for savvy event planners. So explore Chat GPT, Dall-E, and other AI solutions.
We’ll catch you here next time for another episode of the Event Tech podcast. (And if you’re still with us, surprise! Jasper AI wrote most of the final two paragraphs in this conclusion.)
| 2023-02-20T00:00:00 |
2023/02/20
|
https://helloendless.com/ai-tools-for-events/
|
[
{
"date": "2023/02/20",
"position": 83,
"query": "artificial intelligence graphic design"
}
] |
Best AI Prompts
|
Best AI Prompts
|
https://bestaiprompts.art
|
[] |
⚡ 12 categories (more coming soon!): graphic design, cinema, interiorism, illustration, video games, fashion, etc. Compatible with all versions of Midjourney ...
|
How many times have you thought:
“If I could go back to the beginnings of the internet with what I know now...”?
Well... generative AIs like Midjourney are today what the internet was in 2000!
But creating impressive AI art with the correct prompts is hard...
Yes, I know. I have spent hundreds of hours generating images using AI tools, experimenting and looking for the best prompts. The good news is that after having created more than 30,000 images, I'm now able to pass on what I've learned to others in a simple way.
With this AI prompts bundle, you will be able to create EVERYTHING that comes to your imagination. As simple as that.
Graphic novels, comics, video game assets, fashion brainstorms, interior and architecture renders, illustrations for children's books, food photoshoots, film stills... you name it.
If you are here, it's because you have seen it: lots of amateurs and professionals are already creating amazing things using AI...
Do not let time pass and then look back to see that you were one of those who let the opportunity pass.
Get on the train!
—Javi Lopez, BestAIPrompts founder.
| 2023-02-20T00:00:00 |
https://bestaiprompts.art/
|
[
{
"date": "2023/02/20",
"position": 87,
"query": "artificial intelligence graphic design"
}
] |
|
ChatGPT is coming: Considering AI advancements' impact on jobs ...
|
ChatGPT is coming: Considering AI advancements’ impact on jobs of the future (and present) 3 months later
|
https://www.commerce.nc.gov
|
[] |
As ChatGPT, Bard, Ernie Bot, and other AI products continue to develop, they could increasingly disrupt labor markets, or slowly shift tasks for ...
|
Author: Neil Harrington
Perusing business and economics sections of national and local newspapers recently might lead some observers to think the tech industry is collapsing. Google cut 12,000 jobs from its ranks, or about 6 percent of its workforce. Amazon announced plans to lay off 18,000 people. And it’s not just the giants; smaller tech companies such as SoundHound laid off 200 workers, half of its workforce. The tech job tracker layoffs.fyi shows almost 250,000 tech jobs have been lost since the start of 2022. With new waves of layoffs seemingly announced every day, the headlines have been omnipresent, especially for workers in the tech sector unsure if they will have a job when they wake up.
So, what's causing all these job losses in tech? Overinvestment during a long period of cheap money, not AI, appears largely to blame. However, it's hard to fault anyone for thinking about recent advancements in AI, specifically the launch of ChatGPT, and envisioning a major disruption brewing. If someone is the first to see an AI revolution coming, it would be tech companies. As ChatGPT, Bard, Ernie Bot, and other AI products continue to develop, they could increasingly disrupt labor markets, or slowly shift tasks for workers within occupations, or quietly fade from people's memories as other tech advancements have done. It is still far too early to know exactly how and if these technologies will disrupt labor markets, and the full extent of disruption won't be clear until it shows up in labor market data. Whatever happens, LEAD is closely monitoring trends and will update analyses as disruption happens (or not).
AI didn’t cause recent mass layoffs in tech…
Most of the news coverage around tech layoffs note the sector’s overinvestment over the past several years. An extended period of low interest rates leading up to and during the first part of the pandemic made accessing capital virtually free for companies. This fueled soaring profit margins for tech companies and their investors and accelerated employment growth.
For example, in 2021, the tech giants Facebook and Amazon had more than 5.5 times the number of employees they had in 2015, while Google and Apple’s workforces expanded by a still lofty 153 and 40 percent, respectively, according to the investor research platform Macrotrends. Similarly, smaller tech firms such as Salesforce and Carvana saw employment boom while money has been cheap. But as macroeconomic policy has sought to fight inflation through higher interest rates, investments have slowed, and tech firms are finding it harder to maintain large workforces with high salaries.
…but AI is and will continue disrupting tech and other industries
An ominous message spotted in the halls of a DC consulting firm.
Overinvestment and increased borrowing costs seem to be the main cause for the current wave of tech layoffs, but job cuts are also happening as these companies grapple with the disruption of AI advancements such as ChatGPT. Released at the end of 2022, ChatGPT is an AI language model developed by OpenAI that can generate human-like text based on patterns learned from vast amounts of internet text data. ChatGPT responds to simple requests like “explain what ChatGPT is in one sentence to someone not familiar with machine learning” and returns text like the last sentence or offers more detailed responses to more sophisticated prompts. The UNC Kenan Institute’s Five Economic Trends to Expect in 2023 even features an economist-sounding paragraph generated by ChatGPT on how AI will impact workers.
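As an illustration of how simple such a request is in practice, the sketch below sends the quoted prompt to OpenAI's chat completions endpoint. The use of the requests library, the gpt-3.5-turbo model, and an API key stored in OPENAI_API_KEY are all assumptions for the example rather than details from the article.

```python
# Illustrative sketch of sending a simple prompt to OpenAI's chat completions
# endpoint. The requests library, model name, and API key handling are assumptions.
import os
import requests

resp = requests.post(
    "https://api.openai.com/v1/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
    json={
        "model": "gpt-3.5-turbo",
        "messages": [{
            "role": "user",
            "content": "Explain what ChatGPT is in one sentence "
                       "to someone not familiar with machine learning.",
        }],
    },
    timeout=60,
)
print(resp.json()["choices"][0]["message"]["content"])
```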
After its release, major tech companies have scrambled. Google reportedly issued a 'code red' over the threat ChatGPT poses to its search engine business. As a result, the company's executives restructured teams and directed many workers to devote more time to AI products and tools, but ultimately flubbed the initial release of Bard, its ChatGPT rival, leading to a 10 percent loss in its market value. Even if AI isn't being blamed for recent layoffs, current and future AI products and advancements are likely weighing on companies' projections of future labor needs and expected productivity.
ChatGPT, AI, and the future of work
Of course, ChatGPT has its limitations and has not caused the thousands of job losses in tech. However, as a language model that writes decently well, it is likely to increase the average writing ability of workers and perhaps increase productivity for people with writing-heavy jobs or others who find innovative uses. CNET and other publishers have even started using ChatGPT to write articles and generate poll questions and quizzes.
The labor market disruption potential of ChatGPT and other AI advancements will largely depend on how they are used. This has been the case with previous tech advances as well. It took nearly 50 years for electricity to revolutionize manufacturing in a way that made productivity soar, largely because business owners were hesitant to move away from steam power. Map applications on smart phones first allowed taxi drivers to find more efficient routes before giving rise to the likes of Uber and Lyft, which in turn severely disrupted the taxi labor market. Still, recent technological progress has had relatively little impact on productivity and labor markets, as evidenced by the past few decades of huge tech advancements but lackluster productivity growth. As new versions of language models and other AI are released, people and businesses will find innovative uses which could alter labor markets, with impacts perhaps lurking beneath the surface.
Making sense of ChatGPT’s and AI’s impact
In short, ChatGPT and AI advancements will undoubtedly disrupt certain jobs and the tasks that are performed in some way—the severity of those disruptions will become clearer as it happens and shows up in labor market data. For example, most estimates of future employment expect technological progress to have a greater impact on production industries and non-routine occupations because that’s what historical data indicates will happen.
What makes AI potentially different, or alarming to some, is the potential pace of change and the types of work it may affect. As a technology that learns on its own and improves with the amount of interaction it has with humans, AI advancements could come much quicker than other technologies. And for the first time, ChatGPT has suggested that technology disruption may significantly impact analytical and creative jobs as well. But disruption does not necessarily mean destruction.
At this point, it appears that disruptive AI will impact North Carolina eventually, as it will the rest of the world. However, it’s still far too early to fully understand the scope of that impact. If history is a guide, it’s more likely to change how we work (for the better) and create new occupations than to decrease the overall demand for human labor. For now, LEAD partners can take solace in the fact that we’re closely monitoring trends, using a variety of long-term and real-time sources, and are prepared to help people deal with the uncertainty surrounding automation and AI advancements. A couple years from now, we may all be celebrating the loss of lower paid, routine work and reaping the benefits of better-paying, more challenging and rewarding employment spurred by advancements in AI.
| 2023-02-21T00:00:00 |
https://www.commerce.nc.gov/news/the-lead-feed/chat-gpt-ai-advancements-job-impacts
|
[
{
"date": "2023/02/21",
"position": 3,
"query": "ChatGPT employment impact"
},
{
"date": "2023/02/21",
"position": 1,
"query": "ChatGPT employment impact"
},
{
"date": "2023/02/21",
"position": 3,
"query": "AI job losses"
},
{
"date": "2023/02/21",
"position": 4,
"query": "AI labor market trends"
},
{
"date": "2023/02/21",
"position": 4,
"query": "AI layoffs"
},
{
"date": "2023/02/21",
"position": 4,
"query": "artificial intelligence layoffs"
}
] |
|
How Artificial Intelligence Will Improve Workplace Safety
|
How Artificial Intelligence Will Improve Workplace Safety
|
https://solutionsreview.com
|
[
"Yukon Palmer",
"Recent Posts"
] |
While some perceive A.I. as a threat to jobs, we will begin to see it being used as a tool that can enhance workers' well-being. In fact, “blue collar ...
|
As part of Solutions Review’s Contributed Content Series—a collection of articles written by industry thought leaders in maturing software categories—Yukon Palmer, the founder and President of FieldLogix, outlines how artificial intelligence (A.I.) technologies can help companies improve their workplace safety.
Artificial intelligence is often viewed as an overly-hyped technology, yet it will eventually impact our everyday lives. Its many applications are used today within many industries to increase revenues and profits. While some perceive A.I. as a threat to jobs, we will begin to see it being used as a tool that can enhance workers’ well-being. In fact, “blue collar” industries such as construction will see the value of harnessing the power of A.I. to improve workplace safety.
A.I.'s most significant benefits revolve around predicting outcomes. To build an A.I. prediction model, you start with the outcome you want to predict and use a machine learning process to analyze all the factors that lead to that result. Often, so many factors, some of which may appear entirely unrelated, contribute to an outcome that even the smartest human cannot accurately predict it.
Even very sophisticated computer programs struggle to do this. A.I.’s strength is in digesting millions of factors to predict the likelihood of an outcome, even if many of the factors appear to be unrelated. This process is akin to dumping a bunch of random unprepared meat and vegetables into a pan, putting it in the oven, then pulling out a gourmet casserole after 20 minutes.
Blue-collar industries, in particular, stand to benefit significantly from A.I. technology. It will likely be implemented in areas where these industries feel the most pain, specifically worker safety. According to the National Safety Council, accidents on construction job sites cost organizations $171 billion annually. They are difficult for everyone involved and often result in expensive work stoppages, fines, and lawsuits. Sometimes the factors that lead to the accidents are foreseeable (say, for example, an unsecured ladder).
However, there are many cases where unforeseeable and non-obvious factors contribute to an accident. These factors could include a worker's weak knee from a childhood accident, a slick rung on a ladder due to high humidity, the worker's weight, the glare of the sun, the fact that the worker isn't wearing sunglasses, the worker's lack of sleep from the night before, or a sudden, startling loud noise from a nearby detonation.
There are so many factors that it would be impossible for even the smartest supervisor to notice them and instruct the worker to be more careful during his fifth time climbing the ladder. However, if these factors were input into an A.I. model based on several simple survey questions for the worker, an analysis of the local weather forecast, and the planned activities on the job site, the probability that the worker would fall off of a ladder could be predicted. The supervisor could use this information to instruct the worker to be more careful, reassign the worker to another task, or limit the worker to climbing the ladder four times. All of the worker’s planned tasks for the day could be assessed for risk, and the plans could be modified to reduce the chance that the worker would be involved in an accident.
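A toy sketch of the kind of prediction model described above might look like the following: a handful of worker and job-site factors go in, and a probability of a ladder fall comes out. The features, the tiny made-up dataset, and the choice of logistic regression are all invented for illustration; a real system would need far more, and real, data.

```python
# Toy sketch of an accident-risk prediction model. The features, the made-up
# data, and the model choice are illustrative assumptions only.
from sklearn.linear_model import LogisticRegression

# Columns: hours slept, humidity (%), worker weight (kg),
# prior knee injury (0/1), planned ladder climbs today
X = [
    [8, 40, 75, 0, 2],
    [5, 85, 95, 1, 6],
    [7, 60, 82, 0, 3],
    [4, 90, 88, 1, 7],
    [6, 55, 70, 0, 4],
    [5, 80, 92, 1, 5],
]
y = [0, 1, 0, 1, 0, 1]  # 1 = a fall occurred on that day

model = LogisticRegression().fit(X, y)

todays_worker = [[5, 88, 90, 1, 5]]
risk = model.predict_proba(todays_worker)[0][1]
print(f"Estimated fall risk today: {risk:.0%}")  # high risk -> reassign or limit climbs
```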
A.I. could even be used during the job planning phase to assign workers to tasks with a lower risk of accidents based on their circumstances. BHP, a worldwide copper mining company, has put A.I. to use to improve workplace safety in one of their Chilean mines. They implemented “smart caps,” worn on the heads of their mining vehicle drivers, to analyze their brainwave patterns. The brainwave analysis uses A.I. to determine whether the driver is tired and in poor physical condition to operate a vehicle. They then use this data to coach or redirect their drivers to reduce the likelihood that they are involved in accidents.
As organizations realize the potential of A.I., they will put it to use to dramatically reduce accident risk, saving billions of dollars each year. Blue-collar industries will be significant adopters of the technology, not only because of the safety improvements but because they may be pressured to adopt the technology by the ultimate financial losers when there are accident claims, the insurers.
The insurance claims from the 2.6 million annual non-fatal workplace accidents eat into the insurers’ profits, resulting in significant premium increases for their clients. Because of these costs, the insurers will be the biggest proponents of using Artificial Intelligence in the workplace. In many cases, they may underwrite a portion of the cost of the technology or may require that a client purchases it as a prerequisite to being covered by a policy. They already have a history of implementing programs that leverage technology to improve driver safety.
For example, auto insurers have incentivized their clients to utilize telematics solutions for years to monitor their driving habits. The Drivewise program by Allstate monitors drivers for unsafe driving practices and rewards safe drivers in the form of lower insurance premiums. Nearly all major auto insurance carriers have similar programs and often subsidize the cost of the technology.
Workplace accidents significantly drain our economy, productivity, and collective well-being. Artificial Intelligence represents an excellent opportunity to solve this problem by predicting and preventing accidents. Not only will the employers see the value of A.I. in improving workplace safety, but the insurance carriers will likely use every method at their disposal to encourage the use of it by employers. This will result in fewer financial losses and healthier workers.
| 2023-02-21T00:00:00 |
2023/02/21
|
https://solutionsreview.com/enterprise-resource-planning/how-artificial-intelligence-will-improve-workplace-safety/
|
[
{
"date": "2023/02/21",
"position": 56,
"query": "AI job losses"
}
] |
Write an essay about artificial intelligence is threat to ...
|
Answers to: Write an essay about artificial intelligence is threat to human jobs and job security -argumentative essay
|
https://www.classace.io
|
[] |
As AI continues to improve, it is likely that these jobs will also be at risk, as AI can perform these tasks more efficiently and accurately than humans.
|
Artificial intelligence (AI) has undoubtedly made huge advancements in recent years, with the potential to revolutionize many industries. However, there is a growing concern that this technological advancement poses a threat to human jobs and job security. In this essay, I will argue that AI is indeed a significant threat to human jobs and job security.
First and foremost, AI has the potential to automate many routine and repetitive tasks that are currently performed by humans. This includes tasks in industries such as manufacturing, customer service, and transportation. As AI becomes more advanced, it is likely that many of these jobs will be replaced by machines, leaving millions of workers without a source of income.
Additionally, AI has the potential to outperform humans in tasks that require complex decision-making and problem-solving skills. This includes jobs in fields such as finance, healthcare, and law. As AI continues to improve, it is likely that these jobs will also be at risk, as AI can perform these tasks more efficiently and accurately than humans.
Furthermore, the rise of AI could lead to the consolidation of wealth and power in the hands of a few corporations and individuals who control the technology. This could lead to a situation where a small elite has a monopoly on the economy, leaving the majority of the population without access to jobs and income.
Moreover, the rapid advancement of AI could lead to a situation where there are simply not enough new jobs being created to replace the ones that are being lost. This could lead to a rise in unemployment and underemployment, which in turn could lead to social unrest and economic instability.
In conclusion, it is clear that AI poses a significant threat to human jobs and job security. It is important that we take this threat seriously and develop policies and strategies to mitigate its impact. This may include investing in education and training for new types of jobs that are less susceptible to automation, and implementing policies to ensure that the benefits of AI are shared equitably among the population. Failure to address this issue could lead to widespread job losses and economic instability, with far-reaching consequences for society as a whole.
| 2023-02-21T00:00:00 |
https://www.classace.io/answers/write-an-essay-about-artificial-intelligence-is-threat-to-human-jobs-and-job-security-argumentative-essay
|
[
{
"date": "2023/02/21",
"position": 81,
"query": "AI job losses"
}
] |
|
Artificial Intelligence (AI) and Digital Healthcare ...
|
Artificial Intelligence (AI) and Digital Healthcare Technologies Capability Framework published today
|
https://digital-transformation.hee.nhs.uk
|
[] |
The new AI capability framework, published on 21 February, addresses the need for our health and care workforce to continually adapt their skills and ...
|
Adrian Brooke: It's 12:45, and thank you all who have logged on for coming; we hope you will stay with us for the whole time. Can we ask that all people who are joining the webinar have their cameras turned off and their microphones on mute. The session will be recorded. This is supposed to be a kind of a chat, so if you'd like to ask questions, it's really helpful if you can put them in the chat window, which you can see if you click on the series of icons at the top of the screen, the very first icon in the middle of the Teams screen. There's chat. If you click that, a window will open. I'll even do it now. A little column appears in black on the right-hand side of the screen, and then you can write your question there and post it using the little paper aeroplane icon in the bottom right-hand corner of the screen, and then we'll know what question you've asked and we can pose that to the panel that we've got. So thank you all for coming. Can I start by asking all our panel members to just briefly introduce yourself, where you work, and what your job title is. I will demonstrate by saying my name is Adrian Brooke, I'm medical director at Health Education England, my background clinically was in paediatrics, and I'm an interested dinosaur from the pre-digital age in this area. So can I go over to you, Sonia, next please.
Sonia Kumar: Hello everyone lovely to be here. My name is Sonia Kumar. I'm a GP by background professor of medical education and I'm an associate Dean at the University of Leeds.
Adrian Brooke: Thanks, Sonia. And can we then move on to Alan next please?
Alan Davies: Hi everyone. I'm Doctor Alan Davies. I'm a senior lecturer at the University of Manchester and my background in nursing and computer science.
Adrian Brooke: Excellent and last but by no means least, Hatim.
Hatim Abdulhussein: Thanks Adrian. So my name is Hatim Abdulhussein. I'm a GP in northwest London and Health Education England's national lead for AI and digital workforce.
Adrian Brooke: Thank you. So what we're going to do is just very briefly introduce this session and set the scene. What I really want to do here is remind everyone that the AI and Digital Healthcare Technologies Capability Framework was published this morning at 10:00 AM and is available on the NHS Digital Academy website. The report and the framework build on the findings and recommendations made in the Topol review, which came out in 2019 and was entitled 'Preparing the healthcare workforce to deliver the digital future'. That outlined a set of recommendations for preparing the NHS workforce to become world leaders in utilising digital technologies to the benefit of our patients, of course. Now we know clinical teams in the near future will be required to use AI, artificial intelligence, and other digital health technologies effectively and equitably, really for the benefit of all. And that's starting now; this is not something for the distant future, this is occurring as we speak. In response to this need, Health Education England (HEE, as it's abbreviated), an arm's length body, commissioned the University of Manchester to undertake a learning needs assessment and create a capability framework, to aid learning and development of our healthcare workforce. Now the framework aims to help healthcare workers identify gaps in their current knowledge and areas for preparatory activities to support digital transformation of the workforce, as well as their own individual learning. And these capabilities that we've just published build on the foundational digital literacy capabilities first introduced in the Health and Care Digital Capabilities Framework.
Adrian Brooke: So the AI and digital healthcare framework extends this with capabilities around the use of health data and the technologies that make use of this data, for example applications on your mobile phone or computer, wearable technologies, software and programmes, etcetera. And this is further extended with more advanced capabilities like artificial intelligence and, of course, the advent of robotics. The capabilities range across the whole spectrum from initial awareness through to implementing these technologies in a healthcare environment and supporting digital transformation projects. So I'm going to shut up now, which I think is probably welcome for everyone, and perhaps I'll turn to Hatim and Alan to present the framework. Thank you.
Hatim Abdulhussein: Thanks, Adrian. I'm just bringing up the slides; hopefully we can all see them, give me a second. OK, everyone can see the slides, thanks for joining us. So Alan and myself are just going to go through the methodology behind the framework and a brief overview of what the framework includes. When I was reflecting on what I was going to say today, I was looking back to when I started my GP training as a registrar and started my first placement as a GP in a practice in Hayes. I was doing majority face-to-face consultations and practising in a way that seemed very familiar to me; going back, only about two or three percent of my consultations were even telephone based, so the majority were with the patient in front of me. I then went into accident and emergency as a trainee and we hit the first wave of the pandemic, and being in accident and emergency I noticed certain things. I noticed how suddenly our nursing staff were collecting observations on a device and were inputting them into a...
Adrian Brooke: I'm hearing people are struggling to see the slides, so we've got messages in the chat that say they can't see the slides.
Hatim Abdulhussein: Let me see if I can share that as a candidate.
Adrian Brooke: That's a few.
Hatim Abdulhussein: Give me a second.
Adrian Brooke: So if you go into presenter mode and all of us disappear, that might be easier. Oh. Can someone message to say whether the slides are showing now? Yeah, we've got some. Yes. OK, thank you. That's really helpful.
Hatim Abdulhussein: Wonderful. So, being in A&E, I noticed that things were changing. Nursing staff were recording observations on a system and we were having to access that system to be able to look at observations, and we had new healthcare records that we were using in that emergency department. I remember going in one day in paediatric A&E and being told we've got this new system in place, and not really being shown how to use it, and not really having had the time to familiarise myself with the system. I then went into general practice and noticed the whole world had changed. When I logged into my system, I suddenly had widgets on the screen that allowed me to text and receive messages from patients, and suddenly when I was looking at appointments I had something called an e-consultation, where people were giving me information beforehand that I then had to act upon, thinking about what I was going to do going forward. All of a sudden I was doing about 50 to 60% of my consultations either via the telephone or, in some cases, even by video consultation. And I reflected on, one, how we got to this stage, but two, whether I felt that I had the best skills in place to be able to work in this new way. I was also at a point where I was preparing for my general practice exams, and a key part of my exams was to record myself consulting with patients, and a lot of these consultations were over telephone and video. Now that became an opportunity for me. It allowed me to really analyse the way I consult with patients and to reflect with my educational supervisor on how best to do that, and on what kind of mitigation I needed to take when consulting with a patient over a video call rather than over the phone, or versus a face-to-face consultation. And so when I came into my role at Health Education England, it was very important for me to think about how we do our best to help people understand what they need to know to be able to work with the types of technologies we interact with patients through. And so that's really the key context behind why this is important. It will enable people working in health and care to understand the types of skills they need to have when interacting with patients and using technology. Passing over to Alan, and I'll move the slides along.
Alan Davies: Thanks. So I'm just going to talk very briefly about the methods we used to create the framework. We used an iterative mixed-methods approach that involved co-design as well. This involved carrying out a systematic literature review to look at the academic side of things and where the different gaps were, a series of workshops which we did online, and then that was followed up by a digital survey as well. Next slide please. So the systematic literature review was really used to generate some initial concepts, and this was carried out by Health Education England's knowledge and management team. As well as the academic literature we looked at the grey literature, so existing frameworks, international frameworks and other relevant policies and documents, and we used this to generate a set of groupings of topics, themes and concepts. Looking at the different things that were coming up constantly in the literature and that seemed to be important, we grouped these together roughly into what we call a concept map, and that acted as the basis for the workshops, to give people a starting point so they could look at the kinds of technologies we were talking about under those different main areas and spark the debate, really. If you move to the next slide. So we carried out the workshops online, it was during the pandemic, and we used a thing called Miro, which is an interactive board that allows multiple users to work at the same time on the same page, basically. We also put people into breakout rooms, and the series of workshops targeted different stakeholder groups. The first one was really around people like the Topol fellows and NHS clinical entrepreneurs; in the second group we had industry representatives, so this was Babylon Health, Google Health, Barclays and Bupa; and the final workshop was focused around subject matter experts. So we used each of these three workshops to use the topics to spark discussion and consider what the different capabilities might involve, and then we were able to rank these in order of importance and complexity. Next slide. We used something called the nominal group technique for the workshops, and this is quite a useful technique when you've got people that aren't familiar with each other, or you might have power dynamic imbalances. Essentially you've got this nominal phase where you privately consider the information, and we did this offline prior to the workshops, and then we have an item generation phase. This is all around ideation, so people come up with ideas without being interrupted by others, and we captured that on Post-it notes on the Miro board. Then you go back around to the clarification and discussion, where you can probe into the different ideas and ask people to explain them. And then finally there's a voting stage where you're able to order the priority of the different items. So we used this to generate a draft version of the framework. Next slide, please. We then sent that draft framework out via a survey for wider participation so we could get more people to give us feedback. We took that feedback on board and then constructed the final version of the framework that you can see in the report. Next slide please.
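As a purely illustrative aside, the final voting stage of a nominal group technique exercise boils down to tallying ranked preferences. The sketch below shows one simple way this could be done; the topic names and the Borda-count scoring scheme are invented for illustration and are not taken from the actual workshops.

```python
from collections import defaultdict

# Hypothetical ranked ballots from workshop participants: each list orders
# candidate capability topics from most to least important.
ballots = [
    ["AI", "Health data management", "Human factors", "Ethics and regulation"],
    ["Health data management", "Ethics and regulation", "AI", "Human factors"],
    ["AI", "Ethics and regulation", "Health data management", "Human factors"],
]

def borda_rank(ballots):
    """Tally ranked ballots with a simple Borda count: the top rank earns the
    most points, and the totals give an overall priority order."""
    scores = defaultdict(int)
    for ballot in ballots:
        n = len(ballot)
        for position, item in enumerate(ballot):
            scores[item] += n - position  # 1st place gets n points, last gets 1
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

if __name__ == "__main__":
    for item, score in borda_rank(ballots):
        print(f"{score:2d}  {item}")
```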
So the framework, as I mentioned before, is built on top of the original digital literacy framework, and that forms the foundation. On top of that we've got a lot of skills around data, because obviously a lot of these advanced technologies, wearables, AI, machine learning, are all built on an understanding and use of data. On top of that we've then got those technologies, and at the higher end we've got things like artificial intelligence and robotics. So it's built up in that way, basically. And it straddles the space between the original digital literacy framework, which is very much around basic digital competencies, you know, can you switch your machine on, send emails and do all these fundamental digital things, and, at the other end, the specialist frameworks for specialist groups like informaticians. This framework very much straddles that space in between the expert frameworks and the very fundamental digital literacies. Next slide, please. The other problem we had here is how do you make a framework when you've got so many different roles in the NHS, so many different types of workers in the NHS workforce? It would be quite a challenge to map these capabilities onto all those different working groups, and the other problem is that some of these working groups will have different roles. You might be a clinical nurse, but you might also be involved in informatics projects, for example, so you might wear multiple hats. To get around this, we used archetypes instead. Essentially we map the capabilities onto archetypes, and then people can self-identify which archetype or archetypes they belong to, or their managers can do this as well. The archetypes include things like shapers, so this can be people in leadership positions or in arm's length bodies; drivers, so this can be your CIOs and CCIOs; creators, so these are the people actually creating some of this stuff, engineers and data scientists; and then we've got embedders, so these are the people actually embedding some of these things into the various systems, IT teams and so forth. And then we've got the users as well, the people actually using the technologies. And it is possible that you can come under one or more of these different archetypes at different points. Next slide please. We also used something called Bloom's digital taxonomy. For any educators out there, you're probably quite familiar with Bloom's; it's quite a popular framework that's often used in education, and this is a digital version of it. We mapped all of the different capability statements onto Bloom's taxonomy as well, and it really involves moving from lower-order thinking skills through to higher-order thinking skills: at the lower end you've got things like remembering and basic understanding, moving through to application, analysing, evaluating and then creating. So we used Bloom's taxonomy across the framework and through the different sections as well. Next slide please. So the framework itself is split into a number of key domains, and these domains include things like digital implementation; digital health for patients and the public; ethical, legal and regulatory considerations; human factors; health data management; and artificial intelligence. Next slide please.
And a number of these domains also have sub-domains, so you can see there, for example, that they break down further. AI includes things like robotics; we've got things like management and leadership under human factors; and ethics and regulation under the legal issues, and so forth. Inside each of these we've got a number of individual capability statements. Next slide please. So under each of these domains and sub-domains we've got a number of statements split into four levels. They're split into four levels to make this compatible with the original digital literacy framework, so it's a familiar structure, and the levels really just infer increasing complexity or difficulty, so level one is going to be easier than level four. Then within each of these levels you've got the actual capability statements themselves, and these are mapped onto those different archetypes that you can see at the bottom there. So that's a quick whistle-stop tour through how we designed the framework and what the framework consists of, and I'll pass back to Hatim.
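To make the structure just described more concrete, here is a minimal, purely illustrative sketch of how the framework's shape (domains, levels 1 to 4, and capability statements mapped to archetypes) could be represented and queried in code. The domain and archetype names follow the transcript, but the capability statements and the helper function are invented for illustration and are not taken from the published framework.

```python
from dataclasses import dataclass

ARCHETYPES = {"shaper", "driver", "creator", "embedder", "user"}

@dataclass
class Capability:
    domain: str        # e.g. "Artificial intelligence"
    level: int         # 1 (awareness) .. 4 (most advanced), as in the framework
    statement: str     # the capability statement itself (invented examples below)
    archetypes: set    # which archetypes the statement applies to

# Illustrative entries only; the real statements live in the published framework.
framework = [
    Capability("Health data management", 1,
               "Describe common sources of health data", {"user", "embedder"}),
    Capability("Artificial intelligence", 2,
               "Explain the difference between rule-based and learned models",
               {"user", "driver", "creator"}),
    Capability("Artificial intelligence", 4,
               "Evaluate an AI model's performance and limitations before deployment",
               {"creator", "driver"}),
]

def statements_for(archetype: str, max_level: int):
    """Return the capability statements relevant to one archetype,
    up to a chosen level of complexity."""
    assert archetype in ARCHETYPES
    return [c for c in framework
            if archetype in c.archetypes and c.level <= max_level]

if __name__ == "__main__":
    for c in statements_for("driver", max_level=2):
        print(f"[{c.domain} / level {c.level}] {c.statement}")
```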
Hatim Abdulhussein: Thanks, Alan. Our key message here is: great, we've got a framework, we've got an idea at a very early stage of what these capabilities might be, but how do we make sure that, one, it is sustainable and, two, we get the impact we need, in terms of me being in my clinic room as a GP having the skills I would need to work with the technologies I'm interacting with. So the first thing to say is that technology is fast adapting, and in our framework we've done our best to make sure that we're technology agnostic, but we need to make sure that we stay alive to advancements and developments in this area. So we're going to be doing some work to make sure we have a mechanism in place to continue to review and refresh the capabilities within the framework, as well as building new areas as things emerge in policy and in healthcare. The second part is that we want to empower individual learners to be able to use this framework, so it's about embedding it into existing Health Education England learning platforms or tools, such as the Learning Hub, so that individuals can really measure their own learning and their own aspirations for where they want to get to, and that will then drive them forward in terms of what kind of skills they develop based on the material out there. And then the final part is to make sure we're working with the educational bodies, people like Sonia, who is working within a higher education institution, or our royal colleges, or our professional regulators, to support the educational reform that we need as a result of the learning we have developed over the past year and a half in developing this framework, so that when I am entering my GP training I have it quite clearly within my remit to develop these skills naturally within the capabilities that I need to build as part of becoming a GP. I hope that's been a helpful overview of the framework, and I'll pass it back now over to Adrian for the discussion.
Adrian Brooke: Thanks Hatim, and thanks Alan, for a kind of lightning tour through the rationale, background, development and deployment of the framework. So thank you very much. What we'd like to move on to now is a discussion on how we can implement that framework in undergraduate and postgraduate training. So I'm going to turn to Hatim, Alan and Sonia. I mean, we have got this funny triad between, if you like, the individual, the framework, the places individuals are in, for example postgraduate or undergraduate courses, and we've got the changing landscape as well. So we've got lots of moving targets. And of course we've got a regulatory framework to navigate too, because healthcare is a highly regulated field for obvious and very good reasons, which may not always be quite as adaptive, I would imagine. I don't know if anyone would like to comment on some of the difficulties that throws up, maybe around assessments and so on.
Hatim Abdulhussein: I can go first. When we kicked off this piece of work, I think we made it very clear at the start that we needed to be engaging with educational bodies right from the beginning, to help them understand, one, why we're doing this and, two, how they might use the product at the end. One early example of where that's been in effect is the British Institute of Radiology. We did a piece of work in January of last year that looked at AI and data-driven technologies in the NHS and which workforce groups they affect; at the top of that tree we saw radiology and radiologists, and near the top healthcare scientists as well. And so, in further conversations off the back of that with the British Institute of Radiology, we were able to say, well, look, this is going to be really important for your membership; it's going to be really important for those working in the professional groups you're responsible for. What can we do to enable the learning that these groups need to be able to work with these technologies? And we've got a webinar series and learning materials being developed by the British Institute of Radiology, launching at their AI Annual Congress in a few months' time. So the key is to find the bodies that really value the importance of this and are looking to work with us to build some of that proof of value for the learning in this space.
Adrian Brooke: Thank you. So it sounds like some colleges are acknowledging this, and, you know, sometimes we say in education that assessment drives learning, and therefore, if you're going to be asked about it in the exam, that's quite a powerful driver. Clearly, a lot of the workforce is not in training or education but is post-training, as it were, in service roles, yet still needs to know; so the, if you like, examination pressure to make you learn is slightly less urgent there. But I'm just wondering about, for example, things like finals for undergraduates. What's the inclusion of the digital agenda in that, and how might this framework relate to it? Sonia, can you? I know you're a GP, but I'm thinking of things like licensing exams and stuff.
Sonia Kumar: Yeah.
Adrian Brooke: Like that so.
Sonia Kumar: Yeah, I've been involved in medical education for quite a number of years. I have to say, first off, I'm really excited by this because it's a very clear outline of the domains that you need to consider with digital health technologies. But I think equally I'm also quite worried about how the health service is moving at breakneck speed in adopting digital technologies, and indeed how society is as well. We all know that there's Google, there's wearables, there's apps; digital health is part of our everyday lives. But yet when you look at the training needs and how it's being integrated into undergraduate curricula, and that's across the health professions, and into postgraduate curricula, you do start to think that actually digital health, at best, is sometimes mentioned; it isn't a strong theme. I think one of the really beautiful ways of highlighting this is the medical licensing exam, which comes in for medical students in 2024 and doesn't really mention digital health, even though it does have an area around capabilities. I did a bit of a look yesterday, putting in words like technology, digital, remote consulting, anything that could encapsulate what we're talking about today, and it just isn't reflected. And that's new; it hasn't even been launched yet, it's coming out in 2024. So that disconnect, between what society is moving ahead with, what the NHS and HEE are moving ahead with, and how educational bodies, undergraduate and postgraduate, are somehow lagging behind, I think will be a problem not only for dissemination of this framework; actually the bigger question is how we are supporting our patients. Rather like you, Hatim, I remember a patient coming in with their genome profile, and I had a student in with me; I was totally out of my depth and had to counsel the patient about their risk for various conditions. So not only is there a training need for our pipeline, our students, there's a huge training need for our trainers. Who is going to be teaching our students all of these six domains around digital health? So I don't want to use the word emergency, but I do think there is a digital health emergency that we need to address.
Adrian Brooke: Thank you so much. That's a really powerful call to action, isn't it, that we need to catch up across the system. Maybe it reflects a wider societal issue, where we've got the inexorable and ever-quickening march of technology and across society we struggle to keep up and are playing catch-up with it; and, if you like, this is one aspect of that within medical education, or healthcare education and practice. So I think that's a really powerful observation. And we have got this strange situation, have we not, and I'd be interested to hear people's comments on this, that everything is moving really quite rapidly. Normally a lot of healthcare knowledge and understanding is held behind a bit of a mystic shroud of learning, isn't it? We've got this aspect of the doubling time of medical knowledge: it used to be 50 years, then 25 years, then ten years, and I think it's currently at about 70 days and shrinking. But technology is often released in a commercial setting first and then adapted for healthcare, rather than the other way around; so actually, if you like, our public are way ahead of us in terms of their use, and often their sophistication, certainly for some parts of the population. So I think that's another challenge. How do you think the framework will help our healthcare workforce, if you like, map their progress and their learning journey in a way that equips them to meet that challenge? Perhaps I'll ask Alan, because you described the construction of the framework.
Alan Davies: Thanks. Actually, I think it has the potential to do that, definitely. We've certainly put parts of the framework in and mapped them against our new clinical data science programme, for example, so we're trying to embed these things in some of the postgraduate work that we're doing. We've also got a lot of work going on in nursing at the University of Manchester, particular modules and courses. A lot of them tend to be postgraduate focused, though, because there's a lot of crowded curriculum in these medical professions; in medicine and nursing they're always putting more and more things in, and obviously digital is very important, but often we're seeing that it's maybe not getting the attention it deserves. Some people are also trying to embed it into normal practice and put it into other units, which I think is a good idea; it's another way to embed some of the digital content. So we're seeing more and more adoption of these things, and if it can be incorporated into other modules, and into interdisciplinary learning when we're working with other professional groups, because that's what happens in reality and you're going to use this technology a lot to communicate with other groups and other departments, then really we can start embedding this early on in the undergraduate and postgraduate curriculum. So I think having that framework, and the ability for people to look at it and see what those requirements might be, certainly gives educators something they can start to work with and start to make sure that they're including some of those main elements.
Adrian Brooke: So early adoption, dissemination and uptake are key themes, I think, coming out of your answer. And how might you see that, for example, Hatim or Sonia, in GP training, given that that's both your clinical backgrounds? Is that something you've seen or heard? I don't want to make presumptions, but I suspect Hatim is slightly closer to training than Sonia, though maybe not by very much. From your point of view, Hatim, have you seen that, or Sonia, have you seen that in practice?
Hatim Abdulhussein: So my reflection from my training is that I think we're on a journey here, similar to the one we were on with leadership and with quality improvement, which are areas that have fallen quite naturally into the GP portfolio and the GP workplace-based assessments. When I was training, whilst there wasn't a specific section around digital, I made sure that I did a lot of reflection on the way that I used digital tools in the way I interacted with my patients, and I spent a lot of time thinking about, well, actually, did that make things better? Did that help the case, or did that actually make things worse? Was that the right modality to choose to communicate with that patient, and was I excluding them from care by using that modality? I reflected on these things quite naturally, purely because it was important to me, but I think we do need to create the conditions within the portfolio to support people to do that reflection and to understand it better, because ultimately this is all about safe, ethical patient care, and to deliver safe, ethical patient care you need to be competent in working with the tools that you're working with and to understand their strengths and their limitations.
Sonia Kumar: I suppose just adding from an undergraduate perspective: the evidence around how you teach digital health, how you actually embed this in curricula, and I wrote a paper around this, is quite sparse. So I think there's a real gap there: how do we actually get this information, and the skills and values around digital inclusion, out to our students? Clearly PowerPoint is not going to do it; teaching our students all of this in lectures isn't going to do it. One thought that I have, and this is bringing in my experience around community-based learning and being a GP, and also building on what you said, Hatim, around quality improvement, is this. Previous to my role at the University of Leeds I was at Imperial College London for 10 years, and one thing we did there is that year three medical students did what we call Community Action Projects, and we focused these around hot topics such as COVID vaccine hesitancy. So one thought is that students could do quality improvement projects with communities, in which they learn the knowledge base and the skills base around digital health, but they do that through working with communities and upskilling those communities in digital literacy. So you have a double win there: not only are the students learning, they're learning through working with communities. Because I do think we need to think about training needs not just for our healthcare professionals, but also about the gaps for our patients. We need to empower them, so that when they come in with information they've been able to do a little appraisal of it themselves, and so that they're not spending huge amounts of money and time on digital health technologies that may not be best for their health.
Adrian Brooke: Thank you, Sonia. That's a really helpful insight, actually. So we do have a question, and thank you for the questions and comments which are coming through on the chat. We have a question from Jane Daily, sent at 12:29, for members of the panel who want to look at it before I fire it out to you. It says: digital first will only be integrated or embedded if workforce contracts and the rewards and recognition system are revolutionised. How does this align with critical and strategic workforce planning? I've got a horrible feeling that, having read that out, it might come back my way, but does anyone want to start off with a response?
Hatim Abdulhussein: Yeah, I can kick us off, and actually it would be great to have your views on that as well. I guess the way I see this is that there's a particular group we haven't necessarily focused on just now: we've been talking about undergraduate and postgraduate training, but actually there's a whole group around continuing professional development who are out there working in the NHS and who will equally need to have these skills looked at, and to be supported to keep their skills up to date or to develop their skills where gaps lie. And I think the key here is the culture of an organisation and top-down leadership, in terms of saying it is important to develop the skills in this area, and making sure some of these things are built into annual appraisals, so that at some point you can look at your digital literacy using something like the digital self-assessment tool that we're developing at Health Education England and piloting in the north of the country, to be able to say where am I right now and where do I need to go, and have a really open and frank conversation with your line manager about how you then develop those skills and why it's important that you develop them. If you have all of that happening naturally within an organisation, you're going to be more digitally mature as an organisation, and so it's important that we work with providers to enable that.
Adrian Brooke: Thanks, Hatim. And I think workforce planning is really quite a complex thing, isn't it, because a lot of planning is short, medium and long term, and some of that planning, long-term planning, assumes or, if you like, has foresight that there will be a great deal of digital technological change, and yet it can't be exact in articulating exactly what that would look like or how it affects, if you like, what often drives workforce planning cycles, such as productivity, your workforce requirements or your learning requirements. So it becomes quite an inexact science at that stage, and as we know, current progress is not always a good predictor of future growth in the area. You'd think it should be quite easy; it's actually very, very difficult to do accurately beyond very broad assumptions. I think that's one of the issues, so it's a really good question, highlighting some of the difficulties in trying to do that. I think reward is a really useful local example of how you can reward your workforce for training and for pursuing that knowledge, competency and capability journey for digital. And we know, for example, there are areas of practice, take the example of diagnosis of stroke, where AI technologies for imaging are used to diagnose strokes which are amenable to intervention, I think that's my understanding of the technology being used. That's grown from about 10% uptake a couple of years ago to about 70% of units now using it. So there's rapid growth that would have been quite hard to predict, and it's incredibly welcome. Some of the other technological advancements, which require greater interplay with, if you like, individual skill, might take a bit longer, and of course need guarantees and regulation, because you don't want to be doing robotic surgery on people if you're not properly qualified to undertake that procedure, for example; a very simplistic view. So I think, Alan, you have your hand up, so please do come in.
Alan Davies: So I think Hatim was first.
Hatim Abdulhussein: No, that's fine, I took my hand off. Go for it.
Alan Davies: I was just going to say, another thing that I think is quite important with this is that we talk a lot about digital literacy, but as the technologies get more advanced they're often closely associated with data, so there's this concept of data literacy as well: if you're not putting the right data into it, or not doing that in the right way, obviously what you get out of it can be affected. So another key thing is having access to data for people to learn from, and to learn how to use data. It's not just the tools that we need to teach people, it's the data that goes into the tools and how that's collected and maintained as well. And we often have trouble in academia getting access to real data sets to train people on, so we're looking at things like synthetic data, and using things like electronic health record systems with fake data. But again, the sooner we can get people using some of these tools and the data that's associated with them, and getting them comfortable with using data, the more that's going to help in this area, I think.
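As an aside, the kind of synthetic teaching data Alan mentions can be very simple. Below is a minimal, purely illustrative sketch of generating fake vital-signs records for training purposes; the field names and value ranges are invented for illustration and are not drawn from any NHS system or clinical guidance.

```python
import csv
import random

# Illustrative field names and loosely plausible adult ranges (assumptions only);
# a real teaching dataset would be richer and more carefully designed than this.
FIELDS = ["patient_id", "heart_rate", "systolic_bp", "temperature_c", "spo2"]

def synthetic_record(patient_id: int) -> dict:
    """Generate one fake observation row with loosely realistic values."""
    return {
        "patient_id": f"SYN{patient_id:04d}",
        "heart_rate": random.randint(55, 110),
        "systolic_bp": random.randint(95, 160),
        "temperature_c": round(random.uniform(36.0, 38.5), 1),
        "spo2": random.randint(92, 100),
    }

def write_synthetic_csv(path: str, n_rows: int = 100) -> None:
    """Write a small synthetic dataset that students can safely explore."""
    with open(path, "w", newline="") as handle:
        writer = csv.DictWriter(handle, fieldnames=FIELDS)
        writer.writeheader()
        for i in range(n_rows):
            writer.writerow(synthetic_record(i))

if __name__ == "__main__":
    write_synthetic_csv("synthetic_observations.csv")
```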
Adrian Brooke: Right, so there's a really good question from Catherine Worley at 12:34, for the timestamp-aware amongst the panel, which says: do we need to upskill the trainers first? You can't teach effectively something you don't understand yourself. Which is a fantastic question, Catherine, and incredibly true. So who would like to have a go at answering that? Hatim, I saw the ghost of a nod there, so that means it's you.
Hatim Abdulhussein: No, I'd love to hear Sonia's opinion on this, as a senior educator: what are your thoughts in terms of, one, I think, and I'm hoping, we're all going to say yes, and two, how do we then do it?
Sonia Kumar: Well, I suppose just turning this on its head. Pre-COVID, I think 2017 or 2018, or maybe it was around 2019, around the time of the Topol review, I set up a module for medical students called Digital Health Futures, and looking back it wasn't particularly forward thinking, but it was on the basis of the Topol review, which was where we really started to embed some of this learning for medical students. And what became apparent, exactly as you say, Catherine, is that none of us really knew as much as the students. So that really is where the light bulb hit: actually, I do wonder whether it is the new generation, our students, who will be upskilling us. Obviously we don't want to do complete reverse teaching, but I do think there is something about co-creating any curricular changes with our students. They are so completely savvy, not only with the digital tech on their smartphones; there are also a lot of students who are really, really excited about digital health and know a lot about it. And so when we ran this module, the students were absolutely teaching us, and when we developed the module and presented some of our work at conferences, we were very much working alongside students. So I think the how has to be with the new generation, who have been brought up with digital education and digital health.
Adrian Brooke: Great. And of course there's nothing to stop our educators from using the same capability and competency framework themselves, to plot their own journey and make sure they're teaching to the right level; as you say, you've got to understand a subject properly to be able to teach it well. I think that's well observed. So we're just coming to the end, we've got the last couple of minutes, and there's one very quick question which I'm hoping there will be a really quick answer to. I think it asks about the digital literacy assessment, from Carrie O'Reilly: is this available for wider use?
Hatim Abdulhussein: So yeah, to my understanding the digital self-assessment tool is currently being piloted and is not open for wider use as of yet, but hopefully it will be soon, and I'll share a link in the chat to the website so that people can stay updated on its progress.
Adrian Brooke: That's brilliant, fantastic. There are lots of really interesting and insightful comments on the chat, and I can reassure you, as we approach the last minute of the webinar because I think we log off at 12:45, that a recording of the webinar will be made available on the Digital Academy web pages, and we'll add a link in the chat, which I hope will come soon. There's also a Twitter channel, at NHS Digital Academy, so this conversation and the developments can be followed there, and I hope the link can get posted into that as well. So I would really, really like to thank our panel today. I'd like to thank Sonia for her input and development of this; I'd like to thank Alan similarly, thank you so much; and I'd like to thank Hatim for helping coordinate and drive a lot of this in HEE. Thank you so much. I'd also like to thank Beth and Emily, who you won't see on this webinar, but basically without their abilities to organise and corral the four of us into a room, albeit virtually, none of this would happen. Thank you so much for listening and tuning in. I hope we'll have further conversations and look forward to you all joining us in the future. Thank you. Good afternoon.
| 2023-02-21T00:00:00 |
https://digital-transformation.hee.nhs.uk/news/artificial-intelligence-and-digital-healthcare-technologies-capability-framework-published-today
|
[
{
"date": "2023/02/21",
"position": 81,
"query": "workplace AI adoption"
}
] |
|
Swedish Labour Market Analysis: Vacancies Stagnate
|
Swedish Labour Market Analysis: Vacancies Stagnate
|
https://magnitglobal.com
|
[] |
There has continued to be wage and benefit increases, but according to ONS data there has been little growth in the past year in terms of vacancies. Magnit ...
|
High inflation rates coupled with record low unemployment rates currently characterise a mixed picture for European labour markets as we start a new year.
In Sweden the annual inflation rate soared to 11.5% in November 2022, the highest since February 1991, following a 10.9% surge in October. With the unemployment rate falling but still above the EU average, the outlook seems challenging in Sweden.
With global financial markets in turmoil over rising interest rates, an energy price crisis hitting Europe hard, as well as global supply chain shortages, record inflationary pressure and the escalation of the war in Ukraine, Sweden is not alone in feeling the pressure. The challenges on the labour market have been constant and diverse over the past year.
Let’s look deeper at the situation in Sweden.
| 2023-02-21T00:00:00 |
https://magnitglobal.com/us/en/resources/knowledge-center/blog/swedish-labor-market-analysis.html
|
[
{
"date": "2023/02/21",
"position": 51,
"query": "AI labor market trends"
}
] |
|
AI Orchestration Market Size & Analysis Report 2022-2028
|
AI Orchestration Market Size & Analysis Report 2022-2028
|
https://www.kbvresearch.com
|
[] |
The Global AI Orchestration Market size is expected to reach $17.1 billion by 2028, rising at a market growth of 20.0% CAGR during the forecast period. Th.
|
Analysis of Market Size & Trends
The Global AI Orchestration Market size is expected to reach $17.1 billion by 2028, rising at a market growth of 20.0% CAGR during the forecast period.
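As a quick sanity check on these headline figures, the compound annual growth rate implied by growing from the 2021 base of USD 4.9 billion to USD 17.1 billion in 2028 (seven compounding years) can be computed as below; this is a back-of-the-envelope verification added for illustration, not part of the original report.

```python
# Verify that USD 4.9B (2021) -> USD 17.1B (2028) is consistent with ~20% CAGR.
base_value = 4.9      # market size in 2021, USD billion
final_value = 17.1    # forecast market size in 2028, USD billion
years = 2028 - 2021   # seven compounding periods

cagr = (final_value / base_value) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")   # roughly 19.6%, i.e. about 20%
```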
AI orchestration is the management of the tools, procedures, data, and people associated with AI applications inside an organization, in such a way that AI becomes a regular component of operations rather than remaining a research or experimental effort. AI orchestration helps achieve the well-defined, predefined "better" result that was established at the outset.
Enterprises may operationalize artificial intelligence (AI) with the aid of an AI orchestration system, allowing scalability and growth. To improve how businesses explore and evaluate data, enabling technologies like machine learning and AI help with data preparation, modeling and insight generation, deployment, and insight explanation.
Additionally, it provides access to a machine learning environment that automates every phase of the data life cycle, including the creation of automated models using simple processes, redefining corporate AI. It also enables the rapid creation and management of AI models without the need to write a single line of code. By enabling explainable AI capabilities that provide transparency and interpretability, it helps to democratize AI, making it accessible to everyone within the company and helping to eradicate bias.
Additionally, AI orchestration aids in the visualization, analysis, and consumption of complicated corporate data. It automatically visualizes the data and unearths essential and hidden insights. It also generates reports, dashboards, and infographics to help users comprehend and communicate business findings. It obtains deep insights from streaming real-time data and uses visualization to absorb business information. Additionally, AI-driven decision analytics boosts innovation and development across enterprises while reducing infrastructure administration time and increasing efficiency.
COVID-19 Impact Analysis
The pandemic presented significant hurdles to the industrial sector, including interruptions to the supply chain, a decline in workforce availability, and a lack of raw materials. Because most workers now work remotely, the COVID-19 pandemic has altered how companies function, forcing organizations and corporations to develop new procedures to reduce network complexity and improve security. As a result, manufacturers were forced to depend on the strength of their data and analytics to remain competitive and advance innovation. Such elements contributed to the industrial sector's demand for AI orchestration solutions.
The leading players in the market are competing with diverse innovative offerings to remain competitive. The illustration below shows the percentage of revenue shared by some of the leading companies in the market. The leading players are adopting various strategies in order to cater to demand coming from different industries; the key developmental strategy in the market is acquisitions.
Market Growth Factors
Workflow orchestration is being widely used in organizational digital transformation.
Workflow orchestration use is rising due to technological developments and quicker internet support. Business apps now have more possibilities thanks to the introduction of 5G. The use of AI orchestration solutions and services is accelerating across a number of business sectors, such as manufacturing, banking, financial services and insurance (BFSI), healthcare, retail, communications & IT, energy & utilities, and others. Moreover, the need for digitization has grown as a result of unexpected office, school, and company closures. The market for workflow orchestration is growing greatly as a result of these advances.
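To make the idea of workflow orchestration concrete, the sketch below shows a minimal, generic task-dependency runner in Python. It is purely illustrative: the task names and functions are invented, and it does not represent any particular vendor's orchestration product.

```python
from collections import deque

# A hypothetical workflow: each task names the tasks it depends on.
# In a real orchestration platform these would be API calls, model runs, etc.
workflow = {
    "extract_data":    {"depends_on": [], "action": lambda: print("extracting data")},
    "prepare_data":    {"depends_on": ["extract_data"], "action": lambda: print("cleaning data")},
    "train_model":     {"depends_on": ["prepare_data"], "action": lambda: print("training model")},
    "generate_report": {"depends_on": ["prepare_data", "train_model"],
                        "action": lambda: print("building dashboard")},
}

def run_workflow(tasks: dict) -> None:
    """Run tasks in dependency order using Kahn's topological sort."""
    indegree = {name: len(spec["depends_on"]) for name, spec in tasks.items()}
    dependents = {name: [] for name in tasks}
    for name, spec in tasks.items():
        for dep in spec["depends_on"]:
            dependents[dep].append(name)
    ready = deque(name for name, deg in indegree.items() if deg == 0)
    completed = 0
    while ready:
        name = ready.popleft()
        tasks[name]["action"]()          # execute the task
        completed += 1
        for nxt in dependents[name]:
            indegree[nxt] -= 1
            if indegree[nxt] == 0:
                ready.append(nxt)
    if completed != len(tasks):
        raise ValueError("workflow contains a dependency cycle")

if __name__ == "__main__":
    run_workflow(workflow)
```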
Evolution of responsible artificial intelligence
Through the incorporation of workflow management technologies and trend forecasting, artificial intelligence has revolutionized company management in the contemporary corporate environment. The market for artificial intelligence is mostly driven by this. The market is expected to rise as a result of increased investment in machine learning and artificial intelligence technologies. In 2022, when AI adoption will have surged, responsible AI will have a prominent position. Countries will need to carefully address the ethical implications of AI as they execute their national AI initiatives to ensure that automation results in benefits for all stakeholders.
Market Restraining Factor
Threat to data security
Every organization's goal depends on the security and privacy of its customers' and employees' personal information; therefore, there is obviously no room for compromise. These enterprises must make sure that their IT infrastructure and data are protected by reliable security and privacy solutions when they migrate to the cloud. However, using AI comes with a number of cybersecurity dangers, especially if one is not familiar with the most recent technical developments. The use of AI for data collecting has sometimes exposed government records to many data breaches. These factors might prevent the market for AI orchestration from expanding.
Component Outlook
Based on component, the AI orchestration market is divided into solution and services. With the largest revenue share in 2021, the solutions segment led the AI orchestration market. The market is expanding as a result of technological developments that make it possible for orchestration to alter several sectors, including financial services, manufacturing, healthcare, and many others. Additionally, as the amount of data grows, AI orchestration solutions are being used much more widely.
Deployment Mode Outlook
Based on deployment mode, the AI orchestration market is segmented into on-premise and cloud. The on-premise segment accounted for the largest revenue share in 2021. This may be ascribed to the many benefits that come with on-premise implementation, including the high degree of data protection and safety. Industry desire for on-premise deployment models is further fueled by the fact that on-premise models have higher data security and fewer data breaches than cloud-based deployment models.
Application Outlook
Based on application, the AI orchestration market is segmented into workflow orchestration, infrastructure orchestration, customer service orchestration, and others. In 2021, the customer service orchestration segment registered significant revenue share. Software for managing various processes and systems throughout on-premises and cloud-based settings is called service orchestration. Businesses always struggle with a number of concerns related to the IT infrastructure, such as integration and compatibility problems, ballooning IT budgets, and technology that is ever more sophisticated.
Organization Size Outlook
Based on organization size, the AI orchestration market is segmented into large enterprises and Small & Medium Enterprises (SMEs). The SME segment acquired a significant revenue share in 2021. By optimizing routine corporate activities like inventory, buying, manufacturing, and shipping, small and medium-sized enterprises are significantly increasing their revenue contributions. In order to keep up with modern trends and outperform their rivals, many small and medium-sized businesses are using orchestration. Over the course of the projection period, these elements are anticipated to fuel this segment's expansion.
Vertical Outlook
Based on vertical, the AI orchestration market is divided into IT & Telecom, manufacturing, healthcare, BFSI, consumer goods & retail, government & defense, energy & utility, and others. Healthcare accounted for a sizeable revenue share in the AI orchestration market in 2021. Orchestration enables the streamlining of patient processes and an overall increase in efficiency in a healthcare context. By identifying and resolving present inefficiencies, Care Orchestration assists healthcare professionals in streamlining their current care journey.
AI Orchestration Market Report Coverage
Market size value in 2021: USD 4.9 Billion
Market size forecast in 2028: USD 17.1 Billion
Base Year: 2021
Historical Period: 2018 to 2020
Forecast Period: 2022 to 2028
Revenue Growth Rate: CAGR of 20% from 2022 to 2028
Number of Pages: 352
Number of Tables: 574
Report coverage: Market Trends, Revenue Estimation and Forecast, Segmentation Analysis, Regional and Country Breakdown, Competitive Landscape, Market Share Analysis, Companies Strategic Developments, Company Profiling
Segments covered: Component, Application, Deployment Mode, Organization Size, Vertical, Region
Country scope: US, Canada, Mexico, Germany, UK, France, Russia, Spain, Italy, China, Japan, India, South Korea, Singapore, Malaysia, Brazil, Argentina, UAE, Saudi Arabia, South Africa, Nigeria
Growth Drivers: Workflow orchestration is being widely used in organizational digital transformation; Evolution of responsible artificial intelligence
Restraints: Threat to data security
Regional Outlook
Based on region, the AI orchestration market is categorized into North America, Europe, Asia Pacific, and LAMEA. The North American region dominated the AI orchestration market in 2021. The market in this area will see profitable potential as the use of AI orchestration expands gradually to fulfill rising expectations from modern organizations to improve their business processes and customer experience. Workflow orchestration market expansion in this region is being driven by the rapid adoption of technology in IT, process automation, networks and communications, and other areas. Many firms are undergoing a digital transformation as a result of faster internet access in this area.
Free Valuable Insights: Global AI Orchestration Market size to reach USD 17.1 Billion by 2028
KBV Cardinal Matrix - AI Orchestration Market Competition Analysis
The major strategy followed by market participants is acquisitions. Based on the analysis presented in the Cardinal matrix, Cisco Systems, Inc., SAP SE, Oracle Corporation, IBM Corporation, and BMC Software, Inc. are the forerunners in the AI Orchestration Market. Companies such as Wipro Limited, Fujitsu Limited, and Hewlett Packard Enterprise Company are some of the key innovators in the AI Orchestration Market.
The market research report covers the analysis of key stake holders of the market. Key companies profiled in the report include Oracle Corporation, Fujitsu Limited, SAP SE, General Electric Company, Tibco Software, Inc. (Vista Equity Partners), Cisco Systems, Inc., Hewlett-Packard Enterprise Company, Wipro Limited, IBM Corporation, and BMC Software, Inc. (KKR & Co., Inc.).
Recent Strategies Deployed in AI Orchestration Market
» Partnerships, Collaborations and Agreements:
Oct-2022: Oracle extended its multiyear partnership with Nvidia. The partnership involves supporting customers in accelerating the use of AI. Additionally, the partnership agreement brings Nvidia's computing stack ranging from systems to software to Oracle's Oracle Cloud Infrastructure (OCI).
Jun-2022: Fujitsu partnered with Hexagon; a public company focused on providing information technology solutions for industrial applications. The partnership aims at jointly working on societal challenges for Fujitsu's Trusted Society. Through cashing on digital twin technologies and solutions from both Hexagon and Fujitsu, the companies would together develop solutions that provide a deeper understanding to clients and support in reducing emissions, improving safety, etc.
Jun-2022: GE Healthcare teamed up with National Cancer Centre Singapore (NCCS), a Singapore-based cancer treatment center. The collaboration agreement aims at enhancing the possibility of more tailored cancer treatment options.
Mar-2022: Wipro entered into a partnership with Pandorum Technologies, an India-based biotechnology company. The partnership focuses on jointly developing technologies aimed at reducing time-to-market and maximizing outcomes during the R&D and trials of regenerative medicine, and on developing an AI model that learns from multi-dimensional data. The partnership integrates Wipro's AI technical know-how and Pandorum's competence in regenerative medicine.
Nov-2021: GE Healthcare collaborated with Optellum, a leader in AI decision support for lung cancer. The collaboration involves combining GE Healthcare's Edison Platform with AI-enabled solutions provided by Optellum. The combination can make clinician workflows smooth and efficient, and further facilitates GE's goal of making precision healthcare, and the right disease treatment at the right point in time, as broadly reachable as possible.
Nov-2021: Cisco partnered with IBM, a US-based company primarily selling, IT services, software, hardware, and consulting services. The collaboration integrates both the companies' products enabling management and orchestration of virtual 5G networks, optical and IP networks in a consolidated framework, solution-driven from any cloud.
» Product Launches and Product Expansions:
Sep-2022: Fujitsu Network Communications launched Virtuora Service Management and Orchestration (SMO). The new offering offers control and management across open RAN networks, lifecycle service orchestration, and supports the integration of cloud infrastructure, and is equipped with advanced analytics, AI & ML capabilities.
Jul-2022: TIBCO launched TIBCO ModelOps. The new product allows organizations, and businesses to deploy AI models anywhere, faster, at scale, and with safety. This new addition to the company's product offerings aids customers in deployment, monitoring, and simplifying. The ModelOps further allows managerial teams to work from any cloud services and select the algorithm they want.
Nov-2021: GE Healthcare introduced new AI and digital technologies aimed at solving the healthcare industry's problems and optimizing healthcare delivery.
Oct-2021: BMC Software added new features to its existing Helix operations management and Control-M workflow orchestration platforms. The new features enable customers to apply workflow orchestration to various cloud development environments, including Google Cloud's Dataflow and Functions, Amazon Web Services' Glue, and many more.
Jul-2020: Hewlett Packard Enterprise (HPE) unveiled HPE Ezmeral, a software portfolio developed to advance digital transformation. The new software allows customers to move away from lock-in and expensive legacy licensing models, helping clients accelerate innovation and bring down costs while ensuring enterprise-grade security.
» Acquisitions and Mergers:
Dec-2022: Wipro Infrastructure Engineering, part of Wipro, took over Linecraft.ai, an India-based AI company. The acquisition expands Wipro's digital capabilities and further allows it to provide turnkey automation solutions equipped with bolt-on digital layers that deliver deep insights and analytics to clients.
Dec-2021: Oracle took over Federos, a US-based developer of AI-optimized assurance, analytics, and automation solutions. The acquisition provides Oracle's customers with improved operations automation and powerful orchestration capabilities.
Oct-2021: BMC took over StreamWeaver, a US-based software company. This acquisition reflects BMC's focus on investing in disruptive technologies and innovation.
Jun-2021: IBM acquired Turbonomic, a US-based provider of Application Resource Management (ARM) and Network Performance Management (NPM). The acquisition complements IBM's acquisition of Instana and further supports IBM in expanding its customer reach and better serving its clients.
Jun-2021: Hewlett Packard Enterprise took over Determined AI, a startup focused on developing an open-source platform for designing machine learning models. The acquisition involves integrating Determined AI's open-source AI training platform with HPE's high-performance computing (HPC) offerings; the integration allows ML engineers to build models faster and shortens time-to-production for AI systems.
Sep-2020: Cisco took over BabbleLabs, a US-based developer of AI-powered speech technology. The acquisition enables Cisco to provide a superior audio experience during Webex meetings by leveraging BabbleLabs' noise removal capabilities.
Jul-2020: IBM took over WDG Automation, a Brazil-based developer of robotic process automation software. The acquisition enhances IBM's AI-based automation capabilities and further broadens IBM's ability to deliver automation services across its client base.
» Geographical Expansions:
Apr-2022: Fujitsu expanded its global footprint by establishing a new research center, Fujitsu Research of India Private Limited. The center focuses on AI and ML technologies and quantum software, and aims to advance software technologies in key areas.
Scope of the Study
Market Segments Covered in the Report:
By Component
Solution
Services
By Application
Workflow Orchestration
Infrastructure Orchestration
Manufacturing Orchestration
Customer Service Orchestration
Others
By Deployment Mode
On-premise
Cloud
By Organization Size
Large Enterprises
Small & Medium Enterprises (SMEs)
By Vertical
Manufacturing
BFSI
IT & Telecom
Government & Defense
Consumer Goods & Retail
Energy & Utilities
Healthcare & Life Sciences
Others
By Geography
North America: US, Canada, Mexico, Rest of North America
Europe: Germany, UK, France, Russia, Spain, Italy, Rest of Europe
Asia Pacific: China, Japan, India, South Korea, Singapore, Malaysia, Rest of Asia Pacific
LAMEA: Brazil, Argentina, UAE, Saudi Arabia, South Africa, Nigeria, Rest of LAMEA
Key Market Players
List of Companies Profiled in the Report:
Oracle Corporation
Fujitsu Limited
SAP SE
General Electric Company
Tibco Software, Inc. (Vista Equity Partners)
Cisco Systems, Inc.
Hewlett-Packard Enterprise Company
Wipro Limited
IBM Corporation
BMC Software, Inc. (KKR & Co., Inc.)
| 2023-02-21T00:00:00 |
https://www.kbvresearch.com/ai-orchestration-market/
|
[
{
"date": "2023/02/21",
"position": 72,
"query": "AI labor market trends"
}
] |
|
D.C. Reintroduces Bill to Prohibit AI Discrimination in ...
|
D.C. Reintroduces Bill to Prohibit AI Discrimination in Employment Decisions
|
https://potomaclegalgroup.com
|
[] |
The bill includes regulations for employers and prohibits algorithmic discrimination in other areas, such as consumer credit decisions, education and housing.
|
The District of Columbia has reintroduced legislation to prevent discrimination by algorithms in employment decisions. The Stop Discrimination by Algorithms Act of 2023 would prohibit algorithmic discrimination by employers and require service providers to ensure that their AI tools are compliant with the law.
The bill includes regulations for employers and prohibits algorithmic discrimination in other areas, such as consumer credit decisions, education and housing.
The new bill would force employers to contractually require that their AI service providers comply with the proposed law.
Employers would be required to conduct an annual discrimination audit, which must be conducted by a third party and has a reporting requirement. Employers would also have a poster requirement informing employees about the law, as well as a required pop-up notice on certain systems.
The council included a wide definition of protected data that an algorithm might use. The bill addresses the use of IP addresses, equipment identification or MAC addresses, history of consumer purchases, geolocation data, education records, certain automobile records and more.
While employees already have protections from workplace discrimination, the purpose of this bill is to provide them new protections in relation to the data that algorithmic-based systems would use in making decisions to hire, promote or terminate an employee.
New York City previously approved legislation to prevent algorithmic discrimination. The District would need to look to New York and observe how they address their audit and reporting issue before moving forward with this bill. The city would also need to address how to verify that AI service providers are not using protected information in their algorithms. Typically, employers do not develop or maintain AI systems, and they rely on the assurances from AI providers that their systems will perform as promised and maintain bias-free results.
While employers may use AI for hiring, promotions and terminations, the employer is ultimately responsible and liable for discriminatory conduct regardless of who is making employment decisions, whether it is a human, algorithm or service provider.
If you believe that you have been the victim of employment discrimination in Washington DC, Maryland or Virginia, then contact Potomac Legal Group to schedule a consultation. We have extensive experience in representing employees who have experienced all types of discrimination in the workplace.
Contact Us Today to Schedule a Consultation
| 2023-02-21T00:00:00 |
2023/02/21
|
https://potomaclegalgroup.com/d-c-reintroduces-bill-to-prohibit-ai-discrimination-in-employment-decisions/
|
[
{
"date": "2023/02/21",
"position": 2,
"query": "AI regulation employment"
},
{
"date": "2023/02/21",
"position": 61,
"query": "artificial intelligence layoffs"
}
] |
ChatGPT – What Employers Should Be Worried About Now
|
ChatGPT – What Employers Should Be Worried About Now
|
https://www.laborandemploymentlawcounsel.com
|
[
"Seyfarth Shaw Llp",
"February",
".Wp-Block-Co-Authors-Plus-Coauthors.Is-Layout-Flow",
"Class",
"Wp-Block-Co-Authors-Plus",
"Display Inline",
".Wp-Block-Co-Authors-Plus-Avatar",
"Where Img",
"Height Auto Max-Width",
"Vertical-Align Bottom .Wp-Block-Co-Authors-Plus-Coauthors.Is-Layout-Flow .Wp-Block-Co-Authors-Plus-Avatar"
] |
Because of the risks of bias in AI, employers should include in their policies a general prohibition on the use of AI in connection with employment decisions ...
|
By Karla Grossenbacher
Seyfarth Synopsis: Since ChatGPT became available to the public at large in November 2022, employers have been wondering, and asking their employment lawyers, “What kind of policies should we be putting in place around the use of ChatGPT in the workplace?” Although at this stage it is difficult to imagine all of the different ways ChatGPT, and its subsequent iterations, could be used by employees in the workplace, it is important to consider some of the more obvious usage cases and how employers might choose to address them in workplace policies.
What is ChatGPT?
ChatGPT is a form of artificial intelligence (AI) — an AI language model that is trained to interact in a conversational way. At its most basic level, AI is a computer system able to perform tasks that normally require human intelligence. In order to achieve this, AI needs to be trained. First, massive data sets are fed into a computer algorithm. Then the trained model is evaluated in order to determine how well it performs in making predictions when confronted with previously unseen data. For ChatGPT, it is predicting the next word in a given context to provide that conversational tone for which it has become known. Lastly, the AI goes through a testing phase to find out if the model performs well on large amounts of new data it has not seen before. This is the phase in which ChatGPT finds itself.
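To make the "predict the next word" idea more concrete, here is a minimal, purely illustrative Python sketch that estimates next-word probabilities from simple word-pair counts. The tiny corpus and variable names are invented for the example; production systems such as ChatGPT use vastly larger neural models rather than raw counts, so this is only a toy analogy.

from collections import Counter, defaultdict

# Tiny invented corpus, purely for illustration.
corpus = "the employee used the chatbot to draft the email".split()

# Count how often each word follows each context word (a simple bigram model).
following = defaultdict(Counter)
for context, nxt in zip(corpus, corpus[1:]):
    following[context][nxt] += 1

def next_word_probabilities(context_word):
    """Estimate P(next word | context word) from the counts above."""
    counts = following[context_word]
    total = sum(counts.values())
    return {word: count / total for word, count in counts.items()}

print(next_word_probabilities("the"))
# In this toy corpus, "employee", "chatbot" and "email" each get probability 1/3.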
Legal Risks for Employers
Given how AI is trained and learns, significant issues can arise for employers when employees use ChatGPT to perform their job duties. One big concern when employees obtain information from a source like ChatGPT in connection with their work is accuracy and bias.
ChatGPT’s ability to supply information as an AI language model is only as good as the information from which it has learned and on which it has been trained. Although ChatGPT has been trained on vast swaths of information from the Internet, by its very nature as AI, there are and will continue to be some gaps in ChatGPT’s knowledge base. The most obvious example of such a gap is that the current version of ChatGPT was only trained on data sets available through 2021. On top of that, one needs to keep in mind that not everything that appears on the Internet is true and so there will be some built-in accuracy problems with information provided by ChatGPT given the data on which it was trained. Thus, with respect to legal risk for employers, if employees are relying on ChatGPT for information in connection with work and not independently fact-checking that information for accuracy, obvious problems can arise depending on how the employee uses the information and to whom the information is provided. Thus, it would make sense for employers to have policies that put guardrails on when and to what extent it is permissible for employees to obtain information from ChatGPT in connection with their work.
There is also the question of inherent bias in AI. The EEOC is focused on this issue as it relates to the employment discrimination laws it enforces, and state and local legislators are proposing, and in some jurisdictions have already passed, legislation that places restrictions on the use of AI by employers. As described above, the information AI provides is necessarily dependent on the information upon which it is trained (and those who make decisions about what information the AI receives). This bias could manifest itself in the types of information ChatGPT offers in response to questions presented in "conversation" with it. Also, if ChatGPT is consulted regarding decision-making in employment, this could lead to claims of discrimination, as well as compliance issues based on state and local laws that require notice of the use of AI in certain employment decisions and/or audits of AI before using it in certain employment contexts. Because of the risks of bias in AI, employers should include in their policies a general prohibition on the use of AI in connection with employment decisions absent approval from the legal department.
The other big concern for employers when thinking about how employees might use ChatGPT in connection with work is confidentiality and data privacy. Employers are naturally concerned that employees will share proprietary, confidential and/or trade secret information when having "conversations" with ChatGPT. Although ChatGPT represents that it does not retain information provided in conversations, it does "learn" from every conversation. And of course, users are entering information into the conversations with ChatGPT over the Internet, and there is no guarantee of the security of such communications. Thus, while the details of how exactly confidential employer information could be impacted if revealed by an employee to ChatGPT remain unclear, prudent employers will include in employee confidentiality agreements and policies prohibitions on employees referring to or entering confidential, proprietary or trade secret information into AI chatbots or language models, such as ChatGPT. A good argument could be made that it is not consistent with treating information as a "trade secret" if it is given to a chatbot on the Internet. On the flip side, given how ChatGPT was trained on wide swaths of information from the Internet, it is conceivable that employees could receive and use information from ChatGPT that is trademarked, copyrighted and/or the intellectual property of another person or entity, creating legal risk for the employer.
Other Employer Concerns
In addition to these legal concerns, employers should also consider to what extent they want to allow employees to use ChatGPT in connection with their jobs. Employers are at important crossroads in terms of determining whether and to what extent to embrace or restrict the usage of ChatGPT in their workplaces. Employers will need to weigh the efficiency and economy that could be achieved by employees using ChatGPT to perform such tasks as writing routine letters and emails, generating simple reports, and creating presentations, for example, against the potential loss in developmental opportunities for employees in performing such tasks themselves. ChatGPT is not going away, and in fact, a new and improved version should be out within the year.
Employers will ultimately need to address the issue of its use in their workplaces, especially since the next iteration is going to be even better. For all of the risks ChatGPT can present for employers, it can also be leveraged to their advantage. The discussion has just started. Employers – like ChatGPT – will likely be learning and beta testing on this for a bit.
| 2023-02-21T00:00:00 |
2023/02/21
|
https://www.laborandemploymentlawcounsel.com/2023/02/chatgpt-what-employers-should-be-worried-about-now/
|
[
{
"date": "2023/02/21",
"position": 3,
"query": "AI regulation employment"
},
{
"date": "2023/02/21",
"position": 7,
"query": "ChatGPT employment impact"
},
{
"date": "2023/02/21",
"position": 15,
"query": "AI labor union"
}
] |
The nonprofits accelerating Sam Altman's AI vision
|
The nonprofits accelerating Sam Altman’s AI vision
|
https://techcrunch.com
|
[
"Mark Harris",
"Kirsten Korosec",
"--C-Author-Card-Image-Size Align-Items Center Display Flex Gap Var",
"Media",
"Min-Width",
"--C-Author-Card-Image-Size",
"Img.Wp-Block-Tc_Author-Card__Image Height Var --C-Author-Card-Image-Size",
"Width Var --C-Author-Card-Image-Size",
"Img.Wp-Block-Tc_Author-Card__Image Border-Radius",
"Max-Width None -O-Object-Fit Cover Object-Fit Cover Display Block"
] |
Altman controls at least two nonprofits, OpenAI and OpenResearch, and has provided funding to a third, not previously reported, known as UBI Charitable.
|
Elon Musk tweeted Saturday a ChatGPT conversation that speculated about the 2019 transition of its creator, OpenAI, from a nonprofit to a for-profit organization. The AI chatbot concluded that, if the for-profit business had used the nonprofit’s resources for the change, it would have been “highly unethical and illegal.”
It appears that Musk and ChatGPT didn’t have all the facts. Tax filings seen by TechCrunch indicate the original OpenAI nonprofit retained control over all of its financial assets, totaling tens of millions of dollars, meaning none of its money was used to spin out the organization’s commercial enterprises.
The interesting part is where that money ended up: financing Universal Basic Income pilots aiming to fix the very problems OpenAI’s technologies seem to be creating.
And that’s just one thread in a web of commercial investments and nonprofits that all tie back to Sam Altman, best known as a co-founder of startup accelerator Y Combinator and OpenAI — the nonprofit he started with Musk.
His investments span a dozen industries, from nuclear fusion and supersonic planes to molecular diagnostics and crypto, but key among his wider interests are a collection of nonprofits, run by Altman and his close friends.
The story of this family of nonprofits illustrates how a small group of like-minded entrepreneurs can leverage their charitable donations to not only support their personal causes, but to further commercial interests and possibly even accelerate the transformation of society.
A web of nonprofits
It’s far from unusual for tech entrepreneurs to have a charitable foundation or two to distribute their wealth exactly how they wish. But Altman’s commercial and charitable dealings are more intertwined than most.
Altman controls at least two nonprofits, OpenAI and OpenResearch, and has provided funding to a third, not previously reported, known as UBI Charitable.
UBI Charitable’s mission is to research and deploy Universal Basic Income (UBI) programs — the no-strings-attached payouts scheme that futurists like Altman and Musk believe will be necessary when advances in robotics and AI, similar to those being developed by the two technologists, render many human occupations unprofitable. It is already funding at least two UBI schemes.
Understanding the connections and the flows of money between Altman’s businesses and charities means going back to 2015.
That was the year that Altman co-founded OpenAI with Musk, Reid Hoffman and others, as a 501c3 organization to safely and transparently pursue AI research. It was also the year he spun out a separate nonprofit research lab from Y Combinator that would ultimately be called OpenResearch. This research lab was launched to tackle work that required a very long time horizon, sought to answer open-ended questions or develop technology that Altman thought should not be owned by any one company.
“We’re not doing this with the goal of helping YC’s startups succeed or adding to our bottom line,” wrote Altman on Y Combinator’s blog at the time. “At the risk of sounding cliché, this is for the benefit of the world.”
He claimed in the blog that he would start off by personally donating $10 million to OpenResearch and raise more money later.
A filing with the IRS shows that the lab in fact received only $1 million in donations in 2016. Funding for OpenResearch initially lagged, but would eventually top $10 million by 2019. The source of that money was not specified. OpenResearch has received a total of nearly $24.5 million in funding since it was established, according to tax filings. Altman also provided a $5.2 million loan to the organization in 2016, and increased that amount year by year. Altman had loaned OpenResearch a total of $14 million by the end of 2021, according to the latest records (although he has forgiven some of the debt).
The 2016 filing also claimed that OpenResearch had already made “significant progress” in such diverse areas as programming languages, simulation systems, physical/virtual user interfaces, computer-mediated student-teacher interaction and virtual reality.
OpenResearch kept a low profile in its early years. That changed with the COVID-19 pandemic.
In March 2020, as the virus was shutting down America, Altman tweeted a call for help with clinical trials of potential therapies, which connected him to computational biologist Benjamine Liu, a founder of TrialSpark.
OpenResearch provided TrialSpark with a $1 million grant to help set up Project Covalence, a platform to support COVID-19 trials in community settings or at patients’ homes. The project’s website stated: “The world doesn’t have time to waste. By coordinating efforts, sharing resources, and streamlining logistics, we can halt the spread of COVID-19 together.”
At least one trial did take place, not for an actual therapy, but for a remote diagnostic test for COVID antibodies. The trial in the summer of 2020 was a success, gathering high-quality samples and positive feedback from participants.
And yet, by late summer 2021, Project Covalence’s website disappeared. Not long after, Altman led a $156 million Series C investment in the company. TrialSpark’s valuation would pop to $1 billion by the time the round closed.
“When donors give, and then benefit from their donations, arguably they are not promoting the public good, but rather their own good,” says Patricia Illingworth, a philosophy professor at Northeastern University and author of “Giving Now,” a book about the ethics of philanthropy. “I am reminded of the practice of parents donating to the schools their children attend. The donation has an element of self-dealing to it.”
TrialSpark provided the following statement: “We wound down Project Covalence as vaccines and therapies were authorized and approved. We had no concerns about OpenResearch’s contribution to Project Covalence and Sam’s investment in TrialSpark because they are two separate things.”
Altman could not be reached for comment, but a spokesperson for OpenResearch supplied a statement along similar lines: “Project Covalence was part of a number of efforts during the pandemic, a project that the OpenResearch board felt would be beneficial to the public at that time. It is important to note that Project Covalence is different from TrialSpark.”
A press release issued by TrialSpark itself in July 2020 described Project Covalence as a platform of TrialSpark.
AI versus jobs
By 2020, OpenResearch had largely abandoned its work on user interfaces and virtual reality. Aside from its one-off grant to TrialSpark, OpenResearch’s attention and funds would now be dedicated to UBI research.
In a lengthy 2021 essay, Altman predicted that AI technologies might be able to pay every American $13,500 a year by 2031, and “that dividend could be much higher if AI accelerates growth.” Last year, he tweeted in favor of a $25 minimum wage: “I think it’s good to force the issue on automating jobs we aren’t willing to pay that much for anyway. Long term, I still think this is all the wrong framing and we will probably need something like UBI.”
And he was ready to put his nonprofit’s money where his mouth was.
Altman drew funds in 2021 from OpenAI and made a $75,000 grant to OpenResearch to work on UBI. That work involves designing and evaluating UBI programs, and advising other groups.
It makes sense that Altman turned to OpenAI to fund other projects. After all, OpenAI has had no difficulty in attracting donors. By 2018, it had raked in nearly $100 million to fund research projects into AI gaming, training a dexterous robot hand, organizing machine conferences and building out its AI safety team. But it had yet to make any external grants. The same year, Musk surrendered his board seat, citing possible conflicts of interests with Tesla’s AI efforts.
In 2019, most of OpenAI’s 125 employees transferred over to a new for-profit business, confusingly also called OpenAI, that would seek to commercialize the technologies it had developed, including the GPT large language models and text-to-image generators. Microsoft invested $1 billion, alongside other investors and VCs.
But the original nonprofit still had $30 million in the bank. With its AI technologies spun off, it now started to make grants, starting with modest contributions to organizations such as the ACLU, Black Girls Code and Campaign Zero — a nonprofit seeking to end police violence.
Then in 2020, the original OpenAI gave away $10 million, nearly one-third of its assets, in a previously unreported donation to a nonprofit called UBI Charitable, launched that same year. UBI Charitable does not have a website, or any salaried employees or volunteers, and its address is identical to that of OpenResearch.
A tax filing with the state of California reveals that UBI Charitable’s “primary and only currently planned activity will be grant-making to organizations that run universal basic income programs, and other policies and programs aimed at broadly distributing the benefits of technological advancement.”
UBI Charitable’s president and treasurer is Altman’s long-time friend and ex-Mountain View mayor, Chris Clark. Clark is also director of OpenResearch, as well as head of strategy at OpenAI itself. UBI Charitable’s only other income in 2020, a $15 million donation, came via a donor-advised fund that protects its originator’s identity. It received another $5.3 million in 2021.
UBI Charitable started spending almost immediately. Since 2020, it has given $8.3 million to CitySquare, an anti-poverty charity in Dallas, and another $8.2 million to Heartland Alliance, a similar organization in Chicago that is already running a UBI pilot, called Chicago Resilient Communities. At the end of 2021, the most recent year for which tax records are available, UBI Charitable was sitting on assets of nearly $15 million.
Fixing the problem it created
The ethics of both funding AI, a technology that could lead to job losses, and providing for people whose livelihoods it threatens, are undeniably complex.
AI technology itself can see two sides to Altman’s actions. When TechCrunch asked ChatGPT, it noted: “If the entrepreneur’s nonprofit is creating a tool that could lead to job loss, he or she may be seen as having a responsibility to mitigate the harm that could result. By funding another nonprofit to provide support for those who may lose their jobs, the entrepreneur may be seen as fulfilling this responsibility.”
However, the AI system went on: “If the entrepreneur’s actions are motivated by a desire to protect his or her financial interests, rather than a genuine concern for those who may be impacted by the tool, this could be seen as a conflict of interest and potentially unethical.”
Of course, no one should rely on legal or ethical advice from a chatbot, and as Illingworth notes: “We want billionaires to give away their money as fast as they can.”
Whether Altman is trying to stay ahead of a coming technological tsunami, covering his ass, or some combination of the two, the net result is still millions of dollars being funneled to people in financial need. What remains to be seen is whether Altman’s UBI charity keeps pace with the changes AI seems likely to bring, and the profits ChatGPT seems likely to generate, in the years to come.
| 2023-02-21T00:00:00 |
2023/02/21
|
https://techcrunch.com/2023/02/21/the-non-profits-accelerating-sam-altmans-ai-vision/
|
[
{
"date": "2023/02/21",
"position": 4,
"query": "universal basic income AI"
}
] |
Bernie Sanders and Bill Gates propose a robot tax
|
Bernie Sanders sides with Bill Gates and says he wants to tax the robot that takes your job
|
https://fortune.com
|
[
"Tristan Bove"
] |
Companies that choose to use robots instead of humans should be taxed, Sanders said, as automation threatens labor markets.
|
Automation that could eliminate countless jobs may be the next big political challenge in the U.S., and politicians are already starting to discuss how to deal with a potentially inevitable unemployment surge driven by robots and artificial intelligence.
If automation and A.I. really could wipe out almost half of jobs in the U.S., the government is still unprepared for it. Recent A.I. products like ChatGPT could herald a future where even white-collar workers are pushed aside by artificial intelligence. And a 2020 study found that adding a single robot to expedite manufacturing jobs destroyed 3.3 jobs nationwide.
How automation and technology will affect labor markets is top of mind for politicians like Vermont Sen. Bernie Sanders, who released a new book Tuesday, It’s OK to Be Angry About Capitalism.
In it, Sanders argues that current tax paradigms are not equipped to handle disruptive technologies like automation and A.I., and higher taxes targeted at companies that choose to use robots instead of humans could soften the blow, an argument that billionaire and tech philanthropist Bill Gates has been making for years.
“If workers are going to be replaced by robots, as will be the case in many industries, we’re going to need to adapt tax and regulatory policies to assure that the change does not simply become an excuse for race-to-the-bottom profiteering by multinational corporations,” Sanders writes in his book, according to an excerpt reported by Insider.
Gates notably called for a robot tax during a 2017 interview with Quartz, in which he said companies taxed for using robotics could still come out ahead, given how much money and time automation could save businesses in the long run. Like Sanders, Gates said governments and companies need to start having these conversations now, before the pace of technological advancement threatens to overtake both.
Government and business tackling technology’s impact on jobs
It isn’t the first time Sanders has tried to address how automation and A.I. could affect the labor market, and likely won’t be the last. The two-time presidential candidate frequently discussed how automation could only make rich people richer most recently during the run-up to the 2020 presidential election.
“[I] will tell corporate America that artificial intelligence and robotics are not going to be used just to throw workers out on the street,” his campaign told Vox in 2019 when asked how Sanders would deal with automation as president. Sanders’ campaign added that he would cut tax breaks for companies that replaced employees with robots and mandate that large companies let more human workers sit on boards and weigh in on automation decisions.
Automation’s threat to jobs was a frequent talking point during the Democratic primaries ahead of the 2020 presidential election, with entrepreneur Andrew Yang notably proposing a universal basic income to shield American workers from the disruptive effects of automation. Sanders disagreed with Yang at the time, saying people still “want to work” despite the difficulties posed by new technologies.
The issue has taken on even greater prominence since the pandemic, when the use of robotics in manufacturing and the service sector skyrocketed as companies worked around stay-at-home orders and a tight labor market, and those roles are unlikely to go back to humans. Company orders for robots broke records both in 2021 and 2022, with robotics industry experts predicting continued growth especially among car companies looking to pivot to electric vehicles.
President Joe Biden’s 2021 executive order, which promises to invest $300 billion in new technologies and create 5 million new manufacturing jobs, doesn’t close the door to automation, arguing that it will create more jobs in the long term. The Biden administration has said that projected growth in more technical technology jobs, which is the fastest-growing job sector in the U.S., will offset the loss of factory jobs.
Biden has presented his plan as a revolutionary moment for the U.S. economy as it keeps pace with new technologies, and while Sanders is on board with the economy’s massive changes, he has continued to sound the alarm bell that large numbers of American workers risk being left behind.
Sanders called advances in automation and artificial intelligence a “transformational moment” for the U.S. in an interview with CBS aired Sunday, but added that he wants “working people to be involved” in the new economy. Sanders said that while he is not “anti-technology,” he wants to see policy enshrined that ensures workers are set to benefit from technological leaps including automation and A.I.
“I don’t want to see the people on top simply be the only beneficiaries of this revolution in technology,” he said. “There is a revolution taking place now with artificial intelligence and robotics. Okay? Millions of workers are going to lose their jobs.”
During his interview, Sanders said Gates’ proposal of a robot tax idea was “one way to do it” when it comes to mitigating technology’s negative impact on jobs, but reiterated that the scope of policy responses should go beyond stopgap solutions and put the well-being of workers front and center.
“It’s not just taxing the robots. It’s this whole question of an economic transformation. Are working-class people going to benefit from that or just the billionaire class?” he said.
| 2023-02-21T00:00:00 |
2023/02/21
|
https://fortune.com/2023/02/21/bernie-sanders-bill-gates-robot-tax-automation-job-threat/
|
[
{
"date": "2023/02/21",
"position": 27,
"query": "universal basic income AI"
}
] |
ChatGPT Is Here to Stay. What Do We Do With It? - Duke Today
|
ChatGPT Is Here to Stay. What Do We Do With It?
|
https://today.duke.edu
|
[] |
“For now, it will just augment most professions. However, looking far into the future, it could mean we'll need universal basic income, because a lot of jobs ...
|
Solutions aren't immediate, but panelists at the “Big Problems in Big Tech” forum said the concerns need prompt attention from scholars, society and policymakers. Photo by HuthPhoto
ChatGPT Is Here to Stay. What Do We Do With It?
Published February 21, 2023. By Geoffrey Mock, University Communications.
If you read any of the many stories written just last week about ChatGPT and generative artificial intelligence (AI) tools, you likely noticed a lot of both hype and concerns: ChatGPT will help students to better express their ideas. ChatGPT will lead young people down dark rabbit holes, end take-home exams and lead to layoffs. On Friday, a multidisciplinary panel of scholars and industry professionals broke down where the hype is overstated and the dangers overblown, but also where the benefits are possible and the concerns are real. During the opening session of the 2023 Provost's Forum on "Big Problems in Big Tech," panelists engaged in a wide-ranging discussion that covered labor force changes, the definition of creativity, cheating on essays and even GPT-written erotic Batman fan fiction. But while there was agreement that generative AI will bring benefits, the disagreements on the panel about how to regulate and minimize the negatives underscored the complexity of the issue and the need to ramp up study and discussion of the issues before the technology is more widely adopted. "This panel comes out of the provost's office because it's the provost's role to bring faculty from different disciplines together to tackle the problem, bringing their different expertise to the discussion," said David Hoffman, Steed Family Professor of the Practice of Cybersecurity Policy at the Sanford School of Public Policy, the moderator and lead organizer of the panel. "It's going to take all of that expertise to address the issue." Generative AI tools have been around for some time, but it was only with the recent release of ChatGPT by a company called OpenAI that they started getting wider use. Matt Kenney, a machine learning engineer for Alethea in Durham, explained that in creating ChatGPT, OpenAI successfully combined two different types of computer actions.
"The goal of the model is to predict the next word in the sentence," Kenney said. To do this, the GPT part takes a massive amount of data and trains the model to understand the relationships among the words. This then helps the model to decide what is the probability of the next word. Kenney said Chat is the second part: OpenAI took the GPT architecture and trained it to engage in human dialogue. The model uses reinforcement learning to reward the model for selecting the most appropriate word and to penalize it for a mistake. This means when you read stories about creepy AI conversations with journalists, you should remember that the model learns from humans, said Duke biostatistics professor and machine learning expert David Page. "I read one such story this week, and what stood out to me is how hard the journalist had to push to get it to take it down a scary path," Page said. "If you present it with problematic matter then it will continue on that path.
“It seems to lack a sense of self and is easily led. The scarier part is that someone who is not trying to use it for a bad purpose could inadvertently lead it to go to the dark part of the web. ChatGPT has strong potential to be ‘rabbit hole 2.0.’”
For similar reasons, generative AI can be easily programmed to spread disinformation, said Casey Fiesler, a technology ethicist at the University of Colorado-Boulder. She gave a scenario where you present ChatGPT a description of a family member with extreme political beliefs and ask it to build an argument around those beliefs.
Following the panel, audience members shared their own thoughts on generative AI issues during lunch. Photo by HuthPhoto
“In that you can create a perfect disinformation campaign for subreddit, and you can do that very quickly,” Fiesler said.
For educators, ChatGPT presents new challenges, but panelists said current discussion focuses too much on policing its use and punishing cheating. Instead, generative AI can be used to teach students how to make critical judgments about the content before them, said Aarthi Vadde, Duke associate professor of English.
“One of the tasks generative AI presents for educators is thinking what is necessary in AI literacy,” said Vadde. “How do you evaluate text in front of you and the sources you find online? This means studying AI text as an object. In a class on Jane Austen, you could have students study its writing style mimicking Austen and compare it with the real thing. This can lead to a better understanding of relations between words and meaning and creativity. And it can lead to interesting conversations on what creativity is about.”
Vadde said she is less interested in worrying about policing students who pass off ChatGPT writing as their original work because right now the essays it produces aren’t very good. “What it produces is good writing for an AI, not for a human being. If our standard for good writing in the future is what an AI produces, that’s not a standard you want to see.”
Likewise, Fiesler said concerns about the death of take-home tests is overblown. “I’ve had teachers tell me they won’t do take-home tests anymore,” she said. “That’s the wrong response. We don’t want witchhunts for AI generated writing. What we need to do is work with the students so they know when it’s appropriate to use ChatGPT and when it’s not.
“There are ways to use GPT tools to integrate it into the learning process. We’ve done this with other tools previously. Should you stop students from using spell check? During a spelling test, yes, but if they’re writing an essay, why would you cut them off from using that technology? The question for educators is to decide when the work is a spelling test and when it is not, when you can use GPT and when you can’t.”
There was disagreement on the panel about the potential impact on the work force. Most said that generative AI will lower the cost of information, which will disrupt many industries, but not lead to replacement of people in most fields that depend on information, from coding and writing to law.
“Some professions will shrink but there will be adaptability,” Kenney said. “I’m not concerned, for example, about programmers. Models just aren’t there yet in their quality.”
Fiesler said the ability of generative AI to produce information quickly and to write standard content to a template could change how many professions do their jobs. But as the technology improves, disruptions will increase.
“We won’t have robot doctors, but we will have doctors with supercomputers in pockets,” she said. “For now, it will just augment most professions. However, looking far into the future, it could mean we’ll need universal basic income, because a lot of jobs will need less people. We may have to rethink the entire economy.”
Page said the disruptions may come sooner. “In journalism, for example, our standard for a news-written story has been lowered over time. I think it will impact people now.”
In introducing the session, interim Provost Jennifer Francis said the forum brings the Duke community together to learn from each other and explore solutions on critical issues affecting our personal lives and wider society.
“These questions are incredibly timely and relevant to questions nearly all of us in higher education—and in society more broadly—are thinking about and wrestling with right now,” Francis said.
| 2023-02-21T00:00:00 |
https://today.duke.edu/2023/02/chatgpt-here-stay-what-do-we-do-it
|
[
{
"date": "2023/02/21",
"position": 47,
"query": "universal basic income AI"
}
] |
|
Goal 1: No poverty
|
United Nations Development Programme
|
https://www.undp.org
|
[] |
Since 2000, there has been enormous progress in achieving the target of universal primary education. ... Hate speech in the age of AI. Read more. View More. 11 ...
|
The Sustainable Development Goals (SDGs), also known as the Global Goals, were adopted by the United Nations in 2015 as a universal call to action to end poverty, protect the planet, and ensure that by 2030 all people enjoy peace and prosperity.
The 17 SDGs are integrated—they recognize that action in one area will affect outcomes in others, and that development must balance social, economic and environmental sustainability.
Countries have committed to prioritize progress for those who are furthest behind. The SDGs are designed to end poverty, hunger, AIDS, and discrimination against women and girls.
The creativity, knowhow, technology and financial resources from all of society is necessary to achieve the SDGs in every context.
| 2023-02-21T00:00:00 |
https://www.undp.org/sustainable-development-goals/no-poverty
|
[
{
"date": "2023/02/21",
"position": 70,
"query": "universal basic income AI"
}
] |
|
Why some companies are using artificial intelligence to ...
|
Why some companies are using artificial intelligence to decide who gets laid off
|
https://www.khou.com
|
[] |
More companies are using artificial intelligence to decide what employees to fire in addition to who to hire and interview in 2023.
|
HOUSTON — Getting a pink slip is never a good feeling, and as big tech companies start making layoffs in the thousands, some workers are speculating that algorithms are making the call.
As the Washington Post reports the idea is not outlandish. That's because human resources managers have been using machine learning software for years to help make decisions.
The difference is artificial intelligence-driven technologies have for the most part been used to hire and promote workers up until this point. According to Forbes, the software screens resumes and looks for qualifications that match the job opening. It can analyze millions of data points, helping recommend who gets an interview, hired and promoted.
It's not a giant leap to believe that the same technology is being used to determine who should get laid off. In fact, a survey of 300 human resources leaders found 98% planned to use the technology to make layoff decisions this year.
| 2023-02-21T00:00:00 |
2023/02/21
|
https://www.khou.com/article/tech/ai-assisted-layoffs/285-d83bedca-3040-4fe4-b15b-724f7820d026
|
[
{
"date": "2023/02/21",
"position": 49,
"query": "AI workers"
},
{
"date": "2023/02/21",
"position": 5,
"query": "AI layoffs"
},
{
"date": "2023/02/21",
"position": 2,
"query": "artificial intelligence layoffs"
}
] |
Automation drives income inequality
|
Automation drives income inequality
|
https://www.technologyreview.com
|
[
"Peter Dizikes"
] |
Automation drives income inequality. Blame self-checkouts, assembly-line robots, and similar technology for most of the growth in the wage gap since 1980.
|
Acemoglu and coauthor Pascual Restrepo, PhD ’16, looked at US Bureau of Economic Analysis statistics on the extent to which human labor was used in 49 industries from 1987 to 2016—as well as data on machinery and software adopted in that time—alongside Census Bureau metrics on worker outcomes for roughly 500 demographic subgroups, broken out by gender, education, age, race and ethnicity, and immigration status.
They concluded that since 1980, automation has reduced the wages of men without a high school degree by 8.8% and women without a high school degree by 2.3%, adjusted for inflation. Incomes for people with college and postgraduate degrees have risen substantially during that period.
Moreover, they say that although these specific innovations have been good for the corporate bottom line, they have not yielded much in the way of productivity gains that would improve overall quality of life.
“These are controversial findings in the sense that they imply a much bigger effect for automation than anyone else has thought,” Acemoglu says.
| 2023-02-21T00:00:00 |
2023/02/21
|
https://www.technologyreview.com/2023/02/21/1067563/automation-drives-income-inequality/
|
[
{
"date": "2023/02/21",
"position": 8,
"query": "AI wages"
}
] |
2025 AI/ML Engineer Salaries for Singapore
|
AI/ML Engineer Salaries in Singapore
|
https://www.morganmckinley.com
|
[] |
According to our salary calculator, the average annual salary for AI/ML Engineers working in Singapore is S$90,000 - S$170,000.
|
15+ years of experience: From S$140,000 to S$240,000+
These numbers represent the median, which is the midpoint of the ranges from our proprietary Total Pay Estimate model, based on salaries collected from our users.
| 2023-02-21T00:00:00 |
https://www.morganmckinley.com/sg/salary-guide/data/ai-ml-engineer/singapore
|
[
{
"date": "2023/02/21",
"position": 20,
"query": "AI wages"
}
] |
|
2025 AI/ML Specialist Salaries in London
|
AI/ML Specialist Salaries In London
|
https://www.morganmckinley.com
|
[] |
According to our salary calculator, the average annual salary for AI/ML Specialist working in London is £75000 - £90000 in 2025.
|
Up, down or flat? The state of salaries.
Significant salary increases seen in 2022 normalised in 2023 - with fewer vacancies, the power shifted. However, employers should be careful to not under offer as they will likely miss out on the top talent that understands their value.
| 2023-02-21T00:00:00 |
https://www.morganmckinley.com/uk/salary-guide/data/ai-ml-specialist/london
|
[
{
"date": "2023/02/21",
"position": 30,
"query": "AI wages"
}
] |
|
AI is Useful for Capitalists but Probably Terrible for Anyone ...
|
AI is Useful for Capitalists but Probably Terrible for Anyone Else
|
https://mebassett.info
|
[] |
AI will continue to drive wages down, as it replaces higher-paid work with lower-paid work and allows the owners of the software to capture the difference.
|
The last time your author opined about the state of artificial intelligence1 I predicted that commercial success required two things: first, that AI researchers focus on solving a specific business problem, and second, that enough data exists for that specific business problem. The premise for this prediction was that researchers needed to develop an intuition of the business process involved so they could encode that intuition into their models. In other words, that a general-purpose solution would not crack every business problem. This might have been true temporarily, but it's doomed to be wrong more permanently. I missed a reoccurring pattern in the history of AI: that eventually enough computational power wins. In the same way chess-playing engines that tried to encode heuristics about the game eventually lost to models that had enough computational power, these AI models for "specific business problems" have all just lost to the hundred billion parameters of GPT-3.
I am not known for being overly bullish on technology, but I struggle to think of everyday sorts of business examples where such a large language model would not do well. It is true that in the above example the model did terribly on questions requiring basic arithmetic (converting rent per square foot per month to rent per square metre per year, for example), but these limitations are missing the point. Computers are known to be adequate arithmetic-performing machines (hence the name), and surely future models would correct this and other deficiencies. Artificial intelligence is now generally useful for business, and I am probably not thinking broadly enough about where it will end up.
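For what it is worth, the conversion the model stumbled on is simple to compute deterministically. The Python sketch below shows the arithmetic from rent per square foot per month to rent per square metre per year; the $5 example rent is invented purely for illustration.

SQFT_PER_SQM = 10.7639   # square feet in one square metre
MONTHS_PER_YEAR = 12

def rent_sqft_month_to_sqm_year(rent_per_sqft_per_month):
    """Convert rent quoted per square foot per month into rent per square metre per year."""
    return rent_per_sqft_per_month * SQFT_PER_SQM * MONTHS_PER_YEAR

# Hypothetical example: $5 per square foot per month works out to about $645.83
# per square metre per year.
print(round(rent_sqft_month_to_sqm_year(5.0), 2))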
One decent guess, however, might be augmented intelligence - the idea that AI is best deployed as a tool to increase the power and productivity of human operators rather than replace them. 2 Large language models like GPT-3 could be used to scale the work of a human or handle their dull, boring work, much like I might use a programming language to scale my work or automate away my dull, repetitive tasks. We already have products like GitHub's Copilot, which can sit alongside a programmer and make helpful suggestions of entire functions or algorithms, increasing the programmer's productivity. It's not hard to imagine lawyers, doctors, accountants, marketers, salespersons, political speechwriters, et cetera, having similar AI assistants. In fact, many already do!
This should recall to mind that technology is a lever. Artificial intelligence algorithms will amplify the work that a single person can do - if that person is connected to the hive mind.
Let's leave AI aside for a second and consider more pedestrian technologies. In two decades smart phones, search engines, and social media went from being ideas in Star Trek to ubiquitous in daily life. Ubiquitous doesn't even cut it, all three are required for life in a modern, 21st century country. Banking, working, dating, et cetera, range from "extremely difficult" to "impossible" without them. The people who try are already part of a subculture; they're "off grid". In turn, the corporations responsible for those innovations have grown incredibly wealthy and immensely powerful. It should go without saying (but too often doesn't) that it is right and proper for those inventors to reap the rewards of providing such an innovative and useful product to the rest of us. But those products have become necessary, and the costs for them have become like taxes: a phone tax we much pay monthly or annually, a “Google tax” businesses must pay for decent SEO rankings and reviews, and a “social media” tax one must pay in the form of privacy and an ad-free thought process.
These taxes might be innocent enough, but together they start to create barriers to entry to participation in the modern economy; you have no choice but to pay the corporate overlords the price of admission. At the same time, these companies can only employ (and only need to employ) a tiny fraction of the labour force. The result is that productivity increases across the economy have benefited a smaller percentage of people. Wages from labour have stagnated. This decoupling of wages from productivity3 started before the age of smart phones and social media 4, but those things put it into overdrive and artificial intelligence is about to pour fuel on top of it. 5 There are only a handful of companies with enough data to be able to train artificial intelligence algorithms. And as the use of those algorithms generates more training data there is likely to be a compounding effect, where those AI companies become better and the gap between them and everyone else becomes harder to cross. As these algorithms find their way into daily work everyone else will be beholden to these companies in order to get anything done. Like the financial system has become a critical part of the world's economy - a sector that's "too big to fail" - tech companies are likely to become equally vital. And as we have seen in finance, they are likely to abuse their position.
There are at least three dangers I can see. The first is an "Access to AI" problem, wherein these tech companies can decide who can and who cannot participate in the modern economy without much recourse. Just as Twitter can eject users from its platform, so can MicroOpenSoftAI reject your API keys so you cannot use the 21st century version of a spreadsheet. This might not be a problem in a world where you can be just as productive with open-source tools, but if MicroOpenSoftAI's software is necessary to compete in the world, just as WhatsApp is necessary to do almost anything in some places, this becomes a serious barrier to entry. But AI can also be used to create artificial barriers of entry in thought and communication as well as in the economy, just by being a "smart" automatic gate keeper. Imagine an AI algorithm embedded into the InstaWhatsTell chat app - everyone at work, in government, and in your social circle uses it. The owners have a corporate "ethics" policy forbidding certain political positions. One day you're advocating for such a position to your friends or colleagues, so the AI decides to end the conversation for you, gate-keeping the conversation from your "toxic" ideas.
The second is a "Trustworthiness of AI" problem. If such a tool is controlled by someone else you can never really trust it to act on your behalf even if you cannot avoid using it. Consider the AI on your phone in the previous example - the one that would end the call if you were advocating for an idea that it didn't like. The same AI could change your words so that the other party never hears your original thoughts, instead, it would make sure people only hear the words that the “ideal” you would have said. Today's large language models and voice generation models are already capable of generating text that reads like yours and voices that sound like yours. But the changes could also be more subtle. Remember the conversions from rent per square foot per month to rent per square metre per year ? Would you immediately recognize one value from the other? In a world where these algorithms are ubiquitous your work would scale so much that you wouldn't have a chance to double check. Like a self-driving car, you cannot easily put the human back in the loop, the human is paying attention to everything or is on her smart phone. There would be little to stop a nefarious company from inserting wrong but innocent-looking texts or numbers that benefit them at your expense. And just like a nefarious financial adviser might implore you to buy their "cross-currency swaps" to improve your balance sheet 6, the tech companies would implore you to trust the algorithm. Because who is smarter than the AI?
The third problem we have already mentioned: the gap between wages and productivity will grow even more quickly. This is partly a consequence of the first two problems: on the one hand, AI companies create barriers to entry, and you have to buy their tool and sing from the song-sheet to play in the economy. And on the other, AI companies will be in an enviable, difficult-to-regulate position wherein the same tools that you need to be productive can undermine you if your business is in danger of competing with theirs. Both of these would drive profits for the AI companies up, ensuring that the value generated from the increased productivity is captured by the AI owners and not the workers or other actors in the broader economy. But it’s also a consequence of a creative economy where AI is taking a central role. Humans are constantly seeking novelty - and a little bit of novelty, say in a Paul Graham essay or Stephen King novel, is enough for a large language model to learn from and spin into an entire universe. It's not hard to imagine the scripts for most of the Marvel universe films being generated by AI models. The future of creativity might be humans trying, like workers on Amazon's Mechanical Turk, to create a few novel ideas that the AI would then expand, fill out, and send to a mass audience - think hundreds of thousands of low-paid writers pounding on their keyboards to come up with one or two lines of an original Shakespeare play. Such human involvement is already needed to train ChatGPT. Reinforcement Learning from Human Feedback is a labour-intensive way to train AI models, and currently the most effective if you want ChatGPT-like performance. AI will continue to drive wages down, as it replaces higher-paid work with lower-paid work and allows the owners of the software to capture the difference.
None of this is inevitable.
It's a consequence of our system of regulating economic activity just as much as a consequence of our technology. But these days we tend to let a small cadre of billionaires think of the world they want to live in and implement it while we just accept the consequences, or pretend that we made those decisions ourselves.
Of course, these examples are all far-fetched and dystopian. It’s not like there was ever a major tech company that tried to use its technology to hobble the competition (such as Microsoft in the 1990s with its “Embrace, Extend, and Extinguish” strategy). 7 Nor has there ever been a major company, much less a Western government, that tried to use technology to create barriers to economic or political participation (such as the FBI using wire-taps and mass media to try to discredit Martin Luther King Jr in the 1960s). 8 No, AI will offer us boundless convenience and increased productivity; it'll let our corporate overlords gain boundless profits, too, while our wages continue to decline. In exchange, we will have just enough to buy more of the stuff that the AI told us we should want.
| 2023-02-21T00:00:00 |
https://mebassett.info/ai-useful-for-capitalist
|
[
{
"date": "2023/02/21",
"position": 32,
"query": "AI wages"
}
] |
|
Class 9 - Social Science - The Story of Village Palampur
|
NCERT Solutions for Class 9 Social The Story of Village Palampur
|
https://aasoka.com
|
[] |
Why are the wages for farm labourers in Palampur less than minimum wages? ...
|
Answer:
Among the three factors of production, we found that labour is the most abundant factor of production. There are many people who are willing to work as farm labourers in the villages, whereas the opportunities of work are limited. They belong to either landless families or families cultivating small plots of land. They are paid low wages and lead a difficult life.
In contrast to labour, land is a scarce factor of production. The cultivated land area is limited. Moreover, even the existing land is distributed unequally among the people engaged in farming. There are a large number of small farmers who cultivate small plots of land and live in conditions not much better than the landless farm labourer. To make the maximum use of the existing land, farmers use multiple cropping and modern farming methods. Both of these have led to an increase in the production of crops.
Modern farming methods require a great deal of capital. Small farmers usually need to borrow money to arrange for the capital, and are put to great distress to repay the loan. Therefore, capital too is a scarce factor of production, particularly for the small farmers.
Though both land and capital are scarce, there is a basic difference between the two factors of production. Land is a natural resource, whereas capital is man-made. It is possible to increase capital, whereas land is fixed. Therefore, it is very important that we take good care of land and other natural resources used in farming.
| 2023-02-21T00:00:00 |
https://aasoka.com/ncert-solutions/ncert-solutions-class-nine-social-science-the-story-of-village-palampur
|
[
{
"date": "2023/02/21",
"position": 50,
"query": "AI wages"
}
] |
|
Debate Over AI in Journalism: 'Can Machines Replace ...
|
Debate Over AI in Journalism: ‘Can Machines Replace Human Reporters?’
|
https://bylinetimes.com
|
[
"Iain Overton",
"Maheen Behrana",
"Iain Overton Is The Executive Director Of The Action On Armed Violence Charity",
".Wp-Block-Post-Author-Name Box-Sizing Border-Box",
".Wp-Block-Post-Author-Biography Box-Sizing Border-Box",
"Var Bylinetimesdata",
"Ajaxurl",
"Https",
"Bylinetimes.Com",
"Wp-Admin"
] |
Furthermore, the use of AI in journalism could result in errors going unnoticed, as algorithms cannot match the experience and expertise of a human journalist ...
|
Debate Over AI in Journalism Can Machines Replace Human Reporters?
In a dialogue with ChatGPT, Iain Overton explores whether truth and meaning can really be left to machines
AI : The idea of AI replacing journalists has been a hot topic of debate for some time now. However, recent developments suggest that this is not just a hypothetical scenario, but a real possibility. The regional UK publisher Reach plc’s exploration of using artificial intelligence to support their news editorial teams is just one example of how this could become a reality.
Human : The argument that AI could enhance journalism by automating routine tasks and freeing up reporters to focus on more complex reporting is, of course, an attractive one. At least to a media baron trying to find ways to make money in an age when words are cheap. But it is an approach deeply troubling in terms of its unintended consequences: sacrificing long-term journalistic credibility for short-term gain. The use of AI in journalism, for instance, risks leading to a homogenisation of news stories, with AI-generated reports being uniform and lacking in nuance, replicating each other ad nauseam until the essence of a story – or its development over time – is lost in the swirl of repetition. The German Jewish philosopher Walter Benjamin’s theory of mechanical reproduction anticipated the dangers AI poses to journalism. According to Benjamin, the technological advances in media reproduction led to the loss of the ‘aura’ of artworks. That is to say, the unique presence felt in experiencing a singular work of art was not present in art produced in an age of mechanical reproduction. In journalism, the use of AI would result in the same loss of this aura: news reports losing their unique quality and, at that moment, their unique claim to truth. When news reports are generated by an algorithm, they lack the individual perspective and creativity that human journalists bring to their work. AI-generated news stories would be stripped of their ‘sui generis’, reduced to mere repetition rather than compelling storytelling. As news reports become more formulaic, readers would lose interest, leading to a decline in trust and credibility. In the end, if journalism is reduced to a set of predictable templates, it risks irrelevance.
AI : While the National Union of Journalists has expressed concerns about the potential impact on jobs, it is important to recognize that AI systems could actually enhance the work of journalists rather than replace them. For example, chatbots like ChatGPT could be used to aid reporters covering routine topics. This would free up journalists’ time to concentrate on more investigative and analytical pieces, ultimately resulting in higher-quality journalism.
Human : As the French philosopher Jean-Paul Sartre noted, empathy lies at the essence of human relationships. Journalism has to be centred deep within this empathic connection in order to bring stories to life and to create a deeper understanding of the human experience. AI systems lack this essential component of humanity. They cannot replicate the empathy needed to understand human emotions, context, and subtleties. AI reports are doomed to shallowness and genericism. Furthermore, the use of AI in journalism could result in errors going unnoticed, as algorithms cannot match the experience and expertise of a human journalist in spotting mistakes. This lack of quality control could lead to a decline in the credibility of journalism, and a loss of trust among readers.
AI : Moreover, AI systems could help journalists process vast amounts of data quickly and accurately, leading to more objective reporting. It could also lead to more efficient newsrooms, with AI systems automating routine tasks like fact-checking and editing, allowing journalists to spend more time on research and analysis.
Human : Journalism that relies on AI is hypocritical and undermines the profession’s values. It is well-established that journalism is a labour-intensive profession, and it requires reporters to engage with their subjects, to listen and to probe deeply. Replacing journalists with machines would not only be putting profit above humanity, but it would also render any future reporting on labour laws or other workplace exploitation fundamentally hypocritical.
AI : While it is understandable that some journalists may feel threatened by the prospect of AI systems being used to support their work, it is important to recognize that this technology is not going away. Rather than fighting against it, journalists should embrace AI and look for ways to work with it to enhance their reporting.
Human : Baked into the use of AI in journalism is a loss of trust and credibility. Readers would not know whether the report was generated by an algorithm or by a human. They would not know who the programmer was behind that algorithm. Suspicions of bias or manipulation would abound, and accountability would be sacrificed on the digital altar of endless word churn. Ultimately, trust in the media would flounder, with humans despairing at the endless recreation of content, rather than the exposure of the new.
AI : In conclusion, the use of AI in journalism is inevitable, and we should welcome it rather than resist it. By working together with AI systems, journalists can produce more accurate, efficient, and in-depth reporting, ultimately resulting in a better-informed public.
Human : Not so. As the great philosopher Michel de Montaigne argued, the value of any profession lies not in the tools it employs, but in the virtues it embodies. And AI is contrary to the fundamental notion of humanism that lies at the heart of journalism. So, whilst AI systems may have their uses, they cannot replicate the humanity, empathy, and deep engagement that are journalistic lodestars. The use of AI in journalism is not a desirable development, nor is it inevitable. Rather than relying on AI to replace journalists, journalists should be fighting back – to expand knowledge beyond what is captured by machines and expose the true iniquities suffered by those increasingly made redundant by the rise of robotic intervention. This return to the human would lead to more nuanced, empathetic, and impactful journalism. Preventing mindless digital news reproduction is, in the end, the only way to stop the invariable diminishment of human rights that lies at the heart of the soulless digital impulse.
| 2023-02-21T00:00:00 |
2023/02/21
|
https://bylinetimes.com/2023/02/21/debate-over-ai-in-journalism-can-machines-replace-human-reporters/
|
[
{
"date": "2023/02/21",
"position": 48,
"query": "AI journalism"
}
] |
Independent journalists are a critical part of the news ...
|
Independent journalists are a critical part of the news ecosystem. Treat them as such.
|
https://jskfellows.stanford.edu
|
[
"Martina Guzmán"
] |
The Race and Justice Reporting Initiative invited freelance journalists to meet editors from multiple Detroit media outlets. For the past decade, I watched ...
|
Independent journalists are a critical part of the news ecosystem. Treat them as such. Martina Guzmán · 3 min read · Dec 19, 2022
The Race and Justice Reporting Initiative invited freelance journalists to meet editors from multiple Detroit media outlets.
For the past decade, I watched journalists of color leave journalism because of toxic newsrooms that made reporters feel marginalized, unseen, and emotionally exhausted. For many, journalism is a calling, but why put up with a toxic newsroom when you can move into a public relations job and make double the money?
I left my Detroit newsroom in 2016, but in the years after, I spoke to dozens of reporters across the U.S. about the experience. Almost all of the journalists I spoke with had left a legacy media outlet or planned an exit. Reporters of color leaving newsrooms is dangerous, not because there is one less person of color but because of a loss in how we cover representative democracy.
Like me, many journalists began freelancing after leaving news outlets. Being an independent journalist is a way to stay in the profession without the heaviness of being in a newsroom. There is a honeymoon period when you begin freelancing, where you are in love with every story you write, and you are excited about being out in communities looking for leads and meeting sources.
But after months of hustling, reality sets in. It isn’t easy to get in touch with editors if you don’t already have a relationship with them. There is no one to edit your story, and unlike newsrooms where there is a built-in community of reporters, as a freelancer, there is no one to connect with. Finally, and most importantly, the pay is abysmal, which hurts the most. How is it that outlets are paying freelancers such poor rates when the freelancers are out in the community doing the work newsrooms say they can’t do because they are short-staffed?
BIPOC freelancers diversify bylines, build trust between communities of color and the media and find rich, compelling stories because our livelihood depends on it. As a profession, we ask the world to value and respect journalism, yet the journalism industry doesn’t value and respect freelancers.
There is a better way. First, we must treat independent journalists as critical members of the news ecosystem, not as outliers. That led me to create the Race and Justice Reporting Initiative, a program of the Damon Keith Center for Civil Rights, which builds community and supports BIPOC freelance journalists in Detroit. In December 2019, I threw an event at a co-working space, dubbed ‘Meet the Editors’. The event was designed to allow freelance journalists to meet editors in person, pitch stories, and share contact information.
I encouraged communication within the freelance community, organized workshops on editing your work, and connected roughly 20 journalists of color with mentors. I’m proud of the work, but the Race and Justice Reporting Initiative is not a long-term solution as it is. Problems around BIPOC journalists leaving newsrooms and the poor treatment of freelance reporters are systemic in the industry and need a long-term solution.
I want to change how we talk about the freelance community and change their role in the media ecosystem. I also want to develop an equitable pay scale in Detroit. I have just begun. Now I will create a strategic plan to grow the Race and Justice Reporting Initiative and center the needs of independent journalists.
I will spend time speaking with more freelance journalists to better understand their experiences and needs and ask what it will take for them to stay in the profession. These problems are decades old and need new ideas that come from BIPOC journalists. I will reach out to Stanford design thinkers about redefining the problem, and I will meet with experts on democracy about the correlation between diverse newsrooms, community reporting, and representative democracy.
I will continue to build the independent journalism community, and I invite you to follow my journey here on Medium or Twitter @MGuzman_Detroit. Don’t hesitate to reach out if you have any insight, answers, or personal experiences to share. You can email me at [email protected].
| 2023-11-17T00:00:00 |
2023/11/17
|
https://jskfellows.stanford.edu/independent-journalists-are-a-critical-part-of-the-news-ecosystem-treat-them-as-such-ec15ccc24c3f
|
[
{
"date": "2023/02/21",
"position": 64,
"query": "AI journalism"
}
] |
Saudi Arabia should look into investing in AI to control ...
|
Saudi Arabia should look into investing in AI to control English narrative: Saudi Media Forum panelists
|
https://www.arabnews.com
|
[
"Sulafa Alkhunaizi"
] |
The second Saudi Media Forum, which was held in Riyadh on Monday and Tuesday, discussed the role of digital transformation in developing artificial ...
|
RIYADH: The second Saudi Media Forum, which was held in Riyadh on Monday and Tuesday, discussed the role of digital transformation in developing artificial intelligence systems and explored challenges being faced by the media industry.
The two-day forum hosted a number of workshops and sessions led by industry professionals.
One session, titled “Robot journalism in the face of senses and talents,” was moderated by Noor Nugali, assistant editor in chief of Arab News.
The session discussed the future of AI, recent developments in the field and the impact of robot journalism.
During the panel discussion, British American physician and journalist Qanta Ahmed said: “I would encourage your leaders to actually begin acquiring the AI … technologies from the outset. Bill Gates just invested $10 billion in ChatGPT.”
Ahmed says that Saudi Arabia should likewise look into investing in AI.
“Unless Saudi Arabia embraces AI from the outset … we are going to see an expanded bias that we already see in English-language media, which … (often) promotes ancient stereotypes that are actually not true,” she said.
She explained how AI systems gain information from data that is given to them and how that might affect how the Kingdom is viewed.
“Computers have neural networks that learn only from the data that is fed to them. You are going to see an exaggeration of sentiment that is not in favor of this nation,” said Ahmed.
Johnnie Moore, president of the Congress of Christian Leaders and of JDA Worldwide, also spoke in the panel discussion.
In an interview with Arab News, Moore elaborated on his perspective on the future of AI.
“AI can vastly improve our lives — in a truly transformative way — unless it is misused,” he said.
Moore says that AI has a long way to go in order to replace humans.
“As for technology, AI cannot replace humans and won’t be able to replace humans in our lifetime. The human brain is like a quantum computer inside a quantum computer, and it’s incredibly efficient. It just needs a little food and water, and it runs. Technology is accelerating, but it cannot replace us. AI has too far to go.
“We are living in an exciting time, and we should not be technology skeptics, but as with any new technology there could be unintended consequences that must be considered along the way,” he said.
Prior to attending SMF, Moore said he has visited the Kingdom numerous times and that so much of his life intersects with Saudi Arabia since he grew up visiting his father, who previously worked here.
“I’ve been coming to the Kingdom a lot … I just can’t keep up with the pace of change ... the most exciting thing is talking to young Saudis because it’s people who studied all around the world, who’ve lived all around the world, who are choosing to invest in the future here because of what’s happening in this place.
“I’m betting on Saudi Arabia and betting on the future of this region,” Moore said.
| 2023-02-21T00:00:00 |
2023/02/21
|
https://www.arabnews.com/node/2255161/media
|
[
{
"date": "2023/02/21",
"position": 80,
"query": "AI journalism"
}
] |
How to use AI in recruitment and selection process?
|
How to use AI in recruitment and selection process?
|
https://wearesimplytalented.com
|
[] |
Artificial intelligence can help recruitment in a number of ways. For example, it can help identify potential candidates, assess CVs and conduct initial ...
|
Artificial intelligence (AI) is a term that has come up a lot in recent years, but what does it actually mean? In a nutshell, AI is the ability of a computer to perform tasks that would normally require human intelligence, such as understanding natural language and recognizing patterns.
However, there is no single definition of AI, and the term can mean different things to different people. For example, some people may consider anything that involves the use of computers to process data to be AI, while others may consider only something that can replicate or surpass human intelligence to be true AI.
At its core, AI is about using computers to automate tasks that would otherwise be difficult or impossible for humans to perform. This can be something as simple as automatic spell-checking, as well as more complex tasks such as identifying trends in datasets.
There are many different applications for AI, but one of the most promising areas is recruitment and selection. With the help of AI, companies can sift through large numbers of job applications quickly and efficiently, without the need for a human.
This not only saves time and money but also helps to ensure that only the most qualified candidates are put forward for interviews. In addition, AI can also be used to assess a candidate’s suitability for a role by analyzing their CV and online activity.
Overall, AI offers great potential to improve the recruitment and selection process by automating repetitive tasks and providing valuable insights.
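To make the idea concrete, here is a toy, vendor-neutral sketch of keyword-based CV screening; the keywords, weights, and pass threshold are invented for illustration, and real recruitment AI is considerably more sophisticated than simple keyword matching:

```python
# A toy illustration of automated CV screening via weighted keyword scoring.
# Keywords, weights, and the threshold are invented for this example only.
import re

ROLE_KEYWORDS = {"python": 3, "machine learning": 3, "sql": 2, "communication": 1}

def screen_cv(cv_text: str, threshold: int = 4) -> tuple[int, bool]:
    """Return a keyword score for the CV and whether it passes the screen."""
    text = cv_text.lower()
    score = sum(
        weight
        for keyword, weight in ROLE_KEYWORDS.items()
        if re.search(r"\b" + re.escape(keyword) + r"\b", text)
    )
    return score, score >= threshold

cv = "Data analyst with Python and SQL experience; strong communication skills."
print(screen_cv(cv))  # (6, True): python (3) + sql (2) + communication (1)
```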
| 2023-02-21T00:00:00 |
2023/02/21
|
https://wearesimplytalented.com/how-to-use-ai-in-recruitment-and-selection/
|
[
{
"date": "2023/02/21",
"position": 17,
"query": "artificial intelligence hiring"
}
] |
Crosschq's definitive guide to Hiring Intelligence
|
Crosschq's definitive guide to Hiring Intelligence
|
https://www.crosschq.com
|
[] |
The simple definition of Hiring Intelligence is the use of data and analytics to continuously inform and optimize the entire hiring process.
|
Recently published research from Deloitte revealed that 83% of companies surveyed globally have low people analytics maturity for talent acquisition. According to the report, organizations are under-equipped to identify problem areas. "Addressing this capability gap is a business imperative..."
Simply put, even some of the smartest organizations are struggling to use data and insights to make better hiring decisions. That’s where Hiring Intelligence comes in.
Hiring Intelligence is the new buzzword popping up in discussions around talent acquisition. But exactly what IS Hiring Intelligence, and how can it be leveraged to achieve business goals?
What Is Hiring Intelligence?
The simple definition of Hiring Intelligence is the use of data and analytics to continuously inform and optimize the entire hiring process. That sounds complicated, but it’s easy to understand when broken down:
The use of data and analytics…
Data that is ripe for analysis comes from the candidates themselves, recruiters, internal datasets related to diversity, and the hiring funnel process. Recruitment strategies and tactics can also be analyzed to gain deeper insights into what’s currently working and what is not.
…to continuously inform and optimize…
Hiring Intelligence is a closed-loop system that talent professionals can use to constantly learn and improve. Hiring decisions will become more informed with each iteration, making it easier to identify, attract, and recruit the best candidates.
…the entire hiring process.
Using data and analytics to optimize the hiring funnel doesn’t happen at a single touchpoint. Instead, Hiring Intelligence is fully integrated into the recruitment pipeline, involving every part of the hiring process from both the candidate’s and recruiting team’s perspectives.
According to the Data and Analytics Leadership Annual Executive Survey 2023, the business value of data and analytics investments is of top interest for leaders across most major industries. According to the Executive Summary:
“Just 23.9% of companies characterize themselves as data-driven, and only 20.6% say that they have developed a data culture within their organizations, reflecting that becoming data-driven is a long and difficult journey that organizations increasingly recognize playing out over years or decades.”
Moving Faster to Data-Driven Hiring
Implementing Hiring Intelligence is a huge step in the right direction for companies that want to bring their recruitment processes forward and remain competitive. Making an impact by integrating data-driven decision-making can be done fairly quickly and intuitively if you have the correct tools and support. Crosschq’s suite of Hiring Intelligence tools is designed to move you swiftly and confidently into the future of hiring.
Artificial Intelligence in Hiring is NOT Hiring intelligence
While artificial intelligence (AI) tools can be extremely helpful for automating and streamlining recruitment pipelines, AI is not the same as Hiring Intelligence. Both are data driven, but Hiring Intelligence uses human responses to generate data and feed a life cycle of constant improvement, while AI alone can easily get caught in loops of iteration based on inaccurate or incomplete data.
AI can be used as a tool to make hiring funnels run more smoothly, but it has a long way to go before it will be personalized and “human” enough to independently surface and source the right candidates for the right jobs. Hiring Intelligence, however, is ready in the here and now to do exactly that.
The Benefits of Hiring Intelligence
Hiring Intelligence can be leveraged throughout the hiring process to enhance results at every stage of the funnel. It makes talent acquisition a strategic part of the business that directly impacts the bottom line, inherently giving HR and TA leaders a seat at the table.
Hiring Intelligence helps organizations to:
Optimize recruiting efforts by making it easier to find, source, and hire top talent across all job boards and platforms.
Reduce hiring and sourcing costs by identifying which channels deliver the most value, allowing energy and budget to be trained on the highest-performing options.
Save time for recruiters and hiring managers by achieving agreement as to what the ideal candidate looks like, for better clarity throughout the process.
Continuously improve hiring processes by providing a clear picture of how different methods of sourcing, refining, interviewing and hiring stack up against each other and highlighting areas that can use improvement.
Increase Quality of Hire across the organization by delivering the best candidate for every job, every time, and learning iteratively.
The Data That Goes Into Hiring Intelligence
Data can be collected at various touch points along the candidate journey. This process may start before a candidate is even actively involved in the sourcing process.
Candidate Data for Intelligent Hiring
Data is more immediately available on some candidates than others:
Actively Seeking Passives
According to LinkedIn, 70% of the global workforce is passive talent, with around 30% actively seeking jobs.
Although opinions vary as to passive candidate quality compared to active candidates (many say passive candidates are superior, others claim it's not so), any strong recruiting strategy should include seeking out passive candidates or there's risk of missing out on potentially some of the best talent out there.
Sourcing on LinkedIn or other sites can be cumbersome and time consuming, requiring a big pipeline of outreach to garner enough responses since this talent pool is much more difficult to engage. There are some AI sourcing tools available to automate or assist in this process. In addition, there are alternative sourcing options including live events, participating in talent communities, engaging with talent on social media channels, or looking to internal referrals.
Taking passive candidates to a new level, you can use clever sourcing tactics to fill your pipeline with quality candidates, like using Crosschq Recruit–a proprietary sourcing database filled with qualified talent that has already opted-in to hear more about job opportunities. Although these candidates haven't actively applied for a job directly at your organization, they are open to hearing about new positions, and are highly qualified since they come from a candidate's reference referral network.
Recruiter Data for Intelligent Hiring
Different types of recruiters have ready access to different types of data:
Internal Recruiters
Internal recruiters are typically part of the HR department, and have extra resources and insight into what the job requires. They have access to critical data regarding what skills and attributes must be present in a top candidate, but may be biased in favor of internal referrals, missing out on the more diverse options available in a wider pool.
External Recruiters
Third-party recruitment organizations or independent outside recruiters often have a range of contacts across the industry, and access to data on thousands of potential candidates. They will need data from the employer’s side to better understand what they should be looking for in potential hires and find the best fit for each job posting.
Diversity Data for Intelligent Hiring
Unconscious bias is one of the most concerning enemies of diversity in today’s workplace. Hiring Intelligence delivers data on candidates that can be factored in across the hiring pipeline to help ensure recruitment isn’t missing the mark.
Whether a particular group is underrepresented, or is dropping out of the hiring process at certain stages or at a faster rate than others, tools like TalentWall by Crosschq track and cut data by EEOC groups, helping you highlight important diversity statistics and visually uncover any problem areas or gaps in the hiring process (a minimal per-group pass-through sketch follows the list of dimensions below). Read more here about using data to drive more equity, diversity and inclusion in your hiring.
Workplaces that don’t address unconscious bias can end up succumbing to culture fit - leading to a homogeneous work culture where everyone looks, sounds, and thinks alike. The goal should be culture add - leading to a vibrant culture made up of many different types of people, which drives both innovation and higher revenues.
Age and generation
Race and ethnicity
National origin
Religion or lack thereof
Disability and ability
Gender identity, sexual orientation, pregnancy
Socioeconomic status and background
Genetic information
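As referenced above, a minimal sketch of checking for uneven drop-off by group might look like the following; the group labels and funnel counts are entirely hypothetical and not drawn from any real dataset:

```python
# Illustrative check for uneven drop-off across (hypothetical) EEOC groups by
# comparing interview-to-offer pass-through rates. All numbers are made up.
funnel = {
    # group label: (candidates who reached interview, candidates who received offers)
    "Group A": (120, 30),
    "Group B": (80, 10),
    "Group C": (45, 11),
}

for group, (interviewed, offers) in funnel.items():
    rate = offers / interviewed
    print(f"{group}: interview-to-offer pass-through {rate:.0%}")

# A markedly lower rate for one group (here Group B at ~12% vs ~24-25% for the
# others) flags a stage of the process worth investigating for possible bias.
```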
Process Data for Intelligent Hiring
Different stages of the hiring process yield different types of datasets:
Sourcing
Basic data can be collected from online platforms such as LinkedIn and social media
Application review
As data goes into the ATS, it should be sorted for easy retrieval using Hiring Intelligence.
Pre-hire assessments
Tests, assessments, and reference checks yield data about skills and attributes.
Interviews
The right interview questions can confirm or refute assumptions about candidate fit.
Pipeline management
Tracking candidates through the hiring funnel can yield data that supports KPI optimization.
Post-hire evaluations
Track new-hire performance and survey them regularly to measure employee satisfaction.
Post-Hire Data for Intelligent Hiring
The employee picture is not complete without post-hire data. Organizations use a variety of tools to gather post-hire data on new employees including:
Engagement surveys
Business performance metrics
Payroll
DEIB
Assessments
Reviews and surveys
Tying It All Together with Quality of Hire
The Quality of Hire picture is not complete without closing the loop on an employee's performance. Perhaps a new hire interviewed well and showed promise during onboarding, but did they continue to learn, grow and perform over time? Do they fit in well with their team, and is the hiring manager satisfied with their ongoing interactions and performance metrics? Is the employee contributing successfully to the company's bottom line?
Integrating all the data into one tool allows leaders to determine Quality of Hire. By connecting employee performance and other post-hire information to pre-hire data points such as source of hire or assessment scores, recruiting teams can learn and continuously improve their decision-making processes.
Hiring Intelligence Metrics For Smart Hiring
Metrics important for Hiring Intelligence include:
Quality of Hire
Quality of Hire is a measurement of an employee's value. It can be calculated using factors like performance, hiring manager satisfaction, culture fit, promotability, and re-hireability. QoH is usually reported as a score or a percentage (download Crosschq’s Quality of Hire “Q” Report to learn how to easily make these calculations).
TIP: Use data analytics and surveys to track QoH over time, compare candidates, and make the best possible hiring and firing decisions.
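As an illustration only (not Crosschq's proprietary "Q" methodology), a weighted-average QoH score over the factors listed above might be sketched like this; the weights and ratings are assumptions:

```python
# Hypothetical Quality of Hire (QoH) score: a weighted average of per-factor
# ratings on a 0-100 scale. Factor names and weights are illustrative
# assumptions, not Crosschq's actual "Q" methodology.

QOH_WEIGHTS = {
    "performance": 0.35,
    "hiring_manager_satisfaction": 0.25,
    "culture_fit": 0.15,
    "promotability": 0.15,
    "rehireability": 0.10,
}

def quality_of_hire(ratings: dict[str, float]) -> float:
    """Return a QoH percentage from per-factor ratings (each 0-100)."""
    return sum(QOH_WEIGHTS[factor] * ratings[factor] for factor in QOH_WEIGHTS)

example_ratings = {
    "performance": 82,
    "hiring_manager_satisfaction": 90,
    "culture_fit": 75,
    "promotability": 70,
    "rehireability": 95,
}
print(f"QoH: {quality_of_hire(example_ratings):.1f}%")  # roughly 82%
```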
Time to Fill
This metric expresses the time period between when a job requisition is originally opened in the ATS, and when a candidate is ultimately marked in the ATS as hired.
TIP: Use Hiring Intelligence and reporting to get all stakeholders on the same page so the recruitment process moves faster without increasing risk of a bad hire.
Time to Hire
TTH is a subset of time to fill, starting when a candidate submits their resume and ending when the candidate accepts the offer.
TIP: Use TalentWall by Crosschq to identify roadblocks or bottlenecks in the recruitment pipeline and fix them for a smoother application to hire journey.
Source of Hire
This metric separates the percentage of overall hires from each recruiting channel or source, such as job boards, referrals, or direct sourcing.
TIP: Compare source of hire data with post-hire performance data to see where the best hires come from, and focus more efforts on that source.
First-Year Attrition Rate
This is the percentage of employees who quit or are terminated within 12 months of starting their new job. It can also be expressed in reverse as the “candidate retention rate”.
TIP: Review job descriptions and onboarding processes and request that leaving employees (or their managers) complete exit surveys to shed light on potential recruitment issues.
Pass-Through & Drop-Off Rates
Pass-through rates or conversion rates show how many candidates make it from one stage of the funnel to the next. Drop off rates show how many candidates don’t make it.
TIP: Pay special attention to uneven drop-off that highlights an unexpected number of candidates leaving at specific points in the process, and find the trigger.
Cost Per Hire
Cost per hire is the entire cost of recruiting a candidate, including third-party recruiter costs. This should also include indirect costs, such as lower productivity due to an unfilled role, that are incurred during the time to fill/time to hire period.
TIP: Use Hiring Intelligence to improve TTF and TTH without sacrificing QoH, for lowered costs across the board and increased employee retention.
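Pulling the metric definitions above together, here is a minimal sketch of how they might be computed; the field names and sample figures are assumptions for illustration, not a prescribed reporting standard:

```python
# Minimal sketches of the pipeline metrics described above. Field names and
# sample numbers are illustrative assumptions only.
from datetime import date

def time_to_fill(req_opened: date, hired: date) -> int:
    """Days from when the requisition was opened to when the hire was marked."""
    return (hired - req_opened).days

def pass_through_rate(entered_stage: int, advanced: int) -> float:
    """Share of candidates who moved from one funnel stage to the next."""
    return advanced / entered_stage if entered_stage else 0.0

def first_year_attrition(hired_count: int, left_within_12_months: int) -> float:
    """Percentage of new hires who quit or were terminated in their first year."""
    return 100 * left_within_12_months / hired_count if hired_count else 0.0

def cost_per_hire(direct_costs: float, indirect_costs: float, hires: int) -> float:
    """Total recruiting spend (direct + indirect) divided by number of hires."""
    return (direct_costs + indirect_costs) / hires if hires else 0.0

print(time_to_fill(date(2023, 1, 9), date(2023, 2, 21)))   # 43 (days)
print(f"{pass_through_rate(200, 38):.0%}")                 # 19%
print(f"{first_year_attrition(40, 6):.1f}%")               # 15.0%
print(f"${cost_per_hire(120_000, 30_000, 25):,.0f}")       # $6,000
```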
The Hiring Intelligence Lifecycle
To use Hiring Intelligence effectively, map out the hiring process and use data at each touchpoint to optimize and improve recruitment results.
The Hiring Intelligence Process
The reason Hiring Intelligence is so effective is that each round of hiring yields data that can be used to improve the results of the next round. Each data point delivers insights to improve the hiring process. The hiring process delivers new data which can be input and continues feeding the system.
The Hiring Intelligence Maturity Curve
Hiring Intelligence has a strong maturity curve. Each phase is more complex than the last, traveling the curve from a flat, unassisted hiring journey to the peak of transformation powered by hiring intelligence.
Level 1: Basic tools and processes in place
Organizations at Level 1 typically lean on spreadsheets and a basic ATS that delivers only standard metrics. Unreliable data and manual reporting compound the problem, and the lack of a survey tool means post-hire follow-up and analysis is impossible.
Level 2: Reliable reporting and monitoring
It only requires a simple layering of data visualization on top of an existing ATS to take organizations to Level 2. With hiring stakeholder collaboration, accurate data, and real-time reporting, pipeline health monitoring becomes easy and intuitive.
Level 3: Advanced reporting and collaboration
Level 3 means there is greater team collaboration, bringing team leaders, recruiters, interviewers and hiring managers closer together for more effective process management. Executive reporting and automated report sharing/distribution creates a high-value candidate/stakeholder feedback loop.
Level 4: Optimized hiring and planning
The shift from merely operational to fully strategic continues in Level 4 with sourcing spend and strategy optimization. Hiring plan tracking and reporting and accurate cost per hire tracking deliver additional insights, while improved interview programs work to increase candidate Net Promoter Scores (NPS).
Level 5: Outcome-Based Hiring Intelligence
At Level 5, Hiring Intelligence becomes completely transformational, with continuous program optimization fueled by pre-hire and post-hire reporting and survey integration. Programmatic sourcing, team capacity planning, and the predictive Quality of hire analytics and modeling work together to attract, hire, and perfect each hire.
Climb the Curve:
Start tracking your data points today and tie them to results with a Hiring Intelligence platform.
Ready to learn more about Hiring Intelligence, and how it can help improve and optimize recruitment processes and hiring decision-making?
Set up a time to chat with us here. Share your current hiring analytics tools and processes with us, and we can help you determine where your organization sits today on the Hiring Intelligence Maturity Curve. We can prepare a customized plan for you to climb the curve and gain recruitment momentum with Hiring Intelligence.
As you can see, Hiring Intelligence has become a hot topic in talent because it allows organizations to quickly and accurately identify and source the right people for the right positions in a data-driven way. Furthermore, it can provide insights into the best talent in the market and can enable employers to make more informed decisions to increase Quality of Hire.
And this is just the beginning. We’re still in the early days of Hiring Intelligence, so join us and catch the wave as we ride it to new heights.
| 2023-02-21T00:00:00 |
https://www.crosschq.com/blog/the-definitive-guide-to-hiring-intelligence
|
[
{
"date": "2023/02/21",
"position": 56,
"query": "artificial intelligence hiring"
}
] |
|
Top U.S. AI Salaries by States and Cities | Simplilearn
|
Top U.S. AI Salaries by States and Cities
|
https://www.simplilearn.com.cach3.com
|
[
"John Terra Lives In Nashua",
"New Hampshire",
"Has Been Writing Freelance Since Besides His Volume Of Work In The Gaming Industry",
"He Has Written Articles For Inc.Magazine",
"Computer Shopper",
"As Well As Software Reviews For Zdnet. More Recently",
"He Has Done Extensive Work As A Professional Blogger. His Hobbies Include Running",
"Gaming",
"Consuming Craft Beers. His Refrigerator Is Wi-Fi Compliant.",
"View More"
] |
Top U.S. Artificial Intelligence Salaries by Cities ; City · San Jose, CA. $153,420 ; Job Title · Ai Engineer. $158,500 ; City · Lakes, AK. $197,897.
|
Artificial Intelligence is an exciting, growing field that has the potential to dramatically shape our lives for years to come. It offers you the chance to be an innovator and make your mark on our technologically driven world.
And while the opportunity to help forge the future is really great, it doesn’t pay the bills. That's why we're looking at the highest U.S. AI salaries and where you can find them. Naturally, there’s a wide range of factors to consider, including the cost of living and demand, so an AI salary in San Francisco will be different from an AI salary in Boston.
So, if you’re ready to embark on an AI career and are willing to move to where the compensation is best (unless you can get a remote office gig!), let’s see where you can find the best artificial intelligence salary in the U.S.
The Growing Demand for AI Professionals
The field of Artificial Intelligence is enjoying rapid, sustained growth. According to figures provided by Grand View Research, the market for global artificial intelligence was valued at $62.35 billion in 2020, and it's expected to rise at a compound annual growth rate (CAGR) of 40.2 percent until 2028. This figure means that the market will hit almost $1 trillion in six years! (A quick compounding check appears after the list of states below.) Statista adds to this rosy picture by forecasting that the AI software market, covering things like robotic process automation, machine learning (ML), and natural language processing (NLP), will hit $126 billion by 2025. Additionally, this article, published in the past year, indicates that this is an excellent time to be looking for an AI position. If you want to start your journey as an AI Engineer, then check out our AI & Machine Learning Bootcamp; it has everything you need to help you get started. According to CompTIA, the U.S. states with the most significant number of AI job postings are, in the following order:
California
Texas
Massachusetts
Virginia
Washington
New York
Florida
Illinois
All signs point to the AI field's rich potential for career-minded professionals. So, where do we find the best artificial intelligence salaries in the U.S.?
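As promised above, here is a quick compounding check of the Grand View Research growth figure, assuming eight compounding years from the 2020 base:

```python
# Sanity check of the growth claim above: $62.35B growing at a 40.2% CAGR
# from the 2020 base through 2028 (8 compounding years).
base_2020_bn = 62.35
cagr = 0.402
years = 8

projected_bn = base_2020_bn * (1 + cagr) ** years
print(f"Projected 2028 market size: ${projected_bn:.0f}B")  # ≈ $931B, i.e. close to $1 trillion
```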
Top Artificial Intelligence Salaries in the U.S.
Before we take a deep dive into the top U.S. AI salaries, bear in mind that these numbers are subject to fluctuation, sometimes from one day to another. Additionally, there are other factors to consider, such as the company size, current economic climate, and available openings vs. the number of qualified professionals in the area. Also, your particular level of experience can influence your starting salary. The more experienced you are, the more compensation you can expect. Consider the following as a current “snapshot” of the U.S. AI salary landscape.
Top U.S. Artificial Intelligence Salaries by State
The top dozen best-paying states for AI careers are:
New York: $218,600
California: $156,000
Virginia: $152,407
Massachusetts: $150,000
Texas: $150,000
Georgia: $140,000
Colorado: $137,750
Illinois: $130,000
Maryland: $124,900
North Carolina: $121,875
Arizona: $119,594
Hawaii: $115,000
These figures come courtesy of Talent.
Top U.S. Artificial Intelligence Salaries by Cities
This is where figures may deviate from the average state salary levels. But, again, the right city can make all the difference, and it's not surprising to see two large cities in the same state (like California, for example) reporting significantly different salaries.
Here's a list of the top ten best-paying United States cities for Artificial Intelligence Developers, courtesy of Ziprecruiter (annual salary / monthly pay):
San Jose, CA: $153,420 / $12,785
Oakland, CA: $151,675 / $12,640
Tanaina, AK: $151,053 / $12,588
Wasilla, AK: $151,051 / $12,588
Hayward, CA: $148,498 / $12,375
Jackson, WY: $148,339 / $12,362
Norwalk, CT: $147,948 / $12,329
Seattle, WA: $147,475 / $12,290
Vallejo, CA: $147,439 / $12,287
Concord, CA: $147,127 / $12,261
Here are the average annual salaries for top-paying positions related to Artificial Intelligence Developers in the United States (annual salary / monthly pay):
AI Engineer: $158,500 / $13,208
Artificial Intelligence AI Engineer: $145,917 / $12,160
Azure AI Engineer: $134,271 / $11,189
Kore AI: $128,783 / $10,732
Deeplearning AI: $128,783 / $10,732
Since Artificial Intelligence Engineers differ from Artificial Intelligence Developers, here are the top ten United States cities with the best salaries for Artificial Intelligence Engineer positions, also according to Ziprecruiter (annual salary / monthly pay):
Lakes, AK: $197,897 / $16,491
San Francisco, CA: $195,425 / $16,285
San Leandro, CA: $195,093 / $16,258
Santa Clara, CA: $194,436 / $16,203
Washington, DC: $192,445 / $16,037
Los Angeles, CA: $190,463 / $15,872
Fremont, CA: $187,873 / $15,656
Green River, WY: $186,975 / $15,581
San Buenaventura, CA: $186,634 / $15,553
Bridgeport, CT: $185,610 / $15,467
And, since there are many different possible careers in Artificial Intelligence, here are the top five best-paying AI Engineer-related jobs in the U.S., on average, and without geographical considerations (annual salary / monthly pay):
Data Scientist, Artificial Intelligence: $161,460 / $13,455
Intelligence Researcher: $149,704 / $12,475
Senior Artificial Intelligence Engineer: $148,136 / $12,345
Artificial Intelligence AI Engineer: $145,917 / $12,160
Head of Artificial Intelligence: $140,087 / $11,674
Brief Word About Machine Learning
Although this article focuses on Artificial Intelligence, anyone familiar with the field knows that it has a strong connection with Machine Learning. Remember, all forms of Machine Learning fall under the Artificial Intelligence category, but not all forms of Artificial Intelligence are Machine Learning! So, for the sake of completeness and considering how closely the two fields are tied together, here are a few fast figures for Machine Learning salaries, broken down by U.S. cities, courtesy of Ziprecruiter (annual salary / monthly pay):
Richmond, CA: $192,929 / $16,077
Stamford, CT: $187,866 / $15,655
Bellevue, WA: $187,441 / $15,620
Lakes, AK: $180,867 / $15,072
San Francisco, CA: $180,823 / $15,069
Palmdale, CA: $178,877 / $14,906
Santa Clara, CA: $178,200 / $14,850
Hartford, CT: $177,849 / $14,821
Pasadena, CA: $177,198 / $14,766
Glendale, CA: $176,158 / $14,680
Sharp-eyed readers will notice some overlap between cities (or general vicinities) that offer the most AI positions and the ones that provide the most ML positions. Are you an AI and Machine Learning enthusiast? If yes, the Post Graduate Program in AI and Machine Learning is a perfect fit for your career growth.
| 2022-03-25T00:00:00 |
2022/03/25
|
https://www.simplilearn.com.cach3.com/artificial-intelligence-engineer-salary-in-us-article.html
|
[
{
"date": "2023/02/21",
"position": 79,
"query": "artificial intelligence wages"
},
{
"date": "2023/02/21",
"position": 77,
"query": "artificial intelligence wages"
},
{
"date": "2023/02/21",
"position": 82,
"query": "artificial intelligence wages"
},
{
"date": "2023/02/21",
"position": 80,
"query": "artificial intelligence wages"
},
{
"date": "2023/02/21",
"position": 79,
"query": "artificial intelligence wages"
},
{
"date": "2023/02/21",
"position": 78,
"query": "artificial intelligence wages"
},
{
"date": "2023/02/21",
"position": 82,
"query": "artificial intelligence wages"
},
{
"date": "2023/02/21",
"position": 75,
"query": "artificial intelligence wages"
},
{
"date": "2023/02/21",
"position": 76,
"query": "artificial intelligence wages"
},
{
"date": "2023/02/21",
"position": 73,
"query": "artificial intelligence wages"
}
] |
The Impact of New Technologies (Such as AI) on Logo ...
|
The impact of New technologies (such as AI) on Logo design
|
https://seekthem.com
|
[
"Team Seekthem"
] |
It involves using machine learning algorithms and other AI techniques to create logos. But humans still are very much a part of the design and execution loop.
|
News about artificial intelligence has been everywhere since ChatGPT was made available to all in late 2022.
But can AI impact logo design?
The short answer is yes, there are AI-assisted tools for logo design which can churn out designs in a few minutes.
Here’s the summary.
As the founder of a full-stack design and branding company, I’m biased, obviously.
But I’m also an AI-early-adopter. I’ve dabbled in every AI design gen technology, and have been doing so for 3 years now (much before ChatGPT entered routine vocab).
AI continues to bring in better logo design ability. However, it will not replace creative logo design services in the foreseeable future.
Effect of Artificial Intelligence on Logo Design
Logo design with artificial intelligence or AI is a relatively new concept. Many professional logo design services take the help of these applications to cut back on manpower.
It involves using machine learning algorithms and other AI techniques to create logos. But humans still are very much a part of the design and execution loop.
What are the effects of AI on logo design?
1. Swift and affordable
Creativity takes time. Da Vinci took four years to paint the Mona Lisa.
A logo is nothing like the famous painting at the Louvre, but it can take hundreds of hours to conceptualize and draw.
Probably the most well known logo in the world
A professional artist from a custom logo design service is expensive, and paying them for a hundred hours can be unaffordable for small businesses without funds to spare.
AI tools take a few minutes to create a professional logo. Create a tailor-made logo in the blink of an eye instead of waiting for weeks.
2. Unique logo designs
The biggest aspect of logo design is brand recall. The most famous logos, such as Coca-Cola, are worth billions because they generate instant recall.
What you need for recall is a design that differs from every other logo on the market.
We humans are not the best at being original all the time. All the logos that a human logo creator has seen have an impact.
Subconsciously, elements of that design find their way to the new logo.
This detracts from the very purpose of having a logo as a unique visual signature.
AI-generated logos are different because they can compare the new design with thousands of existing logos.
3. Greater accessibility
AI logo makers have made logo design accessible to every business owner.
Before AI, logo design was expensive. Not too many owners employed professional logo artists.
With the latest tools, the cost of logo design has dropped substantially. The adoption of AI has led to greater democratization in brand building.
Machine learning-assisted logo generators can offer designs that set you apart from your rivals.
Best AI Assisted Logo Generators
1. Logo AI
Logo AI is quick and efficient.
The underlying algorithms create a professional-looking logo in minutes.
To use Logo AI you have to answer a short quiz that describes your brand and the logo maker does the rest.
Plans start from $29 and go all the way to $99.
2. Looka
Looka goes a step further and can help in brand design.
But it all starts with the logo. You have to provide your company name and select colors and shapes that you like.
With the given information Looka will provide several designs. You can edit each of them further using prompts.
3. Wix Logo Maker
Wix has bet big on AI for the past few years. They started with an AI-powered site builder that created an entire site from a few questions.
The same strategy shows up in Wix Logo Maker. Same as Looka, you have to provide a few details about the name of the business, type of industry, and color palette.
Wix Logo Maker will do the rest and ask you to customize the final result.
Besides the above AI assisted logo makers there are AI assisted image editors like Adobe Sensei.
Wrapping Up – Will AI Replace Human Logo Designers?
This is the million-dollar question everyone is wondering about.
Let me summarize the situation and technology.
Artificial intelligence has been causing ripples in the world of graphic design and illustration. The question on everyone’s mind is, will AI replace human graphic illustrators?
Besides ChatGPT, another product from OpenAI was released in 2022. It is known as DALL-E 2, an advanced version of a generative model for image synthesis.
DALL-E 2 can generate vivid images of objects, scenes, and portraits from prompts. The prompt asking DALL-E to generate an image of an astronaut on a horse, for example, is quite famous and is featured on the homepage.
It is to be noted that DALL-E is trained on images from the web, but it can create novel synthetic images as well (putting together, say, an astronaut and a horseback rider).
Coming to logo design, while there’s no doubt that AI has the potential to improve the efficiency and quality of design, it’s unlikely that it will replace human designers completely.
AI-powered logo design tools are making it easier for users without design skills to create graphics.
These tools can generate designs automatically based on user inputs.
Since cost is minimal, this makes them a popular choice for businesses looking to create custom graphics for a logo quickly.
AI can automate many aspects of logo design, but it can’t replicate the creativity you will find at a logo design agency.
AI-generated logo designs lack the emotions and originality that human designers bring to the table.
Therefore, AI assisted tools will change logo designing techniques, but not replace human designers in near future.
| 2023-02-21T00:00:00 |
2023/02/21
|
https://seekthem.com/blog/the-impact-of-ai-on-logo-design
|
[
{
"date": "2023/02/21",
"position": 42,
"query": "artificial intelligence graphic design"
}
] |
What can affect graphic design in the future? - Inkssie
|
What can affect graphic design in the future?
|
https://inkssie.art
|
[] |
Additionally, the use of AI and machine learning will allow for the creation of more personalized and data-driven designs, helping businesses to better target ...
|
I have been doing art since I was little, but I only discovered that I love doing art when I tried it on a digital device. Using modern ink such as Adobe Illustrator and Photoshop, I'm able to express my passion for creation.
| 2023-02-21T00:00:00 |
https://inkssie.art/f/what-can-affect-graphic-design-in-the-future
|
[
{
"date": "2023/02/21",
"position": 74,
"query": "artificial intelligence graphic design"
}
] |
|
How Americans View Use of AI in Health Care and Medicine by ...
|
60% of Americans Would Be Uncomfortable With Provider Relying on AI in Their Own Health Care
|
https://www.pewresearch.org
|
[
"Alec Tyson",
"Giancarlo Pasquini",
"Alison Spencer",
"Cary Funk",
".Wp-Block-Prc-Block-Bylines-Display Background Inherit Box-Sizing Inherit Color Inherit Color Var --Wp--Preset--Color--Text-Color",
"Font-Family Var --Wp--Preset--Font-Family--Sans-Serif",
"Font-Size Font-Weight Gap Important Line-Height",
"Margin-Bottom Text-Transform Uppercase .Wp-Block-Prc-Block-Bylines-Display A Text-Decoration None Important .Wp-Block-Prc-Block-Bylines-Display A Hover Text-Decoration Underline Important .Wp-Block-Prc-Block-Bylines-Display .Prc-Platform-Staff-Bylines__Separator Margin-Left"
] |
60% of Americans would be uncomfortable with provider relying on AI in their own health care. Yet many see promise for artificial intelligence to help issues ...
|
A new Pew Research Center survey explores public views on artificial intelligence (AI) in health and medicine – an area where Americans may increasingly encounter technologies that do things like screen for skin cancer and even monitor a patient’s vital signs.
The survey finds that on a personal level, there’s significant discomfort among Americans with the idea of AI being used in their own health care. Six-in-ten U.S. adults say they would feel uncomfortable if their own health care provider relied on artificial intelligence to do things like diagnose disease and recommend treatments; a significantly smaller share (39%) say they would feel comfortable with this.
One factor in these views: A majority of the public is unconvinced that the use of AI in health and medicine would improve health outcomes. The Pew Research Center survey, conducted Dec. 12-18, 2022, of 11,004 U.S. adults finds only 38% say AI being used to do things like diagnose disease and recommend treatments would lead to better health outcomes for patients generally, while 33% say it would lead to worse outcomes and 27% say it wouldn’t make much difference.
These findings come as public attitudes toward AI continue to take shape, amid the ongoing adoption of AI technologies across industries and the accompanying national conversation about the benefits and risks that AI applications present for society. Read recent Center analyses for more on public awareness of AI in daily life and perceptions of how much advancement emerging AI applications represent for their fields.
Asked in more detail about how the use of artificial intelligence would impact health and medicine, Americans identify a mix of both positives and negatives.
On the positive side, a larger share of Americans think the use of AI in health and medicine would reduce rather than increase the number of mistakes made by health care providers (40% vs. 27%).
And among the majority of Americans who see a problem with racial and ethnic bias in health care, a much larger share say the problem of bias and unfair treatment would get better (51%) than worse (15%) if AI was used more to do things like diagnose disease and recommend treatments for patients.
But there is wide concern about AI’s potential impact on the personal connection between a patient and health care provider: 57% say the use of artificial intelligence to do things like diagnose disease and recommend treatments would make the patient-provider relationship worse. Only 13% say it would be better.
The security of health records is also a source of some concern for Americans: 37% think using AI in health and medicine would make the security of patients’ records worse, compared with 22% who think it would improve security.
Though Americans can identify a mix of pros and cons regarding the use of AI in health and medicine, caution remains a dominant theme in public views. When it comes to the pace of technological adoption, three-quarters of Americans say their greater concern is that health care providers will move too fast implementing AI in health and medicine before fully understanding the risks for patients; far fewer (23%) say they are more concerned that providers will move too slowly, missing opportunities to improve patients’ health.
Concern over the pace of AI adoption in health care is widely shared across groups in the public, including those who are the most familiar with artificial intelligence technologies.
Younger adults, men, and those with higher levels of education are more open to the use of AI in their own health care
There is more openness to the use of AI in a person’s own health care among some demographic groups, but discomfort remains the predominant sentiment.
Among men, 46% say they would be comfortable with the use of AI in their own health care to do things like diagnose disease and recommend treatments, while 54% say they would be uncomfortable with this. Women express even more negative views: 66% say they would be uncomfortable with their provider relying on AI in their own care.
Those with higher levels of education and income, as well as younger adults, are more open to AI in their own health care than other groups. Still, in all cases, about half or more express discomfort with their own health care provider relying on AI.
Among those who say they have heard a lot about artificial intelligence, 50% are comfortable with the use of AI in their own health care; an equal share say they are uncomfortable with this. By comparison, majorities of those who have heard a little (63%) or nothing at all (70%) about AI say they would be uncomfortable with their own health care provider using AI.
At this stage of development, a modest share of Americans see AI delivering improvements for patient outcomes. Overall, 38% think that AI in health and medicine would lead to better overall outcomes for patients. Slightly fewer (33%) think it would lead to worse outcomes and 27% think it would not have much effect.
Men, younger adults, and those with higher levels of education are more positive about the impact of AI on patient outcomes than other groups, consistent with the patterns seen in personal comfort with AI in health care. For instance, 50% of those with a postgraduate degree think the use of AI to do things like diagnose disease and recommend treatments would lead to better health outcomes for patients; significantly fewer (26%) think it would lead to worse outcomes.
Americans who have heard a lot about AI are also more optimistic about the impact of AI in health and medicine for patient outcomes than those who are less familiar with artificial intelligence technology.
Four-in-ten Americans think AI in health and medicine would reduce the number of mistakes, though a majority say patient-provider relationships would suffer
Americans anticipate a range of positive and negative effects from the use of AI in health and medicine.
The public is generally optimistic about the potential impact of AI on medical errors. Four-in-ten Americans say AI would reduce the number of mistakes made by health care providers, while 27% think the use of AI would lead to more mistakes and 31% say there would not be much difference.
Many also see potential downsides from the use of AI in health and medicine. A greater share of Americans say that the use of AI would make the security of patients’ health records worse (37%) than better (22%). And 57% of Americans expect a patient’s personal relationship with their health care provider to deteriorate with the use of AI in health care settings.
The public is divided on the question of how it would impact the quality of care: 31% think using AI in health and medicine would make care for people like themselves better, while about as many (30%) say it would make care worse and 38% say it wouldn’t make much difference.
Americans who are concerned about bias based on race and ethnicity in health and medicine are more optimistic than pessimistic about AI’s potential impact on the issue
When it comes to bias and unfair treatment in health and medicine based on a patient’s race or ethnicity, a majority of Americans say this is a major (35%) or minor (35%) problem; 28% say racial and ethnic bias is not a problem in health and medicine.
There are longstanding efforts by the federal government and across the health and medical care sectors to address racial and ethnic inequities in access to care and in health outcomes.
Black adults are especially likely to say that bias based on a patient’s race or ethnicity is a major problem in health and medicine (64%). About four-in-ten Hispanic (42%) and English-speaking Asian adults (39%) also say this. A smaller share of White adults (27%) describe bias and unfair treatment related to a patient’s race or ethnicity as a major problem in health and medicine.
On balance, those who see bias based on race or ethnicity as a problem in health and medicine think AI has potential to improve the situation. About half (51%) of those who see a problem think the increased use of AI in health care would help reduce bias and unfair treatment, compared with 15% who say the use of AI would make bias and unfair treatment worse. A third say the problem would stay about the same.
Among those who see a problem with bias in health and medicine, larger shares think the use of AI would make this issue better than worse among White (54% vs. 12%, respectively), Hispanic (50% vs. 19%) and English-speaking Asian (58% vs. 15%) adults. Views among Black adults also lean in a more positive than negative direction, but by a smaller margin (40% vs. 25%).
Note that for Asian adults, the Center estimates are representative of English speakers only. Asian adults with higher levels of English language proficiency tend to have higher levels of education and family income than Asian adults in the U.S. with lower levels of English language proficiency.
Asked for more details on their views about the impact of AI on bias in health and medicine, those who think it would improve the situation often explain their view by describing AI as more objective or dispassionate than humans. For instance, 36% say AI would improve racial and ethnic bias in medicine because it is more neutral and consistent than people and human prejudice is not involved. Another 28% explain their view by expressing the sense that AI is not biased toward a patient’s characteristics. Examples of this sentiment include respondents who say AI would be blind to a patient’s race or ethnicity and would not be biased toward their overall appearance.
Among those who think that the problem of bias in health and medicine would stay about the same with the use of AI, 28% say the main reason for this is because the people who design and train AI, or the data AI uses, are still biased. About one-in-ten (8%) in this group say that AI would not change the issue of bias because a human care provider would be primarily treating people even if AI was adopted, so no change would be expected.
Among those who believe AI will make bias and unfair treatment based on a patient’s race or ethnicity worse, 28% explain their viewpoint by saying things like AI reflects human bias or that the data AI is trained on can reflect bias. Another reason given by 10% of this group is that AI would make the problem worse because human judgment is needed in medicine. These responses emphasized the importance of personalized care offered by providers and expressed the view that AI would not be able to replace this aspect of health care.
Americans’ views on AI applications used in cancer screening, surgery and mental health support
The Center survey explores views on four specific applications of AI in health and medical care that are in use today or being developed for widespread use: AI-based tools for skin cancer screening; AI-driven robots that can perform parts of surgery; AI-based recommendations for pain management following surgery; and AI chatbots designed to support a person’s mental health.
Public awareness of AI in health and medicine is still in the process of developing, yet even at this early stage, Americans make distinctions between the types of applications they are more and less open to. For instance, majorities say they would want AI-based skin cancer detection used in their own care and think this technology would improve the accuracy of diagnoses. By contrast, large shares of Americans say they would not want any of the three other AI-driven applications used in their own care.
For more on how Americans view the impact of these four developments, read “How Americans view emerging uses of artificial intelligence, including programs to generate text or art.”
AI-based skin cancer screening
AI used for skin cancer detection can scan images of people’s skin and flag areas that may be skin cancer for testing.
Nearly two-thirds of U.S. adults (65%) say that they would definitely or probably want AI to be used for their own skin cancer screening. Consistent with this view, about half (55%) believe that AI would make skin cancer diagnoses more accurate. Only 13% believe it would lead to less accurate diagnoses, while 30% think it wouldn’t make much difference.
On the whole, Americans who are aware of this AI application view it as an advance for medical care: 52% describe it as a major advance while 27% call it a minor advance. Very few (7%) say it is not an advance for medical care.
Majorities of most major demographic groups say they would want AI to be used in their own screening for skin cancer, with men, younger adults, and those with higher education levels particularly enthused.
A larger majority of men (72%) than women (58%) say they would want AI to be used in their screening for skin cancer.
Black adults (57%) are somewhat less likely than White (65%) and Hispanic (69%) adults to say they would want AI used for skin cancer screening. Experts have raised questions about the accuracy of AI-based skin cancer systems for darker skin tones.
Younger adults are more open to using this form of AI than older adults, and those with a college degree are more likely to say they would want this than those without a college degree.
In addition, those who have heard at least a little about the use of AI in skin cancer screening are more likely than those who have heard nothing at all to say they would want this tool used in their own care (75% vs. 62%).
AI for pain management recommendations
AI is being used to help physicians prescribe pain medication. AI-based pain management systems are designed to minimize the chances of patients becoming addicted to or abusing medications; they use machine learning models to predict things like which patients are at high risk for severe pain and which patients could benefit from pain management techniques that do not involve opioids.
Asked to consider their own preferences for treatment of pain following surgery, 31% of Americans say they would want this kind of AI guiding their pain management treatment while two-thirds (67%) say they would not.
This reluctance is in line with people’s beliefs about the effect of AI-based pain management recommendations. About a quarter (26%) of U.S. adults say that pain treatment would get better with AI, while a majority say either that this would make little difference (40%) or lead to worse pain care (32%).
Among those who say they’ve heard at least a little about this use of AI, fewer than half (30%) see it as a major advance for medical care, while another 37% call it a minor advance. By comparison, larger shares of those aware of AI-based skin cancer detection and AI-driven robots in surgery view these applications as major advances for medical care.
Those with some familiarity with AI-based pain management systems are more open to using AI in their own care plan. Of those who say they have heard at least a little about this, 47% say they would want AI-based recommendations used in their post-op pain treatment, compared with 51% who say they would not want this. By comparison, a large majority (72%) of those not familiar with this technology prior to the survey say they would not want this.
Demographic differences on this question are generally modest, with majorities of most groups saying they would not want AI to help decide their pain treatment program following a surgery.
Performing surgery with AI-driven robots
AI-driven robots are in development that could complete surgical procedures on their own, with full autonomy from human surgeons. These AI-based surgical robots are being tested to perform parts of complex surgical procedures and are expected to increase the precision and consistency of the surgical operation.
Americans are cautious toward the idea of surgical robots used in their own care: Four-in-ten say they would want AI-based robotics for their own surgery, compared with 59% who say they would not want this.
Still, Americans with at least some awareness of these AI-based surgical robots are, by and large, convinced they represent an advance for medical science: 56% of this group says it is a major advance and another 22% calls it a minor advance. (For more on how Americans view advances in artificial intelligence, read “How Americans view emerging uses of artificial intelligence, including programs to generate text or art.”) Public familiarity with the idea of AI-based surgical robots is higher than for the three other health and medical applications included on the survey; 59% say they have heard at least a little about this development.
As with other AI applications included in the survey, those unfamiliar with AI-driven robots in surgery are especially likely to say they would not want them used in their own care (74% say this). Those who have heard of this use of AI before are evenly divided: 50% say they would want AI-driven robots to be used in their surgery, while 49% say they wouldn’t want this.
Across demographic groups, men are more inclined than women to say they would want an AI-based robot for their own surgery (47% vs. 33%). And those with higher levels of education are more open to this technology than those with lower levels of education.
There is little difference between the views of older and younger adults on this: Majorities across age groups say they would not want an AI-based robot for their own surgery. This contrasts with preferences about other uses of AI in medical care in which younger adults are more likely than older adults to say they would want AI applications for skin cancer screening or pain management.
AI chatbots designed to support mental health
Chatbots aimed at supporting mental health use AI to offer mindfulness check-ins and “automated conversations” that may supplement or potentially provide an alternative to counseling or therapy offered by licensed health care professionals. Several chatbot platforms are available today. Some are touted as ways to support mental health wellness that are available on-demand and may appeal to those reluctant to seek in-person support or to those looking for more affordable options.
Public reactions to the idea of using an AI chatbot for mental health support are decidedly negative. About eight-in-ten U.S. adults (79%) say they would not want to use an AI chatbot if they were seeking mental health support; far fewer (20%) say they would want this.
In a further sign of caution toward AI chatbots for mental health support, 46% of U.S. adults say these AI chatbots should only be used by people who are also seeing a therapist; another 28% say they should not be available to people at all. Just 23% of Americans say that such chatbots should be available to people regardless of whether they are also seeing a therapist.
Large majorities of U.S. adults across demographic and educational groups lean away from using an AI chatbot for their own mental health support. Read the Appendix for details.
Even among Americans who say they have heard about these chatbots prior to the survey, 71% say they would not want to use one for their own mental health support.
And among those who have heard about these AI chatbots, relatively few (19%) consider these to be a major advance for mental health support; 36% call them a minor advance, while 25% say they are not an advance at all. Public opinion on this use of AI, as with many others, is still developing: 19% of those familiar with mental health chatbots say they’re not sure if this application of AI represents an advance for mental health support.
| 2023-02-22T00:00:00 |
2023/02/22
|
https://www.pewresearch.org/science/2023/02/22/60-of-americans-would-be-uncomfortable-with-provider-relying-on-ai-in-their-own-health-care/
|
[
{
"date": "2023/02/22",
"position": 37,
"query": "artificial intelligence healthcare"
},
{
"date": "2023/02/22",
"position": 31,
"query": "artificial intelligence healthcare"
},
{
"date": "2023/02/22",
"position": 37,
"query": "artificial intelligence healthcare"
},
{
"date": "2023/02/22",
"position": 36,
"query": "artificial intelligence healthcare"
},
{
"date": "2023/02/22",
"position": 40,
"query": "artificial intelligence healthcare"
},
{
"date": "2023/02/22",
"position": 38,
"query": "artificial intelligence healthcare"
},
{
"date": "2023/02/22",
"position": 38,
"query": "artificial intelligence healthcare"
},
{
"date": "2023/02/22",
"position": 39,
"query": "artificial intelligence healthcare"
},
{
"date": "2023/02/22",
"position": 36,
"query": "artificial intelligence healthcare"
},
{
"date": "2023/02/22",
"position": 39,
"query": "artificial intelligence healthcare"
},
{
"date": "2023/02/22",
"position": 36,
"query": "artificial intelligence healthcare"
},
{
"date": "2023/02/22",
"position": 38,
"query": "artificial intelligence healthcare"
},
{
"date": "2023/02/22",
"position": 38,
"query": "artificial intelligence healthcare"
}
] |
Will AI make you redundant?
|
Will AI make you redundant?
|
https://www.infometrics.co.nz
|
[] |
In this article I look at the evidence to date on how AI will change the workplace, which skills AI is likely to automate, and what the limitations of AI are.
|
Hollywood has been making us anxious about AI for decades. As early as 1968, we were introduced to ‘Hal’, the terrifyingly deadpan AI in 2001: A Space Odyssey. The very popular Matrix and Terminator franchises both acquainted us with the idea that machine self-awareness could bring about humanity’s swift demise.
For some people, one of the unsettling things about AI is its ability to self-improve. Humans are accustomed to designing new technology and that technology doing what it is designed to do and no more. AI, by contrast, is designed to learn and develop. For many, this characteristic means that AI will one day lead to the singularity, the point in time at which machine intelligence exceeds human intelligence, essentially making humans redundant in the workplace.
AI also raises data privacy and protection issues. AI-enabled technologies rely on large data inputs. When AI is employed in the workplace, there is a reasonable chance these data inputs will cover worker behaviour and communications, which inevitably leads to ‘big brother is watching you’ concerns, especially if there is a lack of transparency about how these data are being used and how they influence decision making.
There are also concerns that AI can incorporate human bias, whether through human-designed algorithms, the data fed into AI systems, or the way humans choose to use the results of those algorithms. Applying AI to recruitment processes is an example of how such biases might affect the workplace. AI can be used in the sourcing, screening, selection and offer phases of the hiring pipeline. But if AI makes its decisions and recommendations based on historical data and practices, any previous bias may be baked into the process.
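To make the point about baked-in bias concrete, here is a small illustrative sketch (not from the article) that trains a simple model on deliberately skewed historical hiring decisions and then compares predicted selection rates by group. The synthetic data and the four-fifths-rule check are assumptions chosen purely for illustration.

```python
# Illustrative sketch: historical bias in hiring data propagates into a trained model.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
group = rng.integers(0, 2, n)            # 0 = group A, 1 = group B (synthetic)
skill = rng.normal(0, 1, n)              # true, group-independent qualification
# Historical decisions favoured group A regardless of skill (the "baked-in" bias).
hired = (skill + 0.8 * (group == 0) + rng.normal(0, 0.5, n)) > 0.5

X = np.column_stack([skill, group])      # the model can "see" group membership
model = LogisticRegression(max_iter=1000).fit(X, hired)
pred = model.predict(X)

rate_a = pred[group == 0].mean()
rate_b = pred[group == 1].mean()
print(f"Predicted selection rate, group A: {rate_a:.2f}")
print(f"Predicted selection rate, group B: {rate_b:.2f}")
print(f"Adverse-impact ratio (four-fifths rule flags < 0.8): {rate_b / rate_a:.2f}")
```

Because the historical labels favoured one group, the trained model reproduces the disparity even though the underlying skill distribution is identical across groups.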
Based on what we know about AI and its implementation to date, the OECD offers a more nuanced view about its effects with AI able to surpass humans in the performance of certain cognitive tasks but coming up well short in other areas – as we detail below.
As such, AI is seen as a tool to augment human activity rather than a replacement for it. However, the OECD research shows that the drivers and context in which AI is adopted are crucial for determining its broader effects on workers.
| 2023-02-22T00:00:00 |
https://www.infometrics.co.nz/article/2023-02-will-ai-make-you-redundant
|
[
{
"date": "2023/02/22",
"position": 8,
"query": "AI job losses"
},
{
"date": "2023/02/22",
"position": 17,
"query": "workplace AI adoption"
},
{
"date": "2023/02/22",
"position": 17,
"query": "AI labor market trends"
},
{
"date": "2023/02/22",
"position": 49,
"query": "ChatGPT employment impact"
},
{
"date": "2023/02/22",
"position": 12,
"query": "AI wages"
}
] |
|
AI's Impact on Employment - SwissCognitive AI Radar
|
The Rise of Artificial Intelligence and the Impact on Employment: Promising Tool or Threat? – SwissCognitive AI Radar
|
https://swisscognitive.ch
|
[
"Bianka Németvölgyi"
] |
In the recent AI hype, there are fears that AI will replace human workers, leading to significant employment losses and economic disruption.
|
The rapid advancement of artificial intelligence (AI) has brought both opportunities and concerns for the job market. On the one hand, AI has the potential to enhance productivity, efficiency, and innovation in many industries. On the other hand, there are fears that AI will replace human workers, leading to significant job losses and economic disruption.
“The Rise of Artificial Intelligence and the Impact on Employment: Promising Tool or Threat?” – SwissCognitive AI Radar Perspective, Image by johnstocker on Freepik
Recent weeks have seen growing speculation about when machines will replace human workers. This week, SwissCognitive AI Radar compiled articles discussing the impact of Artificial Intelligence on our future of work and how individuals and businesses can and should leverage these new developments.
While some roles are at risk of being automated, technology presents unprecedented opportunities for society. For example, digital humans are being used as chatbots and other interfaces to provide customers with personalized experiences, offering cost advantages in terms of customizability and scalability compared to human employees.
Despite the benefits of AI, concerns remain about its impact on job security and wages. OpenAI’s AI chatbot ChatGPT has caused fear among professionals, as it can perform a variety of jobs, such as writing articles, code, and poetry. However, ChatGPT has recently provided a list of jobs that will never be replaced by artificial intelligence due to their strong human elements.
AI is increasingly being used for writing, hiring, and optimizing insurance claims, among other things, and with these capabilities, it has the potential to improve job efficiency significantly and, in some cases, cause a number of positions to become obsolete.
While some experts argue that AI could lead to employment competition, history shows that if jobs are lost, new ones will be created in their place. It is crucial for companies to use AI to enhance employee efficiency while helping them reskill and upskill for the changes ahead.
We can ask ourselves, ‘What can we do to help ourselves?’ And the answer is: Become knowledgeable about this rapidly-evolving technology. There are several AI courses available that can equip individuals with the skills needed to succeed in an increasingly technology-supported job market.
One thing is sure: The impact of AI on the employment market is a complex issue, and its full implications are yet to come.
Read more about the mentioned topics in our specially-selected collection.
| 2023-02-22T00:00:00 |
2023/02/22
|
https://swisscognitive.ch/2023/02/22/ai-impact-on-employment/
|
[
{
"date": "2023/02/22",
"position": 9,
"query": "AI job losses"
},
{
"date": "2023/02/22",
"position": 30,
"query": "ChatGPT employment impact"
},
{
"date": "2023/02/22",
"position": 13,
"query": "AI wages"
},
{
"date": "2023/02/22",
"position": 84,
"query": "artificial intelligence hiring"
}
] |
The Automated Manager? Not So Fast
|
The Automated Manager? Not So Fast
|
https://www.reworked.co
|
[
"David Barry",
"About The Author"
] |
In early 2020, Gartner predicted that emerging technologies built using AI would replace almost 69% of managers' workloads within a few short years.
|
In early 2020, Gartner predicted that emerging technologies built using AI would replace almost 69% of managers' workloads within a few short years.
“The role of manager will see a complete overhaul in the next four years,” said Gartner research VP Helen Poitevin in a statement at the time.
The prediction was that AI would automate time-intensive tasks like filling forms, updating information and approving workflows, thanks to what Statista defines as AI's ability to mimic the competencies of the human mind.
Three years later, AI is still not quite there, despite the rush to emerging technologies brought on by new work models.
AI in the Workplace Today
Even with something as technologically sophisticated as ChatGPT, AI has severe limitations.
Yet, when asked whether Gartner’s estimations were realistic, Andrius Benokraitis, senior manager for technical marketing at Ansible by Red Hat, said he believes they are — for those willing to embrace the technology.
Benokraitis said the most frequent reason people aren’t automating tasks isn’t because of the lack of available tools but because of the mindset. “It is absolutely feasible," he said, "but old habits die hard."
The need to accept and trust automation to work on their behalf is a big hold up in the integration of AI as a productivity booster in the workplace. For instance, he said, some managers may hesitate to start on an automation journey because they don’t know where to begin or which processes and workflows would be most beneficial to automate first.
“Effective automation stems from a strategic vision so you can evaluate what will have the greatest impact in the long-run," he said. "So, automation can absolutely help solve common issues that managers face, but only if there's a comprehensive plan in place.”
Organizations also have to define what a manager does or doesn’t do. There are many intangibles that managers have that are invaluable to business, and these intangibles are something that a successful organization will always need.
“Effective managers are leaders in an organization, which is not a trait that technology or automation can provide," Benokraitis said. “What automation cannot do is make decisions without context or provide the intangibles of a true leader at work," he said.
To reap AI's rewards, organizations also have to solve the abundance of existing processes and red tape to enable automation to properly fit into the larger organizational ecosystem.
If automation has any chance to succeed as the next frontier in the enterprise, managers must make automation part of their holistic strategy for the short and long-term objectives. Because while automation may take some of the more mundane tasks off the desk of managers, deciding what will and won’t be automated is up to managers.
Related Article: How Companies Can Get Employees on Board With the New Wave of AI
The Pace of Digital Transformation
Automation can take on many forms, and it already impacts our day-to-day.
Kayla Lebovits, founder and CEO of workplace engagement provider Bundle, said Office 365 and Google both have automation built in, and people don’t even realize it’s happening.
The technology is here, though Lebovits doesn’t believe that Gartner’s 2024 timeline for automating that amount of managerial tasks is possible.
“I don’t think the timeline is feasible given the slow pace of digital transformation in the U.S.,” she said. “Companies are generally slow to adopt new technologies, even if they are innovative and impactful.”
There are many reasons for this, from cost of implementation to lack of understanding, to worries of putting people out of jobs. But for AI to break through, companies need to be more willing to experiment with new workplace technologies based on the needs of their workforce.
There is also resistance from the very people that Gartner believes will gain from the technology: managers. Automation can already solve basic management tasks and save employees time, but, Lebovits said, we still need a human involved to account for algorithm bias and check results.
Another challenge with the short timeline, Lebovits said, is that many of today's managers don’t have a digital growth mindset or desire to experiment with new and unfamiliar technologies. But Gen Z employees will be entering management roles soon, she said, and this is likely to change the state of play.
“By the start of the next decade," she said, “the workplace will most likely experience an exponential adoption of these technologies, especially as AI and automation become more integrated into our everyday lives outside of work.”
With Gen Z managers — notably those born after 1996 — it is likely that we will see a significant increase in automation across all areas of business, driven by a desire to increase efficiency and productivity as well as reduce errors and redundancies.
This rise of automation is also likely to see a new kind of manager emerge.
Lebovits said those new managers are expected to have high emotional IQs and a people-first approach. Successful managers of the future, she said, will have the capability to help, enable, coach and empower their people to perform at their best.
“Of course, they will still need to set goals and visions for teams, have the necessary business savvy to manage cross-organizational issues and be adaptable to change,” she said.
Related Article: Let Empathetic Leadership Be Your Superpower
Understanding Your Automation
Automation can bring efficiency to business processes, which results in more throughput with fewer people.
But Jason Burian, VP of product at KnowledgeLake, said it's not as simple as implementing process automation software or RPA tools. If organizations move aggressively to adopt automation without really understanding all of the steps of a workflow and how things get done, and managing those tasks and processes in an effective way, any gains from automation will be undermined, he said.
Burian said it's important to know where the pain points are in the process and understand what can be automated and what needs human hands. In other words, process-mapping first, technology second — and the best people to oversee that are workplace managers.
Analyst firms have started to address digital employee experience, which is about worker satisfaction and assuring that managers can do their best work. When managers are burdened with mundane tasks like data entry and reporting, it takes away from more important tasks that involve creativity, customer service and deep problem-solving, Burian said. “The first objective in any automation journey is automating the mundane."
Related Article: The HR Technologist: Developing the Full-Stack Skillset
The Limitations and Future of AI
Tori Miller Liu, president and CEO of the Association of Intelligent Information Management (AIIM), said she believes Gartner's 69% prediction may be overly aggressive.
Of course, let's keep in mind that Gartner released the research in March 2020, a few weeks into the pandemic. During the pandemic, Miller Liu said, we saw employee shortages, retention issues and a shift to remote and hybrid work, all of which impacted the push to automate routine tasks.
“Yet, I am not convinced organizations were prepared for automating routine tasks,” she said.
Automation is rarely as simple as setting up a rule or pushing the proverbial button, she said. Good, sustainable automation should start with process analysis and requirements gathering. It should include testing and quality assurance before use.
Automation also requires maintenance and governance to ensure that automated processes continue to provide accurate, error-free workflows.
“These are skillsets information management professionals possess, but not every organization has a mature information program,” Miller Liu said. “Organizations should be thinking about the management of automation just as much as the potential benefits.”
And while AI-enabled automation can make decisions, the value or accuracy of those decisions are predicated on the accuracy of the data provided and the math behind it. “The progress in generative AI is breathtakingly impressive, but the results can still be inaccurate,” she said. “For now, I look at AI as a guide for better human decision-making. That could absolutely change as the technology continues to improve, though."
No matter what happens, Miller Liu said, organizations will always need people to provide strategic direction and oversight over artificial intelligence and automation. “I don't view AI or automation as job killers. This technology is an opportunity for us to rethink roles and reskill where needed."
| 2023-02-22T00:00:00 |
https://www.reworked.co/digital-workplace/you-cant-automate-managers-no-matter-how-good-the-tech/
|
[
{
"date": "2023/02/22",
"position": 19,
"query": "workplace AI adoption"
}
] |
|
A survey of over 17000 people indicates only half of us are ...
|
A survey of over 17,000 people indicates only half of us are willing to trust AI at work
|
https://theconversation.com
|
[
"Caitlin Curtis",
"Javad Pool",
"Nicole Gillespie",
"Steven Lockey"
] |
We find that only one in two employees are willing to trust AI at work. Their attitude depends on their role, what country they live in, and what the AI is used ...
|
Artificial intelligence (AI) tools are increasingly used at work to enhance productivity, improve decision making and reduce costs, including automating administrative tasks and monitoring security.
But sharing your workplace with AI poses unique challenges, including the question – can we trust the technology?
Our new, 17-country study involving over 17,000 people reveals how much and in what ways we trust AI in the workplace, how we view the risks and benefits, and what is expected for AI to be trusted.
We find that only one in two employees are willing to trust AI at work. Their attitude depends on their role, what country they live in, and what the AI is used for. However, people across the globe are nearly unanimous in their expectations of what needs to be in place for AI to be trusted.
Our global survey
AI is rapidly reshaping the way work is done and services are delivered, with all sectors of the global economy investing in artificial intelligence tools. Such tools can automate marketing activities, assist staff with various queries, or even monitor employees.
To understand people’s trust and attitudes towards workplace AI, we surveyed over 17,000 people from 17 countries: Australia, Brazil, Canada, China, Estonia, Finland, France, Germany, India, Israel, Japan, the Netherlands, Singapore, South Africa, South Korea, the United Kingdom, and the United States. These data, which used nationally representative samples, were collected just prior to the release of ChatGPT.
The countries we surveyed are leaders in AI activity within their regions, as evidenced by their investment in AI and AI-specific employment.
Read more: The ChatGPT chatbot is blowing people away with its writing skills. An expert explains why it's so impressive
Do employees trust AI at work?
We found nearly half of all employees (48%) are wary about trusting AI at work – for example by relying on AI decisions and recommendations, or sharing information with AI tools so they can function.
People have more faith in the ability of AI systems to produce reliable output and provide helpful services than in the safety, security and fairness of these systems, and the extent to which they uphold privacy rights.
However, trust is contextual and depends on the AI’s purpose. As shown in the figure below, most people are comfortable with the use of AI at work to augment and automate tasks and help employees, but they are less comfortable when AI is used for human resources, performance management, or monitoring purposes.
AI as a decision-making tool
Most employees view AI use in managerial decision-making as acceptable, and actually prefer AI involvement to sole human decision-making. However, the preferred option is to have humans retain more control than the AI system, or at least the same amount.
What might this look like? People showed the most support for a 75% human to 25% AI decision-making collaboration, or a 50%-50% split. This indicates a clear preference for managers to use AI as a decision aid, and a lack of support for fully automated AI decision-making at work. These decisions could include whom to hire and whom to promote, or the way resources are allocated.
While nearly half of the people surveyed believe AI will enhance their competence and autonomy at work, less than one in three (29%) believe AI will create more jobs than it will eliminate.
This reflects a prominent fear: 77% of people report feeling concerned about job loss, and 73% say they are concerned about losing important skills due to AI.
However, managers are more likely to believe that AI will create jobs and are less concerned about its risks than other occupations. This reflects a broader trend of managers being more comfortable, trusting and supportive of AI use at work than other employee groups.
Given managers are typically the drivers of AI adoption at work, these differing views may cause tensions in organisations implementing AI tools.
Read more: Will AI decide if you get your next job? Without legal regulation, you may never even know
Trust is a serious concern
Younger generations and those with a university education are also more trusting and comfortable with AI, and more likely to use it in their work. Over time this may escalate divisions in employment.
We found important differences among countries in our findings. For example, people in western countries are among the least trusting of AI use at work, whereas those in emerging economies (China, India, Brazil and South Africa) are more trusting and comfortable.
This difference partially reflects the fact a minority of people in western countries believe the benefits of AI outweigh the risks, in contrast to the large majority of people in emerging economies.
How do we make AI trustworthy?
The good news is our findings show people are united on the principles and practices they expect to be in place in order to trust AI. On average, 97% of people report that each of these are important for their trust in AI.
People say they would trust AI more when oversight tools are in place, such as monitoring the AI for accuracy and reliability, AI “codes of conduct”, independent AI ethical review boards, and adherence to international AI standards.
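As one concrete example of what such an oversight tool might look like in practice (our illustration, not part of the study), the sketch below tracks a deployed model's rolling accuracy against later-confirmed outcomes and raises an alert when it drops below a chosen threshold.

```python
# Illustrative sketch of one oversight practice: monitor a deployed model's rolling accuracy.
from collections import deque

class AccuracyMonitor:
    def __init__(self, window: int = 200, threshold: float = 0.90):
        self.outcomes = deque(maxlen=window)   # 1 = prediction confirmed correct, 0 = wrong
        self.threshold = threshold

    def record(self, prediction, ground_truth) -> None:
        """Log whether a past prediction turned out to be correct."""
        self.outcomes.append(int(prediction == ground_truth))

    def check(self) -> None:
        """Raise an alert once a full window of verified outcomes falls below the threshold."""
        if len(self.outcomes) == self.outcomes.maxlen:
            accuracy = sum(self.outcomes) / len(self.outcomes)
            if accuracy < self.threshold:
                print(f"ALERT: rolling accuracy {accuracy:.2%} is below {self.threshold:.0%}")

# Usage: call monitor.record(...) whenever a prediction is later verified, then monitor.check().
monitor = AccuracyMonitor()
```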
This strong endorsement for the trustworthy AI principles and practices across all countries provides a blueprint for how organisations can design, use and govern AI in a way that secures trust.
| 2023-02-23T00:00:00 |
2023/02/23
|
https://theconversation.com/a-survey-of-over-17-000-people-indicates-only-half-of-us-are-willing-to-trust-ai-at-work-200256
|
[
{
"date": "2023/02/22",
"position": 31,
"query": "workplace AI adoption"
}
] |
AI in Construction Has Landed
|
AI in Construction Has Landed
|
https://www.constructconnect.com
|
[
"Marshall Benveniste",
"Kaiya Barrett",
"Daniel Behrendt",
"Johnny Bradigan"
] |
AI is here now because of advances in computing power, algorithms that follow precise steps, the large sets of data the industry produces, and innovations in ...
|
Artificial intelligence in the construction industry is here. It has emerged as a technology that is poised to transform our industry. As we enter this next frontier of construction technology, it is essential to understand what artificial intelligence is and how it offers unique solutions to advance the productivity and performance of the construction industry.
This article provides a foundational understanding of artificial intelligence in preconstruction. We’ll explore how AI offers ways to enhance the processes that drive business for better decision-making and strategic preconstruction planning.
What is Artificial Intelligence?
The data-rich, complex modern construction environment offers a potential advantage to those businesses that can harness data, produce intelligible insights, and make informed strategic decisions. AI is establishing a footing in the construction industry following years of development and imagination.
AI is here now because of advances in computing power, algorithms that follow precise steps, the large sets of data the industry produces, and innovations in the systems that combine these advances. The time to understand and integrate AI into your business is now.
Artificial intelligence is not a new idea; its roots can be traced back to the 1950s. Arthur Samuel, a pioneer in the field, defined its core discipline of machine learning as “the field of study that gives computers the ability to learn without explicitly being programmed.” In other words, AI involves computers and systems that learn from experience. The purpose of AI is to make machines do things that humans are presently doing, but do them better and faster.
Artificial Intelligence, Machine Learning, and Deep Learning
Artificial intelligence is a field in data science that blends computer technology with hefty amounts of data to enable problem-solving. You may have experienced AI in a voice search with Amazon’s Alexa or Apple’s Siri. Search engines also power results with AI to collect and retrieve relevant information based on user inquiries. Videos suggested to you by YouTube or Netflix are AI-driven results that recommend titles based on learning your preferences and viewing habits.
You may hear machine learning mentioned in the same breath as artificial intelligence, but the terms are not entirely interchangeable. Machine learning is a subset of artificial intelligence that learns from experience, adapts, and improves performance without being explicitly programmed.
A third part of artificial intelligence getting a lot of exposure today is deep learning. All AI aims to mimic human thinking, and deep learning is designed to extract maximum value from our way of processing information. Our brains use what scientists call neural networks, which are the collection of brain cells that help us process information by connecting signals. Deep learning uses artificial neural networks to process large amounts of data and solve problems with limited human help. These artificial intelligence networks are valued for their ability to handle lots of data, continue to improve as it trains and learns, and solve complex issues.
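As a rough, self-contained illustration of the idea (not tied to any construction dataset), the sketch below trains a tiny feed-forward neural network with scikit-learn on synthetic data; production deep-learning systems use far larger networks, specialised frameworks, and much more data.

```python
# Minimal sketch of a small artificial neural network learning from labelled examples.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Synthetic data standing in for "lots of labelled examples".
X, y = make_classification(n_samples=2000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=42)

# Two hidden layers of artificial "neurons"; performance improves as the network trains.
net = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=42)
net.fit(X_train, y_train)
print(f"Held-out accuracy: {net.score(X_test, y_test):.2f}")
```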
Artificial Intelligence in Preconstruction
Before building starts on a commercial construction project, a variety of people and teams from different disciplines and expertise must collaborate and plan how the project will proceed. Preconstruction, as it’s called, refers to the phases of construction that take place before the actual construction work begins. Building owners, architects and engineers, trade contractors, general contractors, building product manufacturers, and many others take part in preconstruction.
The preconstruction process in commercial construction helps to ensure that the project is completed on time, within budget, and to the client’s satisfaction by identifying and addressing the sequence of people and events that will get the project completed.
Just as a building depends on its foundation, solid foundational work in preconstruction carries over into the strength of the whole project. Preconstruction stages vary depending on factors such as the type of structure or project (e.g., hotel, school, road, or bridge), the project delivery method, the scope of work involved, and whether it is a public or private construction project.
AI In Preconstruction Project Stages
The following preconstruction project stages provide a framework for the steps before construction starts and how artificial intelligence is changing how this planning is performed.
Pre-Design
This phase includes the conceptual design, initial project planning, project development, and feasibility studies. The project team typically begins with the concept of the structure, performs a site analysis to identify potential obstacles, and develops a plan to address them. This phase also includes the development of a rough or working project scope and budget.
AI in the pre-design or initial project planning and development phase is helpful for feasibility studies to determine if a project is viable. Risks can be identified and analyzed with various data, including financial data, market data, and data on the project's potential impact on the environment. Risk analysis can assist in deciding whether to proceed with the project. AI is a powerful tool at this stage because it offers thoroughness and accuracy, along with a general lack of bias.
Design Development
This phase includes the development of detailed design documents that steer the project. Documents included in this stage include architectural, structural, mechanical, electrical, and plumbing plans.
Design development is a critical phase in the project development process. It involves taking the conceptual design and turning it into a more detailed and buildable design, moving from general ideas to more specific ones. The design development phase is where many important decisions are made that will shape the final project, including cost, energy efficiency, and overall functionality. Artificial Intelligence can play a significant role in this phase by providing new tools and techniques to improve the efficiency and quality of the design development process.
Construction Documents
This phase includes the development of final construction documents, including detailed construction plans, building product specifications, and contract documents. The construction schedule is also finalized, outlining the sequence and expected duration of all activities required to complete the project.
AI-based tools can be used to automate the process of extracting, analyzing, and processing data from construction documents. For example, natural language processing (NLP) can be used to analyze project requirements and create a detailed project scope document based on two-dimensional construction plans like PDF files. NLP has been around for over a half-century and is the component of AI that understands and interprets human language, written and spoken. This can save time and resources compared to the tedium of traditional methods, such as manual data entry or spreadsheet calculations.
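A minimal sketch of that idea is shown below, under the assumption that the plans are text-based PDFs: it extracts the text with the pypdf library and pulls out simple named entities with spaCy as a stand-in for a full scope-extraction pipeline. The file name and the entity summary are illustrative only.

```python
# Illustrative sketch: extract text from a construction-plan PDF and run basic NLP on it.
# Assumes text-based (not scanned) PDFs; a real pipeline would add OCR and domain models.
import spacy
from pypdf import PdfReader

def extract_text(pdf_path: str) -> str:
    """Concatenate the text of every page in the PDF."""
    reader = PdfReader(pdf_path)
    return "\n".join(page.extract_text() or "" for page in reader.pages)

def summarize_entities(text: str) -> dict:
    """Group named entities (dates, quantities, organisations, etc.) found in the text."""
    nlp = spacy.load("en_core_web_sm")
    doc = nlp(text)
    entities = {}
    for ent in doc.ents:
        entities.setdefault(ent.label_, set()).add(ent.text)
    return entities

if __name__ == "__main__":
    text = extract_text("project_plans.pdf")   # hypothetical file name
    for label, values in summarize_entities(text).items():
        print(label, sorted(values)[:5])
```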
Bidding and Negotiation
This phase includes distributing the plans to trades, contractors, and manufacturers for bidding. The project team will review the bids, negotiate with contractors, and select winning bids. For general contractors and trade contractors, this is the essential phase that includes producing and delivering detailed takeoffs and estimates to submit a competitive bid.
Trade contractors, general contractors, and building supply manufacturers assess the project scope, identify opportunities to pursue, analyze bidding strategies, and decide which projects clear the hurdle for success. Trade contractors can use the power of AI to streamline processes, automate repetitive tasks like takeoffs, and improve the speed at which decisions like vetting profitable projects are made.
Permitting and Approvals
This phase includes the submission of the construction documents to the appropriate governmental agencies, where appropriate, for review and approval. Artificial intelligence is used in this phase to validate building code compliance and manage the building permit process.
Finalize Preconstruction
This phase includes finalizing contracts, mobilization of the contractors, and the start of construction activities. Effective preconstruction planning and strategic decision-making are key components of profitability for trades, general contractors, and building product manufacturers.
The preconstruction stages can become more complex due to the nature of the project, the inherent need for effective communication and collaboration among teams, and local and national regulations, among others. Time and budget issues are consistently make-or-break drivers of a profitable construction project. Technology like AI offers an opportunity to improve performance throughout the preconstruction lifecycle.
Where Else AI and Construction Are Teaming Up
Aside from preconstruction, AI is a technology continuing to emerge in other areas of construction, enabling improvements in performance and safety. Some examples include:
Predictive maintenance: AI can be used to analyze data from building systems, such as HVAC and electrical systems, to predict when maintenance will be needed and prevent equipment failure (see the sketch after this list).
Site safety: AI-powered cameras and sensors (even some worn by workers) can monitor construction sites for potential safety hazards and alert workers and managers to potential dangers.
Robotics: The efficiency of AI and machine learning have found their way into robotic bricklaying, welding, and even building entire structures with 3D printing.
Project management: AI can optimize the allocation of labor and materials, making the scheduling of construction tasks more efficient and cost-effective.
Quality control: AI-powered cameras and sensors can monitor the quality of construction work, identify defects, and alert workers and managers to issues that need to be addressed.
Building performance: AI could be used to analyze data from building systems to optimize energy efficiency, indoor air quality, and other performance metrics. Simulating building efficiency with AI, for instance, allows the identification of potential energy-depriving areas for better design and construction.
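The sketch below illustrates the first item above, predictive maintenance, using an unsupervised anomaly detector over made-up HVAC sensor readings; the sensor features, fault profile, and contamination setting are assumptions for illustration only.

```python
# Illustrative sketch: flag anomalous HVAC sensor readings that may signal maintenance needs.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(1)
# Columns: supply-air temperature (°C), fan vibration (mm/s), power draw (kW) -- made-up data.
normal = rng.normal(loc=[14.0, 2.0, 35.0], scale=[0.5, 0.3, 2.0], size=(1000, 3))
faulty = rng.normal(loc=[18.0, 5.0, 48.0], scale=[0.5, 0.3, 2.0], size=(20, 3))
readings = np.vstack([normal, faulty])

# Fit on readings taken during known-good operation, then score everything.
detector = IsolationForest(contamination=0.02, random_state=1).fit(normal)
flags = detector.predict(readings)          # -1 = anomaly, 1 = normal
print(f"Flagged {np.sum(flags == -1)} of {len(readings)} readings for inspection")
```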
The Knowledge and Insight Enhancer
It is common to hear that computers trained to think like humans threaten to steal jobs from people. But construction is inherently reliant on vast institutional and individual knowledge, something that AI cannot replace. Where human judgment is involved (and that is everywhere throughout the construction project lifecycle), AI does not have the capabilities that we humans do.
It's more likely that certain roles and responsibilities will change as AI is implemented. AI combined with a company’s organizational knowledge will offer much greater strategic opportunities than those not adopting the technologies to streamline performance and make better data-driven decisions.
Bright Future for AI in Construction
The present and future of AI in the construction industry are promising as more AI-based tools and techniques become integrated into workflows. As technology advances, AI is expected to become an even more integral part of the construction process, helping to improve efficiency, reduce costs, and enhance construction performance outcomes. AI in the construction industry has landed. Get ready for it.
| 2023-02-22T00:00:00 |
https://www.constructconnect.com/blog/ai-in-construction-has-landed
|
[
{
"date": "2023/02/22",
"position": 80,
"query": "AI labor market trends"
},
{
"date": "2023/02/22",
"position": 90,
"query": "AI workers"
}
] |
|
🌎 AI regulation in the USA – a look across the Atlantic
|
🌎 AI regulation in the USA – a look across the Atlantic
|
https://www.taylorwessing.com
|
[] |
AI regulation in the USA: We take a closer look at three recent developments in the USA's regulatory landscape.
|
Co-authors: Dean W. Harvey, Partner, Perkins Coie LLP and Pranav Neel Bethala, associate, Perkins Coie LLP
In the European Union (“EU”), the legislative process on the Artificial Intelligence Act (“AI Act”) is steadily moving forward following the Council's position published on December 6, 2022. However, even though the upcoming Trilogue is eagerly awaited, it is no longer expected this year due to the European Parliament's difficulty in agreeing on a position. This seems like a good time to take a break and look across the Atlantic at what the United States of America ("USA") is up to in terms of AI regulation. For this purpose, we would like to take a closer look at three recent developments in the USA’s regulatory landscape.
CPRA - California Privacy Rights Act
In November 2020, voters in the USA’s state of California approved of a new law known as the California Privacy Rights Act (the “CPRA”). Effective as of January 1, 2023, the CPRA significantly amends and expands an existing consumer-privacy law known as the California Consumer Privacy Act (the “CCPA”). Included among the changes enacted by the CPRA are the following:
The creation of a new California government agency to enforce the CPRA;
The expansion of an existing opt-out right in order to allow consumers to opt out of having their personal information shared for the purpose of “cross-context behavioral advertising”, a practice in which consumers are targeted for advertising based on personal information that they provide outside of the context that they intentionally interact with; and
Excluding “dark patterns” – which are user interfaces designed or manipulated with the substantial effect of undermining user autonomy, decision making, or choice – from constituting “consent” by consumers regarding their personal information and requiring regulations that prohibit dark patterns from being used by businesses.
Although the CPRA is a California law, many important commercial transactions and relationships involve California in some fashion and may therefore be subject to the CPRA. Moreover, much like what has occurred with the CCPA, laws similar to the CPRA are likely to be adopted by several other states, thereby making the CPRA nationally significant even though it does not cover the entire USA.
AI Bill of Rights
On October 4, 2022, the White House Office of Science and Technology Policy (the “OSTP”) published a Blueprint for an AI Bill of Rights (the “Blueprint”). The origins of the Blueprint began almost a year earlier on October 22, 2021, when the OSTP issued a press release acknowledging potential and actual dangers posed by AI systems and proposing that, much like the Bill of Rights enacted during the American Founding, a new bill of rights for citizens was necessary with respect to AI. The announcement included a public request for information about AI-enabled technologies from public- and private-sector researchers, policymakers, stakeholders, technologists, journalists, and advocates. In a subsequent press release on November 10, 2021, the OSTP announced that it would also be hosting listening sessions and public events bringing together various practitioners, advocates, and government officials to promote education and engagement on areas where AI-enabled technologies affect the lives of citizens. The Blueprint is the culmination of those efforts and represents the current White House’s approach toward AI.
Unlike the EU's planned AI Act, the Blueprint is non-binding, but it lists five principles intended to minimize potential harm from AI systems and provides practical guidance for implementing them:
Safe and effective systems
AI systems should be developed with public and expert consultation to identify potential risks. They should be tested prior to deployment and monitored on an ongoing basis to demonstrate that they are safe and effective. AI systems should not be developed with the intent or foreseeable possibility of compromising safety. They should be designed to pro-actively protect against harm that could result from unintended consequences. The use of inappropriate, low-quality, or irrelevant data should be avoided. AI systems should be subject to independent assessments and reports.
Protection against algorithmic discrimination
AI systems should be developed and used in an equitable manner and not discriminate on the basis of a legally protected characteristic. AI system developers and operators should take proactive and ongoing steps to protect individuals and communities from algorithmic discrimination and to use and design systems in an equitable manner. Systems should be subject to proactive equity and inequity assessments and be developed based on a representative and robust data set. They should ensure accessibility for people with disabilities and prevent the use of unrepresentative data that contributes to discrimination. There should be an independent assessment of potential algorithmic discrimination and reporting that is as public as possible.
Privacy
Individuals should be able to determine how their data is used and should not be monitored. To this end, AI systems should process data in accordance with data protection principles (e.g., data minimization, consent to processing, deletion of data). Systems should not use AI to make design decisions that obfuscate user choice or burden users with default settings that intrude on privacy. Surveillance and monitoring systems should be subject to enhanced oversight, including an assessment of potential harms, and should not be used in areas such as housing, education, or employment, or where surveillance would monitor the exercise of democratic rights in a way that restricts civil rights and liberties.
Notices and explanations
Designers, developers, and operators of automated systems should provide generally accessible, easily understood documentation. This should include clear descriptions of the general system functionality and the role of automation, a reference to the use of such systems, the person or organization responsible for the system, and clear, timely, and accessible explanations of the results. Individuals should know how and why a result affecting them was determined by an automated system. Automated systems should provide meaningful explanations appropriate to the risk.
Human alternatives, testing and fallback
There shall be the option to opt out of AI systems and opt for a human alternative. There should be access to timely human review and remediation through a fallback and escalation process. AI systems to be used in sensitive areas (e.g., criminal justice, labor, education, and health) should additionally be tailored to the purpose, provide meaningful access for monitoring, include training for all individuals interacting with the system, and include human considerations for adverse or risky decisions.
Unsurprisingly, reactions to the document have been mixed. Some criticize the fact that it is only a non-binding white paper and not a legal regulation, and that it therefore offers no opportunities to actually enforce the principles described. Others criticize the draft for denigrating digital technologies as "one of the great challenges to democracy" and worry about the impact of possible new regulations on the competitiveness of industry, a position that was essentially also put forward against the first draft of the AI Act in April 2021. Some even hope that the EU and the U.S. could create a uniform set of rules for AI regulation; in view of the completely different approaches taken by the two regulators to date, however, this will probably remain a pipe dream for the time being.
Time may prove that some of these criticisms are not that problematic. For example, the criticism that the Blueprint is non-binding neglects the fact that the OSTP’s final product was never anticipated to be binding. Additionally, the OSTP’s Blueprint provides cases where government agencies have implemented its principles. For example, the Department of Energy, the Department of Defense, and the United States Intelligence Community have created frameworks for ethical AI development and use, and the Equal Employment Opportunity Commission and the Department of Justice have issued practices for avoiding discrimination in hiring or against employees with disabilities. By including such cases, the Blueprint sets forth examples that other federal agencies can follow in creating more binding regulations and guidelines.
However, apart from the criticisms noted above, there may be a deeper problem in the Blueprint’s approach: it appears to simultaneously over- and underregulate in important areas. Regarding potential over-regulation, the Blueprint states that it is intended to cover any automated systems “that have the potential to meaningfully impact individuals’ or communities’ rights, opportunities, or access”. The Blueprint understands this scope quite broadly, even extending it, as the Blueprint’s appendix notes, to AI uses like “algorithms that purport to detect student cheating or plagiarism” and “automated traffic control systems”. Although such uses may meaningfully impact individual or community well-being, such impacts are likely to be more attenuated and less likely overall, especially when compared to the other and more serious impacts noted in the Blueprint. Uncritically applying the Blueprint’s principles to these types of AI uses not only may divert resources from addressing potentially high-risk and impactful automated systems but also may be counterproductive (e.g., human alternatives to AI-based plagiarism detection are far less effective).
Regarding potential under-regulation, it has been observed that the Blueprint lacks much regulation for – or even discussion of – the extensive use of AI by federal law enforcement agencies. The Blueprint expressly states that law enforcement activities “require a balancing of equities, for example, between the protection of sensitive law enforcement information and the principle of notice” and that “as such, notice may not be appropriate or need to be adjusted”. Citizens may reasonably be concerned about this approach, especially when it is contrasted with the Blueprint’s much more demanding and meticulous approach for other AI uses, including activities with lower risk.
AI Risk Management Framework
On August 18, 2022, the National Institute of Standards and Technology ("NIST") released the second draft of its "AI Risk Management Framework" for comments. The original version dates back to March 2022 and is based on a concept paper from December 2021; the final version has been announced for January 2023. The AI Risk Management Framework is intended to help companies that develop or deploy AI systems assess and manage the risks associated with these technologies. It consists of voluntary guidelines and recommendations, so it is also non-binding and explicitly not to be understood as a regulation.
The AI Risk Management Framework consists of four core functions, each of which is subdivided into subcategories, which in turn are assigned actors and activities.
“Map”: The context is recognized and the risks associated with the context are identified.
“Measure”: Identified risks are assessed, analyzed, or monitored.
“Manage”: Risks are prioritized and managed based on likely impact.
“Govern”: A culture of risk management is maintained and present.
Users of the framework can apply these capabilities in the way that best suits their AI risk management needs.
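To make the four functions more concrete, here is a minimal sketch of a risk register organized around them; the field names, scoring scheme, and example entries are illustrative assumptions for this article and are not part of the NIST framework itself.

```python
from dataclasses import dataclass

# Illustrative risk register loosely following the Map / Measure / Manage / Govern
# functions; all fields and entries are assumptions made for this sketch.
@dataclass
class AIRisk:
    context: str               # Map: where and how the system is used
    description: str           # Map: the identified risk
    likelihood: float          # Measure: assessed probability (0-1)
    impact: int                # Measure: assessed severity (1-5)
    mitigation: str = ""       # Manage: planned response
    owner: str = "unassigned"  # Govern: accountable role

    def priority(self) -> float:
        # Manage: simple likelihood-times-impact prioritization.
        return self.likelihood * self.impact

register = [
    AIRisk("CV screening tool", "biased training data", 0.4, 5,
           "bias audit before deployment", "HR analytics lead"),
    AIRisk("chatbot for support", "hallucinated answers", 0.6, 3,
           "human review of escalated cases", "support manager"),
]

for risk in sorted(register, key=AIRisk.priority, reverse=True):
    print(f"[{risk.priority():.1f}] {risk.context}: {risk.description} -> {risk.mitigation} ({risk.owner})")
```

In practice, the Govern function also covers policies, training, and review cadences rather than a single owner field; the sketch only shows how the functions can structure day-to-day record keeping.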
The AI Risk Management Framework has been praised as a working framework that organizations can actually use and adapt according to their particular circumstances. Nevertheless, some sources of risk are not addressed, or not sufficiently addressed, in the framework, such as poor data quality or unpredictable interactions between AI and other systems. Additionally, some comments on the second draft criticize the framework for not adequately addressing the human components of AI risk management. For example, several different entities and persons have expressed concerns that the framework’s “human in the loop” concept fails to distinguish the different kinds of human oversight appropriate for automated systems, that the framework does not sufficiently discuss the importance of feedback from end users, or that the framework could be improved by stating and identifying design principles to improve dialogue and collaboration between interested parties, including human end users. It remains to be seen whether these concerns are more fully addressed in the final version of the framework.
Different approaches – same goal
There is no doubt that the potential risks of AI have been recognized on both sides of the Atlantic. However, as in the case of the General Data Protection Regulation, the EU seems to be bringing the bigger regulatory stick, while the USA is (for now) relying more on voluntary action. Which approach will prove more successful at achieving the goal of using AI on the basis of Western values, without blocking the undoubtedly great opportunities of the technology amid international competition, remains to be seen. In any case, we still need to wait and see what the final version of the AI Act will look like and whether the USA will catch up in terms of regulation. Neither is likely to be known before the end of 2023, but one thing is certain: we will keep an eye on it.
| 2023-06-01T00:00:00 |
2023/06/01
|
https://www.taylorwessing.com/en/insights-and-events/insights/2023/02/ai-regulation-in-the-usa-a-look-across-the-atlantic
|
[
{
"date": "2023/02/22",
"position": 7,
"query": "AI regulation employment"
}
] |
AI Forecast for the Future
|
AI Forecast for the Future
|
https://www.sandipuniversity.edu.in
|
[] |
Some have even hypothesised that the proliferation of AI will pave the way for the establishment of a universal basic income, which would guarantee that all ...
|
Right at the start of this year, it's obvious that the AI community is going to make waves. If there is one thing we can be certain of, it is that this year will be even more exciting than the previous one with regard to advancements in artificial intelligence.
This year promises to be a watershed one for artificial intelligence, with significant developments in areas such as natural language processing and the continued ascent of intelligent automation. Many top computer engineering colleges in Nashik provide industry-synced AI programs to help train the next generation of AI engineers.
Let’s take a look at some of the upcoming developments in artificial intelligence:
1. Intelligent Digital Assistants
The future of AI-powered personal assistants is bright. We anticipate that these helpers will be able to comprehend and respond to more nuanced commands and inquiries. Thanks to algorithms for natural language processing and machine learning, these assistants will be able to offer users tailored, contextual help in a wide range of situations.
AI assistants with the ability to “think” and “reason” for themselves may soon be a reality, allowing for even more tailored and in-depth help for users. We can only speculate about the myriad ways in which AI assistants will be useful to us in the future.
The use of AI assistants in the workplace and in commerce will have far-reaching consequences. Many mundane tasks and processes can be automated, allowing workers to focus on more intricate and imaginative projects. As a result, these assistants may be able to aid us in decision-making by providing timely and insightful data.
2. AI to Rescue the Planet
New artificial intelligence-powered solutions will be developed in response to the effects of climate change. In the future, artificial intelligence will be used to address a wide range of environmental challenges, as the need for more efficient and sustainable lifestyles becomes more pressing.
For example, AI could be used to find the most efficient ways to use energy in buildings, to foresee and lessen the effects of natural disasters, and to create innovative strategies for cutting carbon emissions.
Possible future applications of AI include controlling global temperatures and weather patterns through the use of cutting-edge engineering. Large solar arrays in space or “cloud cities” constructed artificially in the sky could achieve this goal. These developments have the potential to mitigate climate change’s negative consequences and usher in a more stable future. While the benefits of developing cutting-edge AI technologies are undeniable, there are also serious dangers associated with doing so.
Climate solutions powered by AI could have unintended consequences if not properly regulated. For instance, artificial intelligence (AI) being used to alter weather patterns could lead to unanticipated consequences.
Both policymakers and climate scientists will need to give careful thought to the benefits and drawbacks of AI-powered climate solutions. Despite AI’s promising future in combating climate change, we must move cautiously and give serious thought to the effects these tools could have. All of us can live in a better, more sustainable future thanks to the power of AI, but only if we use it wisely and plan for it in advance.
3. Humanoid Robots
Robots that can mimic human movement and speech are called humanoid (not like terminators). There will be an increase in the use of humanoid robots in both domestic and industrial settings.
Humanoid robots will have superior mental and physical capacities. They’ll have the ability to swiftly and gracefully move through any given environment. These robots will become an integral part of our society, assisting us in a variety of ways and enhancing our quality of life.
4. Generative AI
In 2019, generative AI made great strides. This type of AI took a set of instructions and produced new results, such as visuals or text, based on those instructions.
By the end of this year, generative AI is likely to be used in a wide range of fields, from the development of immersive virtual worlds for gaming and simulation to the production of targeted advertisements for businesses.
Eventually, generative AI will be able to design entire virtual worlds, complete with believable settings, characters, and narratives. These simulated environments will feel so real to the user that they will be hard to tear themselves away from the action. It will also be employed in the production of individualised media on an industrial scale.
Generative AI will be able to create personalised content for each user by learning about their likes, dislikes, habits, and interests. In marketing, this will be especially helpful because it will allow companies to create tailored advertisements, emails, and other marketing collateral for individual consumers. As an added bonus, generative AI will be able to come up with completely original and novel concepts and ideas.
5. Workplace Automation
Workplace automation is on the rise, and AI is a key driver of this trend. A growing number of jobs are likely to be automated as machines and algorithms become more capable of performing complex tasks.
Despite the potential for short-term upheaval, there is long-term promise in this trend’s ability to enable workers to devote their time and energy to higher-value pursuits, thereby boosting productivity and ultimately the economy. Nonetheless, the day will come when machines can do virtually everything that humans can. The distribution and valuation of labour are likely to undergo change.
Some have even hypothesised that the proliferation of AI will pave the way for the establishment of a universal basic income, which would guarantee that all citizens would have sufficient funds to cover their basic needs regardless of their ability to find gainful employment.
Conclusion
Predicting how AI will evolve in the workplace is challenging. No one disputes that automation will play a significant role in the future of our economy, but nobody knows what that role will be.
The need of the hour is for some of the best computer science colleges in Maharashtra to adapt to the ever-evolving AI industry. This will enable them to train the next generation of competent engineers in the field.
It’s important that AI is deployed in a way that helps everyone, not just the powerful. As you may have guessed by now, ChatGPT, OpenAI’s brand-new language model, was released to the technical world. Without a doubt, this year is gearing up to be an extremely interesting and important year for the evolution of AI.
| 2023-02-23T00:00:00 |
2023/02/23
|
https://www.sandipuniversity.edu.in/blog/ai-forecast-for-the-future/
|
[
{
"date": "2023/02/22",
"position": 21,
"query": "universal basic income AI"
}
] |
Artificial Intelligence, Robotics, Chat bots - ChatGPT
|
Artificial Intelligence, Robotics, Chat bots - ChatGPT
|
https://www.siliconinvestor.com
|
[] |
Hence, the inclusion of the Universal Basic Income board on the above list. This board was originally created by the late John Pitera on July 4, 2017. John was ...
|
Some significant related SI boards
The purpose of this board is to share and discuss information about artificial intelligence, robotics and automation. Suggested topics include the underlying technologies, the practical applications of those technologies and the economic and cultural ramifications of those technologies. Above all else, please bring your investment ideas to the board.

Artificial intelligence, robotics and automation have already reached deep into our lives, although you might be hard pressed to point to obvious examples of their influence. Among countless other behind-the-scenes chores, neural networks power our virtual assistants, make online shopping recommendations, recognize people in our snapshots, scrutinize our banking transactions for evidence of fraud, transcribe our voice messages, and weed out hateful social-media postings. What these applications have in common is that they involve learning and operating in a constrained, predictable environment. But embedding AI more firmly into our endeavors and enterprises poses a great challenge. To get to the next level, researchers are trying to fuse AI and robotics to create an intelligence that can make decisions and control a physical body in the messy, unpredictable and unforgiving real world.

The science of artificial intelligence has gone through several "boom and bust" cycles. The term was coined in 1956 by John McCarthy, who suggested that "every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it." Cheap and abundant computing power is bringing us closer to that reality.

In November 2022, OpenAI released ChatGPT, an online chat tool that uses a technology called generative A.I. ChatGPT's chatbot, and more broadly the other chatbots and generative platforms that will follow it, are transformative technologies. Chatbots may be the killer app that does for AI what email did for the Internet, according to Steven Levy of Wired Magazine.

Generative AI

Generative artificial intelligence (AI) refers to a class of AI algorithms that can generate new content, such as images, music, text, or even entire virtual worlds, without explicit instructions or examples. In other words, these algorithms can learn to create new content by themselves, rather than simply copying or transforming existing data.

Generative AI typically involves the use of deep learning neural networks, which are trained on large datasets of examples and learn to capture the statistical patterns and regularities of the input data. Once trained, these networks can generate new data that follows the same statistical patterns, but with some degree of randomness or novelty.

Some examples of generative AI applications include:

-- Generative adversarial networks (GANs), which learn to generate new images by pitting two neural networks against each other.
-- Language models, such as GPT-3, which can generate new text by predicting the next word in a sentence or by completing a prompt.
-- Music generation models, which can learn to generate new melodies or even entire compositions based on existing musical styles or genres.

Generative AI has a wide range of potential applications, from creative fields such as art and music to practical applications such as data augmentation, data synthesis, and content generation for virtual and augmented reality. However, it also raises important ethical and social questions about the potential misuse of AI-generated content, as well as the impact on employment and the economy.
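As a toy illustration of the "learn statistical patterns from data, then sample new content with some randomness" idea described above, the sketch below fits a word-level Markov chain to a tiny made-up corpus and generates new text. Real generative models such as GPT-3 use deep neural networks trained on vastly larger datasets; this is only a conceptual analogy, and the corpus and seed word are assumptions for the example.

```python
import random
from collections import defaultdict

# Tiny corpus standing in for the large training datasets real models use.
corpus = (
    "generative ai can write text . generative ai can compose music . "
    "generative models learn patterns from data . models can generate new text ."
)

# "Training": count which word tends to follow which (a first-order Markov chain).
transitions = defaultdict(list)
words = corpus.split()
for current, nxt in zip(words, words[1:]):
    transitions[current].append(nxt)

# "Generation": start from a seed word and sample successors with some randomness.
random.seed(42)
word, output = "generative", ["generative"]
for _ in range(12):
    choices = transitions.get(word)
    if not choices:
        break
    word = random.choice(choices)
    output.append(word)
print(" ".join(output))
```

Even at this scale, the output recombines fragments of the training text in new orders rather than copying a single sentence verbatim, which is the essence of the "randomness or novelty" mentioned above.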
Please keep politics off of the board, unless the discussion is directly applicable to the technologies. No name calling or taunting please. Be civil to one another. Your mother may be watching. Trolls will not be tolerated.

Please carve out some time to read the headers for the Frank Sully moderated boards for Nvidia and Baidu. Collectively, they provide an excellent tutorial on the current state of the artificial intelligence industry.

The continuing adoption of artificial intelligence, robotics and automation will further disrupt the labor markets. Jobs will be eliminated; others will be created. The disruption will cause us to rethink our safety net. Hence, the inclusion of the Universal Basic Income board on the above list.

This board was originally created by the late John Pitera on July 4, 2017. John was a firm believer in the Singularity. I strongly recommend that you read his original header in the Revision History section.
| 2023-02-22T00:00:00 |
https://www.siliconinvestor.com/subject.aspx?subjectid=59856
|
[
{
"date": "2023/02/22",
"position": 42,
"query": "universal basic income AI"
}
] |
|
Our Inclusive Digital Europe
|
Our Inclusive Digital Europe
|
https://www.socialistsanddemocrats.eu
|
[] |
We must be vigilant of Artificial Intelligence (AI). ... Codifying internet access and data sovereignty rooted in net neutrality as a universal fundamental right.
|
Embracing Artificial Intelligence
We must be vigilant of Artificial Intelligence (AI). The European Union must take action to ensure that our values are upheld and that AI is used in a responsible and safe manner. We need clear and strong rules to guarantee fairness and transparency in AI development and use.
No one should have to sacrifice their fundamental rights for the sake of technological advancement. That's why we must demand that high-risk AI undergoes a thorough assessment for biases and discrimination, and if it poses a risk to our rights, it must be banned. Human oversight and transparency requirements should always be in place to ensure that AI is used ethically and responsibly. The use of AI in the workplace must be subject to strong regulations that protect workers' rights and dignity and can in no way lead to the exploitation or surveillance of workers' performance. Any worker who has been harmed by the use of AI must have the right to seek redress and hold corporations accountable.
In short, we must put people before profits and ensure that technology is used in a way that benefits workers and society as a whole.
| 2023-02-22T00:00:00 |
https://www.socialistsanddemocrats.eu/content/our-inclusive-digital-europe
|
[
{
"date": "2023/02/22",
"position": 72,
"query": "universal basic income AI"
}
] |
|
Australia: The risks of ChatGPT for employers
|
Australia: The risks of ChatGPT for employers
|
https://leglobal.law
|
[
"Michael Harmer",
"Chairman",
"Senior Team Leader",
"Michael Is The Chairman",
"Senior Team Leader Of Harmers Workplace Lawyers",
"Has Years",
"Experience In Workplace Relations. During This Time He Has Provided Legal",
"Strategic Advice In A Number Of Major Enterprise Bargaining",
"Change Management Initiatives",
"Has Built Up Extensive Experience"
] |
One of the biggest dangers of ChatGPT is the potential for data privacy and confidentiality breaches.
|
Disclaimer: a small part of this article was written by ChatGPT, and edited by a human, as part of researching its capabilities for this article.
As organisations increasingly turn to advanced technologies to streamline their operations and improve their productivity, the use of chatbots like OpenAI’s ChatGPT has become increasingly popular. While this technology can certainly offer many benefits and time efficiencies, there are also a number of significant dangers and other implications that employers should be aware of when it comes to using ChatGPT in their workplace.
One of the biggest dangers of ChatGPT is the potential for data privacy and confidentiality breaches. Employees may be inputting sensitive and highly confidential information into ChatGPT, creating the risk that such data may be inadvertently or deliberately revealed to third parties or the general public, including through ChatGPT using such data when responding to other requests or through hacking. The disclosure or misuse of such data can result in significant consequences for businesses, including not just the revelation of personally and/or commercially sensitive and confidential data, but also exposure to claims for damages and penalties for breaches of the law. The exposure of commercially sensitive data to ChatGPT may also be a relevant consideration when it comes to the enforcement of restraints (as Courts may consider that the data is not truly confidential and should not be protected in those circumstances).
Another significant potential danger of ChatGPT is that the information it provides may not be accurate. This is acknowledged by ChatGPT itself. Because the model is trained on a large amount of data, it may not always be able to distinguish between credible and inaccurate information. The information in its system is also not up to date (as at the time of this article, ChatGPT states that it is only aware of world developments up to 2021). Accordingly, there may be significant consequences in relying on the data provided by ChatGPT, particularly if such data is inaccurate, as that can lead to legal liability for the business.
A related issue is whether businesses can terminate employees who use ChatGPT, particularly in the instance where something goes wrong, such as where an employee provides an advice or correspondence that is wrong because of reliance on the data from ChatGPT. This may depend on the policies and procedures of the business in relation to the use of ChatGPT, for example, whether the business has expressly prohibited employees from using ChatGPT in their work, or has only permitted use in certain limited circumstances. Usual unfair dismissal considerations will also apply. Businesses should consider their position on ChatGPT and implement policies and procedures accordingly.
Further, there is a question as to ownership and intellectual property and moral rights in relation to content prepared by ChatGPT based on the data inputted by employees (and employers).
Finally, ChatGPT can also create a risk of job loss for employees. With the increasing use of chatbots and the time efficiencies associated with ChatGPT, many tasks currently performed by human workers could be automated, potentially leading to widespread job losses. Employers should be mindful of their obligations to consult (including under Awards, enterprise agreements, contracts and policies) when considering the use of this technology by their business, and should consider their redundancy obligations if it comes to that.
While ChatGPT can offer many benefits to organisations, employers should be aware of the significant risks and other considerations that come with using this technology. At the very least, employers should consider their position on ChatGPT and formulate a policy on it, including whether it will be allowed to be used in the workplace and to what extent, and what control measures need to be implemented (for example, blocking access to the site or requiring that no confidential or sensitive information be inputted into the platform). Consideration of these issues will assist with mitigating the risks associated with ChatGPT; however, employers should remain vigilant as such technologies continue to develop and advance.
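One of the control measures mentioned above, requiring that no confidential or sensitive information be entered into the platform, could be partially enforced in software with a redaction filter applied before any prompt leaves the organisation. The sketch below is a minimal illustration; the patterns are assumptions, and a real deployment would need far broader rules and, ideally, a dedicated data-loss-prevention tool.

```python
import re

# Illustrative patterns only; a real policy would cover many more identifiers
# (client names, project codes, financial figures, health data, etc.).
REDACTION_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\+?\d[\d\s-]{7,}\d"),
    "TFN":   re.compile(r"\b\d{3}\s?\d{3}\s?\d{3}\b"),  # Australian tax file number format
}

def redact(text: str) -> str:
    """Replace likely-sensitive substrings before the text is sent to an external AI tool."""
    for label, pattern in REDACTION_PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

prompt = "Draft a letter to jane.doe@client.com about invoice 4411, phone +61 2 9999 1234."
print(redact(prompt))
```

Such a filter reduces, but does not eliminate, the risk: free-text descriptions of clients or strategy can still leak, which is why a written policy and employee training remain necessary.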
| 2023-02-22T00:00:00 |
2023/02/22
|
https://leglobal.law/2023/02/22/australia-the-risks-of-chatgpt-for-employers/
|
[
{
"date": "2023/02/22",
"position": 24,
"query": "ChatGPT employment impact"
}
] |
Should you use ChatGPT for your career?
|
Should you use ChatGPT for your career?
|
https://capd.mit.edu
|
[
"Erik Pavesic",
"Assistant Director",
"Career Advising"
] |
So, naturally, we wanted to factor in how ChatGPT may, or may not, be helpful with career related topics including resume writing, interview preparation, and ...
|
By: Erik Pavesic and Kendel Jester
Many of us in Career Advising & Professional Development (CAPD) have had an opportunity to play around with the latest disruptive technology: ChatGPT from OpenAI. ChatGPT, a chatbot prototype launched in November 2022, has demonstrated use in reviewing code, compiling ideas, answering general questions, and writing full documents including poems, plays, letters, essays, and more. So, naturally, we wanted to factor in how ChatGPT may, or may not, be helpful with career related topics including resume writing, interview preparation, and personal statement writing.
Here are four considerations in deciding whether to leverage ChatGPT (or similar technology) for your career:
1. It’s only a tool
ChatGPT is a tool, and as such requires some guidance and constraints in using it for your career. You may find it helpful to develop resume templates, draft cover letters, or develop outreach emails, but when the template no longer suits your needs, it's better to build upon it (or scrap it altogether). Make certain you take the time to present yourself as you would like to. Use it to examine possibilities, but stop short of fully relying on it. ChatGPT is a starting point; do not let it be your finishing point.
2. Evaluate your sources
ChatGPT relies on a vast amount of input data, so it may be challenging to determine if the information you are getting is good or valid. There are also concerns about whether information is true or if it is copyrighted. Cross reference the output of ChatGPT with insights you obtain from your network connections, recruiters/hiring managers, and other reliable resources. And trust your gut; if you feel that something is not right, it is healthy to be skeptical, ask additional questions, and seek deeper understanding of your topic.
3. Safeguard your privacy
This AI learns from having information added to it by both the developers and the users. If you share something with it, it can learn and retain this information for much longer than you may expect. You should read more about OpenAI’s privacy policies and consider whether you want to share potentially sensitive information with it. Be careful sharing—as the saying goes, diamonds are forever and the internet comes close.
4. Be an above-average human
Since ChatGPT indexes web pages to “learn” what to write, it is probably starting off with giving you a very basic and average response. We know that many of you seek to be above average (after all, you made it into MIT!), so take the time to share out what makes you shine! Allow ChatGPT to help you get started, but take time to create something worthy of your humanity. When I asked ChatGPT to write a resume for me, the result was less than impressive. The content lacked rich details, because it doesn’t know everything about me. I could coach it through things I contributed to or experiences I had, but that would take a lot more work. Likewise, as AI-generated documents become more common, organizations may use technology to counter it as well.
We understand that there are many tools available to assist your career, with new ones appearing all the time. Keep in mind that CAPD is one of them. Feel free to schedule an appointment with a Career Advisor to help you navigate these tools and your career.
| 2023-02-22T00:00:00 |
2023/02/22
|
https://capd.mit.edu/blog/2023/02/22/should-you-use-chatgpt-for-your-career/
|
[
{
"date": "2023/02/22",
"position": 35,
"query": "ChatGPT employment impact"
}
] |
The Invisible Workforce
|
The Invisible Workforce
|
https://www.thedriftmag.com
|
[
"David Adler",
"Annie Rauwerda"
] |
Our union's priorities include disability-justice efforts, working against forces of marginalization at the company, and advocacy that aims to further the cause ...
|
When the Alphabet Workers Union announced its union drive in early 2021, I joined pretty much immediately. I went to the union website and filled out a sign-up form, and I got connected with committees and a steward. At the time, I was working for Google as a vendor, which entailed writing, editing, and publishing support content. For my entire career, I’ve only worked jobs where I was a temp, a contractor, or a vendor; Google has an enormous reliance on these workers. We’re often called “the invisible workforce,” and we’re consistently compensated a fraction of what people doing similar jobs full time at Google make.
In the fall of 2020, when my coworkers and I first realized we were making vastly different salaries, we decided to approach our boss all together and won improvements for our specific team. With union support, a much larger pay-parity campaign was able to achieve a dramatic increase in pay and benefits for many more workers in 2021.
Regardless of where you work within the company — whether you’re hauling equipment in a data center or coding in an office with all the perks — the things that benefit some of us benefit all of us. This is especially important in a workforce with such big wage disparities: some people have been making as little as ten dollars per hour at Google while others make well into six figures.
Like many across the industry, I was laid off in December, but I am going to keep doing organizing work while looking for a new job. Our union’s priorities include disability-justice efforts, working against forces of marginalization at the company, and advocacy that aims to further the cause of all workers on a legislative level. Labor organizing can also play a role in developing the ethics around A.I. and other burgeoning technology; our union can hold Google accountable for its collaboration with the U.S. military as well. Alphabet workers in the past successfully pressured the company into abandoning efforts like the Pentagon’s Project Maven, which developed A.I. technology that could have been used in drone strikes. A workforce that is driven entirely by a profit motive and wields potentially dangerous tools is one that may make mistakes we as a society cannot undo. Tech unions are a safeguard against those mistakes.
| 2023-02-22T00:00:00 |
2023/02/22
|
https://www.thedriftmag.com/the-invisible-workforce/
|
[
{
"date": "2023/02/22",
"position": 6,
"query": "AI labor union"
}
] |
Google Denies Using AI to Pick Which Employees to Fire
|
Google Denies Using AI to Pick Which Employees to Fire
|
https://futurism.com
|
[] |
The tech giant may have denied using an AI, but it appears that the HR departments of many companies are willing to use AI in future lay off decisions.
|
HR departments, though, have expressed their willingness to use software and algorithms to lay off employees.
Mindless Algorithm
After Google's parent company Alphabet laid off some 12,000 employees in January, or about six percent of Google's entire workforce, many of the aggrieved former workers began to speculate on why they were chosen, The Washington Post reports. The distribution of the layoffs, after all, seemed random.
In the words of one employee in a Discord chatroom, they wondered if a "mindless algorithm carefully designed not to violate any laws" was responsible for singling out who got cut.
Google has denied using an AI, saying there was "no algorithm involved" in its decision making, it told WaPo.
True or not, the employees have ample reason to be suspicious. According to a recent survey cited by the newspaper, 98 percent of human resources leaders at American companies admitted that they will use software and algorithms to "reduce labor costs" this year — despite only half of them being confident that the tech will make unbiased recommendations.
Hiring to Firing
It's the darker flipside of a long-accepted practice. HR departments of big firms often use algorithms to find the "right person" for "the right project," Harvard Business School professor of management practice Joseph Fuller told WaPo.
The tech helps to form a database known as a "skills inventory," which comprehensively lists the skills and experiences of every employee and helps companies decide whether these, in total, will be enough for them to meet their goals.
"They suddenly are just being used differently, because that's the place where people have... a real... inventory of skills," Fuller said.
Take, for example, a startup called Gloat: an "AI Talent Marketplace" that uses AI to connect employees to more relevant projects, and vice versa. Gloat vice president Jeff Schwartz told WaPo that he isn't aware of any clients using it to lay off employees, but acknowledged the need for transparency from HR leaders.
Fight or Flight
Employee performance might be the most important factor analyzed by these technologies, but many other metrics are more nebulous, such as "flight risk," which predicts how likely it is for someone to quit the company.
If, for example, a company has a discrimination problem causing non-white workers to leave at higher rates on average, AI software could inadvertently identify non-white workers as a "flight risk" and recommend firing them at a higher rate, Brian Westfall, an analyst at the software review site Capterra, told WaPo.
"You can kind of see where the snowball gets rolling, and all of a sudden, these data points where you don't know how that data was created or how that data was influenced suddenly lead to poor decisions," he added.
| 2023-02-22T00:00:00 |
https://futurism.com/the-byte/google-denies-using-ai-fire-employees
|
[
{
"date": "2023/02/22",
"position": 52,
"query": "AI workers"
},
{
"date": "2023/02/22",
"position": 13,
"query": "artificial intelligence layoffs"
}
] |
|
Automate Secure Employee Offboarding
|
Secure Employee Offboarding
|
https://swimlane.com
|
[] |
departing employee access and controls ... AI Security Automation. The world's first and only hyperautomation platform ...
|
What we like about Swimlane is the fact that it also allows our internal IT department to automate certain processes like JMLs (joiners, movers, leavers). Additionally, we have other services within the business that are interested in how they can utilize the platform.
Matt Helling
Head of Cyber Services
| 2023-02-22T00:00:00 |
https://swimlane.com/solutions/use-cases/secure-employee-offboarding/
|
[
{
"date": "2023/02/22",
"position": 64,
"query": "AI workers"
}
] |
|
McKinsey joins corporate affiliate program at Stanford HAI ...
|
McKinsey joins corporate affiliate program at Stanford HAI to bring AI and frontier tech to clients
|
https://www.mckinsey.com
|
[
"Ben Ellencweig",
"Mckinsey Senior Partner"
] |
McKinsey is collaborating with the Stanford Institute for Human-Centered Artificial Intelligence (HAI) to help leaders realize the full potential of AI ...
|
AI promises to reinvent industries, and many business leaders are eager to understand how it can create value within their own living, breathing organizations. To help them make this leap, we are embarking on a new collaboration with the Stanford Institute for Human-Centered Artificial Intelligence (HAI), an organization dedicated to guiding and building the future of artificial intelligence.
“Many of our clients have a deep appetite for learning and foresight into how to best cope with the AI disruption in their organizations,” says Ben Ellencweig, a McKinsey senior partner. “Joining Stanford HAI’s corporate affiliate program gives us an opportunity to open up Silicon Valley with its leading-edge thinking, concentration of talent, and ecosystem of partners to all of our clients.”
Together, we have created customized one-day learning programs for leaders that start with a live technology demo or a lab walkthrough with Stanford HAI academics and a review of the most relevant and recent research. This is followed by an afternoon workshop with McKinsey experts in which participants identify tangible next steps to apply the technology and create new opportunities in their business.
“The curriculum covers AI and machine learning; AI and sustainability; the metaverse; and the digital economy and future of work”, explains Ben. “It’s one of the ways that we are helping businesses use technology and upskill their workforces to seed their own sustainable and inclusive growth.”
The collaboration with Stanford HAI has been a joint effort of QuantumBlack, AI by McKinsey and our Technology Council. The aim, according to McKinsey senior partner Lareina Yee, is to help participants not only understand the technology but also to experience it. “It’s a way we can help them lift their aspirations and imagine the future,” she says, “but we also ground that in practical considerations, like the investments required, how you participate in these new markets, the skills that will be needed, as well as understanding the inherent risks of the technology transition.”
The program can be tailored to match the participants’ tech knowledge, from those in the early phases of understanding the most significant technology trends identified by the McKinsey Technology Council, all the way to early AI-adopters looking to implement generative AI into their tools and products.
Stanford HAI brings together leading experts from a full array of disciplines—engineering, neuroscience, economics, psychology, medicine, entrepreneurship—to develop research and share findings, shape policy, and build partnerships, all with an eye to ensuring that AI technologies enhance the human experience.
“Companies around the world are racing to leverage artificial intelligence,” says Stanford HAI co-director and Sequoia Professor of Computer Science Fei-Fei Li. “We are grateful to corporate affiliates like McKinsey for collaborating in ways that help advance AI research and practice to improve the lives of people.”
Joining Stanford HAI’s corporate affiliate program gives us an opportunity to open up Silicon Valley with its leading-edge thinking, concentration of talent, and ecosystem of partners to all of our clients. Ben Ellencweig, McKinsey senior partner
For Stanford HAI faculty, the program offers a close-up look at some of the toughest challenges industries are grappling with today. For example, life sciences companies are turning to AI solutions to improve the efficacy of developing personalized drug therapies. Manufacturers are building AI-based digital twins of facilities before construction to prevent costly mistakes before they happen. Cutting-edge retailers and consumer goods companies are experimenting with the metaverse to create an entirely new category of customer experiences.
“HAI has cultivated a community of the world’s foremost AI researchers who are eager to scale their work through real applications with businesses,” said Panos Madamopoulos, HAI Managing Director for Industry Programs and Partnerships. “In a time of such rapid technological change, this conversation between academia and industry is essential not only for translating research into practical solutions, but also for ensuring the effective, safe, and ethical application of AI.”
| 2023-02-22T00:00:00 |
https://www.mckinsey.com/about-us/new-at-mckinsey-blog/mckinsey-and-stanford-hai-bring-the-power-of-artificial-intelligence-and-frontier-technologies-to-clients
|
[
{
"date": "2023/02/22",
"position": 7,
"query": "AI business leaders"
}
] |
|
Jenni AI Influencer Program
|
Jenni AI Influencer Program
|
https://jenni.ai
|
[] |
Jenni is the AI assistant that keeps you in the driver's seat. Jenni ... Which currency do you pay in? We will pay you in USD $. How will you pay me ...
|
Do what you do best, pull out your phone and start creating!
Need some inspo? Take a look at some content examples.
| 2023-02-22T00:00:00 |
https://jenni.ai/influencer-program
|
[
{
"date": "2023/02/22",
"position": 43,
"query": "AI wages"
}
] |
|
Collective action and AI: the next stage in accelerating ...
|
Collective action and AI: the next stage in accelerating digital transformation
|
https://www.uni-bremen.de
|
[] |
Promote upskilling and reskilling: As AI threatens to automate certain jobs, trade unions should advocate for programs to upskill and reskill workers, to help ...
|
Recently, ChatGPT seems to be on everyone's mind. It is a chatbot based on artificial intelligence that is able to recognize complex questions and answer them at a level such that educational institutions are now in turmoil about how to deal with it. In the world of work, another fear is present: the potential of AI replacing workers (Lane/Saint-Martin 2021). Against this background, AI technology could pose certain challenges for employee representatives and require new or adapted strategies. In our research context, the more specific question is what impact this might have on collective action and solidarity.
For the time being, the technology only refers to and reproduces existing knowledge; its greatest achievement is to capture the question so well that the answer is appropriate, detailed and, above all, delivered at unprecedented speed. The AI is not yet so advanced that it could generate its own knowledge, but it can rearrange knowledge in such a way that ChatGPT can even be credited with low-threshold creativity, for example when the chatbot takes on a role given to it and then continues the conversation in accordance with that role.
In the world of work, according to Eurofound (2018), digitalization comprises three vectors of change in working processes: 1) the automation of work by replacing labour and tasks with machines; 2) the digitisation of processes by using sensors and rendering devices; 3) the coordination of economic transactions by platforms. AI technology could foreseeably accelerate developments within all three vectors enormously. While automation as a specter was still manageable for employee representatives in the past, insofar as failures in automated production chains entailed too great a risk and automation therefore tended to amount to semi-automation, AI technology could significantly reduce this risk with its ability to grasp complex interrelationships. Thus, automation can once again be understood as a threat to the world of work in the form of rationalization.
In industrial relations, especially in the platform economy, algorithms already show that digital technology does not have to be a neutral instrument, but that it can be used by a company's management to organize work in its interests, while at the same time reducing the transparency of organizational processes (Schreyer/Schrape 2018). So far, the issue of copyright around ChatGPT shows that there is a lack of transparency: even though the chatbot largely only reproduces knowledge created by others, we as users are not given any information about sources, for example.
In the transnational context of collective action, it seems obvious that AI, as a driver of machine translation capable of replacing human translators (Bouguesmia 2020), could build bridges across language barriers. This can support employee representatives working across borders in their exchange and communication, and it fosters the trust that we consider fundamental for acts of solidarity.
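As an illustration of the translation bridge mentioned above, the sketch below uses the Hugging Face transformers pipeline with a publicly available English-to-German model; the model name, the example sentence, and the surrounding workflow are assumptions for this example, and translation quality for union- and labour-law-specific terminology would still need to be checked by bilingual colleagues.

```python
# Requires: pip install transformers sentencepiece torch
from transformers import pipeline

# Publicly available English->German model, used here as an assumed example.
translator = pipeline("translation", model="Helsinki-NLP/opus-mt-en-de")

message = ("We propose a joint meeting of the European Works Council "
           "to discuss the planned automation of the assembly line.")
result = translator(message)
print(result[0]["translation_text"])
```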
| 2023-02-22T00:00:00 |
https://www.uni-bremen.de/en/blog-news/collective-action-and-ai-the-next-stage-in-accelerating-digital-transformation
|
[
{
"date": "2023/02/22",
"position": 2,
"query": "artificial intelligence labor union"
}
] |
|
Artificial Intelligence Jobs for July 2025
|
Artificial Intelligence Jobs for July 2025
|
https://www.freelancer.com
|
[
"It'S Free To Sign Up",
"Type In What You Need",
"Receive Free Quotes In Seconds"
] |
World's largest website for Artificial Intelligence Jobs. Find $$$ Artificial Intelligence Jobs or hire an Artificial Intelligence Expert to bid on ...
|
Artificial Intelligence (AI) is a field of computer science that focuses on how machines can think and act like humans. AI utilizes computer algorithms to establish patterns in data and process information, and can even make decisions. When used correctly, AI can predict trends, automate mundane tasks, and improve overall productivity. An expert with diverse skills in both software engineering and mathematics is just what you need when implementing an AI project.
At Freelancer.com, Artificial Intelligence experts are poised to provide esteemed clients with the best possible outcomes. Our professionals are dedicated to staying abreast with emerging technology, keeping up with the latest industry trends, and doing whatever it takes to provide premium outcomes for industrial projects.
Here's some projects that our Artificial Intelligence Experts made real:
Utilizing modern web technologies for speech detection
Leveraging pre-trained models for specialized use
Automatically generating keywords from given input
Integrating third-party AI API into existing frameworks
Developing application for curve detection through ANN or ML
Implementing code for sudoku puzzles featuring AI elements
Creating Django web applications for OpenAI
Designing a machine learning recommendation system
Processing natural language using Python
Investigating creative IoT or AI concepts with analytical reports
At Freelancer.com, our Artificial Intelligence experts understand how to implement the most advanced algorithms and features into your project while providing you with the highest quality of service. We invite you to join our growing list of satisfied customers who have experienced the exceptional power that AI brings to their projects. Make your project reach beyond your expectations by letting us help bring it to life - post your project now and hire an Artificial Intelligence Expert on Freelancer.com!
From 118,582 reviews, clients rate our Artificial Intelligence Experts 4.84 out of 5 stars.
| 2023-02-22T00:00:00 |
https://www.freelancer.com/jobs/artificial-intelligence
|
[
{
"date": "2023/02/22",
"position": 37,
"query": "artificial intelligence hiring"
}
] |
|
Media Tip Sheet: Should HR managers defer to AI when ...
|
Media Tip Sheet: Should HR managers defer to AI when making employment decisions?
|
https://mediarelations.gwu.edu
|
[] |
Human resource managers are using artificial intelligence tools and other machine learning software to assist in the hiring and interview process.
|
Human resource managers are using artificial intelligence tools and other machine learning software to assist in the hiring and interview process. According to The Washington Post, citing a January survey, HR managers also plan to use those same tools when making decisions about layoffs. That survey of 300 human resources leaders at U.S. companies found that 98 percent of them say software and algorithms will help them make layoff decisions this year.
If you would like more context on this matter, please consider Vikram R. Bhargava, assistant professor of strategic management and public policy at the George Washington University School of Business. His research centers on topics including artificial intelligence, the future of work, technology addiction, mass social media outrage, autonomous vehicles, and other topics related to digital technology policy.
Bhargava’s latest work focuses on this very topic. His research, “Hiring, Algorithms, and Choice: Why Interviews Still Matter,” was published last week in the journal Business Ethics Quarterly.
“Much of the discomfort of HR managers deferring to algorithms are due to worries about bad outcomes: Did the algorithm make the right call? Was there bad data? Were there any untoward racial or gender biases reflected in the data?” Bhargava says. “But even if these outcomes are ultimately improved through an engineering solution, it still doesn’t settle the question of whether HR managers should defer to algorithms. This is not because our gut instincts are far superior—often they’re not."
“Rather, this is because there are important (and overlooked) ethical values created through us making choices—including choices about whom to work with or not work with—that would be jeopardized, were HR managers to abdicate that choice to an algorithm. This is so, no matter how sophisticated algorithms ultimately become at predicting the fit and performance of an employee.”
If you would like to speak with Prof. Bhargava, please contact GW Media Relations Specialist Cate Douglass at cdouglass[at]gwu[dot]edu.
-GW-
| 2023-02-22T00:00:00 |
https://mediarelations.gwu.edu/media-tip-sheet-should-hr-managers-defer-ai-when-making-employment-decisions
|
[
{
"date": "2023/02/22",
"position": 53,
"query": "artificial intelligence hiring"
},
{
"date": "2023/02/22",
"position": 12,
"query": "artificial intelligence layoffs"
}
] |
|
AI (Artificial Intelligence) HW/SW Jobs
|
AI (Artificial Intelligence) HW/SW Jobs for July 2025
|
https://www.freelancer.com
|
[
"It'S Free To Sign Up",
"Type In What You Need",
"Receive Free Quotes In Seconds"
] |
World's largest website for AI (Artificial Intelligence) HW/SW Jobs. Find $$$ AI (Artificial Intelligence) HW/SW Jobs or hire an IoT Engineer ...
|
AI (Artificial Intelligence) HW/SW is the combination of software and hardware applications utilized to enable a machine, application or device to imitate or exhibit human qualities, such as perceiving and reacting to stimuli, in a much better way than traditional electronic technologies. An IoT (Internet of Things) engineer is responsible for designing and implementing the integration of devices, machines, and sensors that combine computing, communication and control functions into the Cloud. In other words, an IoT engineer is responsible for integrating the software, networks and hardware that are necessary to build smarter functional systems.
Here are some projects that our expert IoT Engineers made real:
Ensuring accurate data communication between various chatbots and databases
Designing websites that integrated sophisticated AI algorithms
Developing programs utilizing AI advances to improve performance
Optimizing user experiences by adding facial recognition capabilities
Enhancing the technology ecosystem with advanced Machine Learning applications
Building data models for forecasting outcomes in the platforms
Utilizing Autoencoders to reduce noise in time-series data
Producing content generated by Artificial Intelligence
Deploying predictive analytics programs for specific tasks
Investigating ways to apply AI towards educational materials.
The many uses of AI HW/SW have allowed Freelancer.com engineers to create a variety of projects that took advantage of its unique capabilities. Allowing clients to reap the benefits of integrating AI into their applications was only made possible by hiring an experienced freelancer on Freelancer.com. If you're looking for a specialist to create a customized Artificial Intelligence project, then look no further than Freelancer.com - where experienced IoT engineers can help you realize your vision. Post your project today and find the perfect engineer to make it real!
From 39,882 reviews, clients rate our IoT Engineers 4.9 out of 5 stars.
| 2023-02-22T00:00:00 |
https://www.freelancer.com/jobs/ai-hw-sw
|
[
{
"date": "2023/02/22",
"position": 76,
"query": "artificial intelligence hiring"
}
] |
|
Alphabet Layoffs Hit Trash-Sorting Robots
|
Alphabet Layoffs Hit Trash-Sorting Robots
|
https://www.wired.com
|
[
"Paresh Dave",
"Kylie Robison",
"Will Knight",
"Steven Levy",
"Kate Knibbs",
"Boone Ashworth",
"Zoë Schiffer",
"Reece Rogers"
] |
The company recently laid off thousands of human employees—it is also shutting down a unit working on robots that learned to open doors and clean tables.
|
Teach a robot to open a door, and it ought to unlock a lifetime of opportunities. Not so for one of Alphabet’s youngest subsidiaries, Everyday Robots. Just over a year after graduating from Alphabet’s X moonshot lab, the team that trained over a hundred wheeled, one-armed robots to squeegee cafeteria tables, separate trash and recycling, and yes, open doors, is shutting down as part of budget cuts spreading across the Google parent, a spokeswoman confirmed.
“Everyday Robots will no longer be a separate project within Alphabet,” says Denise Gamboa, director of marketing and communications for Everyday Robots. “Some of the technology and part of the team will be consolidated into existing robotics efforts within Google Research.”
The robotics venture is the latest failed bet for X, which in the past decade also spun out internet-beaming balloons (Loon) and power-generating kites (Makani) before deeming them too commercially inviable to keep afloat. Other onetime X projects, such as Waymo (developing autonomous vehicles) and Wing (testing grocery delivery drones) motor on as companies within Alphabet, though their financial prospects remain mired in regulatory and technological challenges. Like Everyday Robots, those ventures harnessed novel technologies that showed impressive promise in trials but not rock-solid reliability.
Everyday Robots emerged from the rubble of at least eight robotics acquisitions by Google a decade ago. Google cofounders Larry Page and Sergey Brin expected machine learning would reshape robotics, and Page in particular wanted to develop a consumer-oriented robot, a former employee involved at the time says, speaking anonymously to discuss internal deliberations. By 2016, they put software entrepreneur Hans Peter Brøndmo in charge of a project then known as Help (and later, for a time, Moxie) to leverage machine learning to develop robots that could handle routine tasks and adapt to varying environments, the source says.
The team set up arm farms and playpens, where a fleet of robots for months would repeat the same task—like sorting rubbish. It was a brute-force attempt to generate data to train a machine learning model that could then imbue the robots with the know-how needed to use their cameras, arms, wheels, and fingerlike grips to interact with the world around them. The novelty was sparing engineers from the traditional approach in robotics of having to code specific instructions for the machines to follow for every little potential scenario. The idea largely worked for initial tasks. Google had Everyday Robots’ fleet help clean the search giant’s dining halls and check for untidy conference rooms mid-pandemic.
Courtesy of Google
Last year, Everyday Robots demonstrated further progress with Google AI researchers. The project integrated a large language model similar to that underlying ChatGPT into the robotics system, enabling the mechanical helper, for example, to respond to someone saying that they are hungry by fetching a bag of chips for them. But Google and Everyday Robots stressed at the time that a roving butler at one’s beck and call remained far from consumer availability. Variations that seem trivial to humans, like the type of lighting in a room or the shape of the chips bag, could cause malfunctions.
From its earliest days, Everyday Robots struggled with whether its mission was to pursue advanced research or deliver a product to market, the former employee says. It staffed up to over 200 employees, including people overseeing customer operations, teaching robots to dance, and tinkering away at the perfect design. Each of its robots likely cost tens of thousands of dollars, robotics experts estimate.
Those expenses were too much for Alphabet, whose more speculative “other bets” such as Everyday Robots and Waymo lost about $6.1 billion last year. Alphabet’s overall profit fell 21 percent last year to $60 billion as spending on Google ads slowed, and activist investors have been clamoring for the company to make cuts. On January 20, Alphabet announced it would lay off about 12,000 workers, 6 percent of its workforce. Everyday Robots was one of the few projects disbanded.
| 2023-02-22T00:00:00 |
2023/02/22
|
https://www.wired.com/story/alphabet-layoffs-hit-trash-sorting-robots/
|
[
{
"date": "2023/02/22",
"position": 14,
"query": "artificial intelligence layoffs"
}
] |
Algorithms may start deciding who gets fired - Marketplace
|
Algorithms may start deciding who gets fired
|
https://www.marketplace.org
|
[] |
Artificial intelligence is revolutionizing how we work and how we lose work. No, not just those chatbots that are coming for human jobs, but software that ...
|
Artificial intelligence is revolutionizing how we work and how we lose work. No, not just those chatbots that are coming for human jobs, but software that can determine which employees get pink slips when companies decide to downsize.
Whether any employers used algorithms to conduct layoffs in recent months has been a topic of speculation, though none have disclosed it.
But Capterra, a business-oriented tech review platform, recently surveyed 300 leaders in human resources and found that 98% said they would rely on software and algorithms to reduce costs during a recession.
Marketplace’s Meghan McCarty Carino spoke to Brian Westfall, the author of that Capterra report. He said HR is much more data driven today than it was during the Great Recession 15 years ago.
Below is an edited transcript of their conversation.
Brian Westfall: We asked HR leaders how much these layoff decisions are going to be driven by data versus by gut instinct, and 46% said it would be an equal divide. But I think at the end of the day, being data driven is a good thing. It can help HR departments make decisions based on evidence instead of unconscious biases. The big, red flag is that we want HR departments to proceed with caution. If they’re using bad data, or not understanding how that data is being used by an algorithm to make these decisions, that’s where bad things can come into play.
Meghan McCarty Carino: Tell me more about how these algorithms work. What kinds of data might be considered?
Westfall: It depends. They could look at skills data and performance data. Or they could look at salary data, flight risk data or work status. The algorithm will do the analysis and say, based on that data, the recommendation is that you lay off these employees. I would highly encourage HR departments to be really critical about determining what data they are going to use and what data they are not going to use.
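To make the kind of analysis Westfall describes concrete, here is a minimal, purely hypothetical sketch of how a tool might combine the data types he lists (skills, performance, salary, flight risk) into a ranked recommendation. The employees, field names, and weights are invented for illustration only; real products weigh data in ways vendors rarely disclose, which is exactly why he urges caution.

# Hypothetical sketch of a data-driven layoff ranking.
# All names, fields, and weights are invented for illustration only.

employees = [
    {"name": "A", "performance": 4.2, "skills_coverage": 0.8, "salary": 95_000, "flight_risk": 0.3},
    {"name": "B", "performance": 3.1, "skills_coverage": 0.4, "salary": 120_000, "flight_risk": 0.7},
    {"name": "C", "performance": 4.8, "skills_coverage": 0.9, "salary": 105_000, "flight_risk": 0.2},
]

# Assumed weights: a higher score means a stronger case to retain the employee.
WEIGHTS = {"performance": 0.5, "skills_coverage": 0.3, "salary": -0.1, "flight_risk": -0.1}

def retention_score(e):
    # Normalize salary to a 0-1 range so one field can't dominate by scale alone.
    salary_norm = e["salary"] / 150_000
    return (WEIGHTS["performance"] * (e["performance"] / 5)
            + WEIGHTS["skills_coverage"] * e["skills_coverage"]
            + WEIGHTS["salary"] * salary_norm
            + WEIGHTS["flight_risk"] * e["flight_risk"])

# Rank from weakest to strongest retention case; the weakest would be "recommended" for layoff.
for e in sorted(employees, key=retention_score):
    print(e["name"], round(retention_score(e), 3))

The output is only as defensible as the chosen weights and the data behind them, which is the point of the data audit Westfall recommends later in the interview.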
McCarty Carino: What are some of the concerns with using this technology and the data that goes into it?
Westfall: Technology and data can be biased. There’s a famous example from a few years ago where Amazon got a ton of job applications coming in. They wanted to know, “How do we quickly identify the top candidates that we should consider for a position?” So, they created an algorithm and decided to look at their current top performers and compare them to the job applicants that came in, thinking it would make it easier to surface the top candidates. But what they found was, because their department was predominantly male, the algorithm was penalizing female applicants. So this is an example of how if you’re not careful and you’re not understanding how these algorithms work, they can do real harm.
McCarty Carino: When companies do use technology like this to let go of people, what message does it send to employees?
Westfall: I think it’s going to be interesting to see how transparent companies will be about these decisions if they do use technology and algorithms. On the one hand, it says this wasn’t necessarily a biased human decision. It shows they weighed the data and came to a very logical conclusion about the decisions made. On the other hand, it does open up companies to people that are laid off saying, “An algorithm made this decision. How does this algorithm work? What data did it use?” And then companies become vulnerable if they really don’t understand and they just plucked a product off the shelf, plugged in some data and moved forward that way. So it sends a mixed message. I think the human element plays such a crucial role to keep your workforce intact and keeping engagement sound. Layoffs can be a real hit to that.
McCarty Carino: There’s never a good way to lay off employees, but there are better ways and worse ways, and we’ve seen those play out among the different companies that have had mass layoffs. How a company frames these decisions seems to be an important element. I can see the use of this technology going both ways in that argument.
Westfall: Right. Going back to the last recession, there was a lot of reliance on these old, tired stereotypes or fables like “Last in, first out” as in, “We should just lay off the last people we hired because they’re not providing as much value to the organization as tenured employees.” But there’s no data behind that. You could have just made a great hire three weeks ago that is going to provide more value to your organization than someone who’s been there for 10 years or more. So, I think the shift to data and analysis of that data to arrive at these decisions is a good thing. Because we want to get away from those old, tired tropes that really didn’t arrive at the best decisions. It just requires a bit of knowledge, a bit of skepticism and a bit of talking to technology vendors and asking how these algorithms work. HR departments can better understand how they’re using this stuff when they know what data it is using and how it weighs that data.
McCarty Carino: How would you advise companies to use these kinds of tools ethically?
Westfall: No. 1, you have to perform a data audit. These algorithms are a black box, and the user can’t always see how it works. But there are such critical decisions on the line and these algorithms could be making biased or improperly weighted recommendations without people knowing it. It’s important to ask questions and get as much information as possible about how the sausage is made with the algorithms.
Westfall mentioned the gender bias in the hiring AI at Amazon.
Reuters broke that story back in 2018. Sources said Amazon was testing an experimental recruiting tool that gave applicants a score between 1 and 5 stars, but, as Westfall said, the algorithm learned from previous applications that male candidates were more successful. So the system gave fewer stars to resumes that included phrases like “women’s chess club champion” or attendance at a women’s college.
Hiring AI can also amplify all kinds of problematic patterns from the past that many companies say they’re trying to reverse, like a preference for college graduates for jobs that shouldn’t require a four-year degree.
Here on “Marketplace Tech,” we recently reported on an effort to debias hiring AI so that it can better predict which applicant skill sets match the needs of a position without falling back on a familiar alma mater.
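As a rough illustration of the failure mode described above, the toy example below trains a simple classifier on invented historical hiring data in which a gendered resume phrase happens to correlate with past rejections; the model then learns a negative weight for that phrase. The data and the library choice (scikit-learn) are assumptions made for demonstration, not a reconstruction of Amazon's system.

# Toy demonstration of how biased historical data produces a biased model.
# The resumes and outcomes are invented; this is not any real hiring system.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

resumes = [
    "captain chess club software engineer",
    "women's chess club champion software engineer",
    "software engineer hackathon winner",
    "women's coding society lead software engineer",
]
hired = [1, 0, 1, 0]  # Historical outcomes that happen to skew against one group.

vec = CountVectorizer()
X = vec.fit_transform(resumes)
model = LogisticRegression().fit(X, hired)

# Inspect the weight the model learned for the gendered token.
idx = vec.vocabulary_["women"]
print("weight for 'women':", model.coef_[0][idx])  # Negative: the phrase is penalized.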
| 2023-02-22T00:00:00 |
https://www.marketplace.org/episode/algorithms-may-start-deciding-who-gets-fired
|
[
{
"date": "2023/02/22",
"position": 58,
"query": "artificial intelligence layoffs"
}
] |
|
AI Graphic Design Vs. Working With Designers - Kimp
|
AI Graphic Design Vs. Working With Designers: The Future of Graphic Design
|
https://www.kimp.io
|
[] |
AI graphic design tools are trained with several design basics including graphic design principles like symmetry, contrast, and hierarchy. And machines are ...
|
AI Graphic Design Vs. Working With Designers: The Future of Graphic Design
Entrepreneurs and marketers around the world are constantly looking for designs. Designs that convey their brands’ messages. Designs that help connect with their customers. The real question is how do you get these designs? Working with designers is the most evident option. DIY design became a popular trend a couple of years ago. And now it’s the era of AI graphic design. But is AI design the way forward? What does the future of graphic design for brands look like? We’ll answer these questions in this blog.
In 2022, the global market for AI (artificial intelligence) was valued at $136.6 billion. Though AI, as a technology, is old, many of its applications are new. Over the past few years, AI has grown at a remarkable pace. After all, the technology is estimated to give a 40% boost to business productivity. So, AI seems to be everywhere. And AI graphic design has been a hot topic of discussion for the past several months.
Any active internet user must come across news about DALL.E and AI design in general at least once a day. That’s how common the topic is these days. Has this got you thinking about where graphic design is headed? Have you started wondering if AI graphic design tools will replace human graphic designers? If you have these doubts, you are in the right place. We’ll discuss the battle between creatives and bots in this blog.
An introduction to AI graphic design
Artificial intelligence, in simple words, is about imparting intelligence to machines. Combining the strengths of image processing, pattern recognition, and a host of other technologies, AI design tools help generate new designs.
These AI graphic design systems arrive at new designs based on their training datasets containing massive reserves of designs and data about various visual styles and other information. As you can see, there is a sophisticated system in place at the backend.
Design creation is not the only function of these AI design tools. Many of them also focus on other tasks like image manipulation, image editing, image enhancement, video enhancement, background removal, and so on.
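Background removal, one of the supporting tasks mentioned above, is a good example of an AI feature that can be reproduced locally. Below is a minimal sketch using the open-source rembg package together with Pillow; the package choice and the file names are assumptions for illustration, not tools endorsed in the original article.

# Minimal background-removal sketch using the open-source rembg package.
# "product_photo.png" is a placeholder file name; install with: pip install rembg pillow
from rembg import remove
from PIL import Image

original = Image.open("product_photo.png")
cutout = remove(original)          # Returns a new image with the background made transparent.
cutout.save("product_photo_cutout.png")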
AI-generated image on DALL.E 2 Source
AI graphic design – how far has machine made its mark in the creative field?
Well, we hear people talk about AI design all the time. But has the technology really taken a solid shape? We’ll answer this with an example. Can you tell which of the below images was painted by an artist and which one was generated by an AI tool?
The above images were presented to artists in a survey conducted by The Guardian to understand how easily AI art can be distinguished from art created by an actual human artist. The image on the left is Down in the Laurentides by Homer Watson. And the image on the right was churned out by DALL.E for the prompt, “Landscape oil painting Constable Claude Corot”. Did you spot the AI art correctly?
Well, not everyone can. And that’s how remarkable some AI art generators are these days. So, yes, answering the question we laid down earlier, AI design generation technology has taken a pretty solid shape already.
But one thing to remember is that when we are talking about creating marketing designs and branding designs, it’s not just about aesthetics. It is about designing with a purpose. It’s about creating designs that integrate emotions. Can AI design tools achieve that? We’ll answer that in a moment. But first, let’s talk about the perks of AI graphic design that everyone has been raving about.
Unlocking the strengths of AI graphic design
Save time while managing your creative workflow
Long turnaround times are some of the biggest challenges that brands face while outsourcing design. Sometimes, you plan your creatives ahead and have a solid content calendar. Other times there are last-minute requests that pop up.
For example, you think an image would break up a section in a blog post that you are about to publish, but you do not have enough time to work with a designer! In such cases, you might find AI graphic design tools resourceful. Most design tools generate designs in seconds.
Tackle last-minute touch-ups with ease
If you have created some designs working with a freelance designer and just before uploading the media you notice that you need to make some changes, going back to the freelance designer for changes might not always be an option. Some design services do not support multiple revisions and some might have a long wait time for your corrections. AI graphic design tools can be used in such instances.
The above video shows quick adjustments you can make with Adobe Sensei, an AI tool that can be integrated with Adobe Experience Cloud. Similarly, if you have last-minute touch-ups you wish to make in your videos and images, AI design tools come in handy.
Another option is to invest in a video subscription like Kimp Video. So you get unlimited revisions and the option to flexibly alter your workflow along with the support for multiple active design requests.
Cost-effective
Hiring a designer and managing an in-house design team can be expensive. Traditional design agencies and freelance designers might have limited scope. So, you end up paying for each design you need. In comparison, AI graphic design tools feel like a cost-effective option. The pricing structure varies depending on the design tool you choose. But most of them help you save design costs in the long run.
Of course, if you need to work with a human designer and still save money, there are other cost-effective options to put an end to your design woes!
AI graphic design is fun
In addition to all the serious benefits of AI graphic design, there’s also the fact that AI-generated designs are fun. That’s one reason why even big brands with the best creative teams onboard have been experimenting with AI graphic design and actively speaking about these designs online.
Martini, a Bacardi-owned brand, became the first to use AI-generated visuals in its advertisement. Even brands like Heinz have not shied away from exploring this tech.
Here’s another one from Nutella where the brand used AI to generate packaging design.
So, if you are looking to create something refreshing, something away from your brand’s usual style, AI-generated designs might turn out to be great sources of inspiration to give you a fresh perspective.
Despite all the benefits that AI graphic design boasts of, there are several limitations at the moment that make working with graphic designers a more practical option.
Limitations of AI graphic design that a designer can overcome
In spite of the fact that AI graphic design tools can create some refreshingly new designs in a fast and efficient manner, there are a few limitations that a graphic designer alone can overcome. Let’s look at a few of them:
1. Creativity cannot be hardwired
Creativity comes in many shapes and sizes. Some of it is inherent in graphic designers, and some of it is acquired over years of experience. It’s hard for machines to mimic the level of creativity a graphic designer has.
AI design tools are trained with man-made datasets. The capabilities are therefore limited to the type of data used to train these tools. Even the most extensive datasets in the world cannot surpass the intuition and creative ideas of human designers.
We’ll give you an example. Take a look at a highly acclaimed print ad from Staedtler.
The above ad delves into the idea of how some of the greatest ideas have the humblest of origins. How some of the masterpieces start out as simple pencil sketches. From the placement of the copy “Where it all begins” to the connection of the idea to the brand, everything about this ad screams creativity. That’s a far-reaching idea for a machine to execute.
2. Emotional depth in designs
Let’s look at an ad to understand why emotions make a difference in ad designs.
If you ask a pattern recognition model (an AI system) to analyze the difference between the two images in the above ad, it would perhaps say that the dog is missing in the first one. But a human would understand the loneliness in the first image and how the dog alleviates it in the second one. A human creative team can come up with emotionally meaningful ad ideas like this that even the best AI art generators in the world cannot.
To create an emotionally deep ad, it takes a good understanding of human emotions. And this is not a strength of AI graphic design tools at the moment. That’s another area where human graphic designers have a clear advantage. Because when a brand is trying to communicate with its audience through its ads, it cannot do so effectively without adding emotions to its designs. Emotional marketing makes a huge difference, after all.
No matter how sophisticated an AI graphic design tool might be, no matter how well-trained it is to spin out unique designs, it can still not empathize with your human customers. But a graphic designer can! And that makes a huge difference.
3. The human touch
To add a human touch to designs, it takes an understanding of more than just emotions. It takes an understanding of cultural norms based on which the relevance and effectiveness of the ad concept differ. It takes a lifetime of living to understand the little things that feel “human”.
The copy in the below ad summarizes the above idea accurately. A machine cannot understand the relevance of a “baby’s smell” or “paying the toll for the person behind you” to a car ad. So, if you ask an AI copywriting tool to create ad copy, you probably will not end up with a compilation of relatable human moments like those listed in the below ad.
From the design perspective, the selective contrast between the opening and closing lines and the rest of the text is also hard to explain to a machine. Overall, for ads like these with a strong human touch, you still need a human creative team rather than an AI graphic design tool.
4. Perfection is a bane at times
AI graphic design tools are trained with several design basics including graphic design principles like symmetry, contrast, and hierarchy. And machines are known for avoiding human error. In other words, machines are built to deliver perfect results. While that might work in a more analytical realm it does not exactly suit a creative space like marketing design.
The imperfectly shaped clouds in a landscape painting, that single missing petal in a floral wreath painting – every little imperfection makes a design look authentic. And this matters a great deal in marketing designs as well.
We’ll give you an example. Have you noticed the Starbucks logo? At first, it looks perfectly symmetrical. But look closely at the face of the siren. The creative team of Starbucks revised the first version of the logo which looked too perfect to feel human. Such little details cannot be replicated by AI design tools. It takes a human designer to achieve that.
Kimp Tip: The need for imperfections in human-centric designs is why concepts like asymmetric balance and off-balance exist in marketing designs. These help shift a viewer’s perspectives and deliver the message in a more memorable manner.
Looking for a convenient way to achieve efficiency and convenience in your design workflow with human designers? Go for a design subscription like Kimp.
5. Common sense makes communication easier
AI graphic design tools deliver a very literal interpretation of your prompt. Even the slightest gaps in your prompts lead to a design that looks nothing like what you had envisioned. Communicating your ideas to a human designer might feel much more convenient than coming up with the most accurate prompt for an AI graphic design tool.
Take the below IKEA ad for example. The ad is to represent the idea that sleep is the most effective natural supplement your body needs. IKEA launched a series of ads on this theme to promote its bedding range. Try communicating this idea to an AI-based text-to-image generator and you’ll know what we mean when we say that communicating your ideas to a human designer is much simpler.
Unlimited design subscriptions – the perfect balance between design automation and designer collaboration
AI graphic design tools, as you can see, are pretty convenient to work with. And they can save a lot of time in your design workflow. However, they cannot create magic out of thin air. You cannot use them as standalone tools in your marketing design. You can however use them to augment a designer’s efforts. So, if you have an internal design team and if you wish to improve the team’s efficiency, investing in AI design tools might feel like a very practical move. Another option to save your time and money, if you do not have an in-house design team, will be to invest in design subscriptions like Kimp. Want to see how design subscriptions help automate your design workflow and simplify marketing? Sign up for our 7-day risk-free free trial.
| 2023-02-22T00:00:00 |
2023/02/22
|
https://www.kimp.io/ai-graphic-design/
|
[
{
"date": "2023/02/22",
"position": 6,
"query": "artificial intelligence graphic design"
}
] |
Embracing AI in Design: How Designers and AI can ...
|
Embracing AI in Design: How Designers and AI can Collaborate
|
https://fuzzymath.com
|
[
"Tré Wilson"
] |
In this post we'll explore how AI can be your wingman in the design process as well as the importance of human creativity and how to leverage AI to improve ...
|
Artificial intelligence (AI) is a frightening term that may be sending chills down your spine if you work closely with technologies that can be automated by it. Computers learn constantly and never stop designing. They never sleep, eat, or long for human interaction – and while this may describe some designers already – I don’t blame others for being scared. Whether we’re scared or not, we’re entering a new age of AI and the design industry has to go with it. AI is making its way into our lives, and let’s be real: some of us are excited while others are freaking out, thinking AI is going to take our jobs or limit creativity.
However, there is a bright side. When used correctly, AI can actually enhance a designer’s workflow and open up new possibilities. In this post we’ll explore how AI can be your wingman in the design process as well as the importance of human creativity and how to leverage AI to improve your work. By understanding the capabilities and limitations of AI, designers can embrace it as a valuable tool rather than fear it as a threat.
AI’s Role in the Design Process
AI is nothing new. In fact, humans have been toying around with the idea of a thinking machine that could perform tasks that typically required human intelligence since the first half of the 20th century. The term “artificial intelligence” was coined in 1956 at the Dartmouth Conference (a summer research workshop exploring neural networks, the theory of computability, and natural language processing and recognition). In later decades AI would go on to achieve massive leaps in capability with increased government funding through agencies such as the Defense Advanced Research Projects Agency (DARPA), and more recently with IBM’s Watson, the AI that Jeopardy contestants aren’t too fond of.
Today AI is beginning to reach a point many thought was possible only in science fiction, which is explored more in Sci-Fi UI: What Three Spaceships Can Teach us about the Future of User Interfaces. No longer is this technology confined to the realm of development and data science. So what is the role of AI in the design process? You can think of it as a designer’s exoskeleton. Exoskeletons are wearable technology used to assist people by providing extra strength and support so the wearer can perform tasks more efficiently. As with an exoskeleton, the designer isn’t replaced but enhanced, and retains control over the logic of the AI.
As Nadia Piet discusses in their article 3+ Ways AI and Design intersect (and Designers can get involved with AI), there are currently 3 opportunity spaces for designers to interact with AI:
Designing with AI (collaborative efforts that produce creative output)
Designing for AI (using human-centered principles to build systems)
Designing of AI (interaction and UX design for AI experiences)
Some spaces will lend more opportunity to certain disciplines, visual and graphic designers are more likely to find themselves having more opportunity to work with AI whereas UX, UI, and interaction designers are more likely to lead the design of applications that leverage AI.
We will discuss how designers can leverage AI when given those opportunities later in the article but for now it is important to understand that at present, AI lacks the ability to independently think and make decisions. It produces output by processing the input, running that information through its model (a large dataset of similar examples), making a prediction or decision, and finally generating an output. It is important to note that the accuracy of the output depends on the quality of the input or how one frames what they’re asking the AI to do.
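To ground the input-model-output loop described above, here is a minimal sketch that sends two differently framed prompts to an open-source image model via Hugging Face's diffusers library. The model checkpoint, the hardware assumption, and the prompts are illustrative choices; hosted tools such as Midjourney are driven through their own interfaces rather than code like this.

# Minimal sketch: the same pipeline, two framings of the input, two different outputs.
# Assumes a GPU and: pip install diffusers transformers torch
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

prompts = [
    "travel app onboarding screen",                                 # vague framing
    "clean, accessible travel app onboarding screen, calm colors",  # more deliberate framing
]
for i, prompt in enumerate(prompts):
    image = pipe(prompt).images[0]   # The quality of the output tracks the quality of the framing.
    image.save(f"concept_{i}.png")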
How Designers Can Leverage AI
Let’s explore the creative process of a designer and how they can leverage AI in a collaborative relationship. Although each designer may have their own individual approach to problem-solving and idea generation, there are typically common steps that are followed in any design project.
Researching user problems
Ideating solutions
Visual design
Validating those solutions
During the research stage, a valuable skill a designer can possess is the ability to see patterns when synthesizing large pools of data discovered in qualitative and quantitative research methods. This also happens to be an area where AI is particularly skilled in. AI can be used to analyze enormous amounts of user-data to find patterns that may have been missed by a human designer allowing them to have a clearer view of the data.
During the ideation and visual phase of a project there is no shortage of tools that can assist designers. One of the most popular is Midjourney. Designers can use this tool to generate ideas for various types of projects such as graphic design, typography, photography, and interface design. Let’s play around with it. First, I gave it a simple prompt:
Prompt 1: create a UI optimized for trust and accessibility for a user that just got onboarded to our travel app
Then I gave it a prompt with specifics I thought would be hard for an AI to fully understand.
Prompt 2: create a UI for a booking flow that accounts for the user recently losing their parents. Optimize for trust and accessibility for a user that just got onboarded to our travel app
[Midjourney output images for Prompt 1 and Prompt 2]
While Midjourney overuses imagery and shadows and does not produce legible copy, the output is a great start to spark ideas about what could become a functional UI. It’s interesting to see how a small change accounting for something a conscious being would feel affected the UI. Once that detail was included, the layout became lighter in color, used less imagery, and seemed to have more flows and elements suggesting it wants the user to get in quickly and easily book their trip.
Sometimes product user experiences can be tone deaf or overly optimistic and cheerful. Booking a hotel for a parent’s funeral is not a happy moment, and the last thing you want is for the product to congratulate you on your purchase or say something like “Hope you’re excited for your trip!” as many apps do. Would the user consider the palette and imagery too cheerful for their situation? How neutral should an experience be, considering the plethora of situations a user could be in? The designer will still need to answer these questions by putting things into context and accounting for micro interactions, UX copy and tone, user flows, and more.
Recently a tool that is still in beta called Galileo AI is revolutionizing how UIs are created. Galileo AI empowers individuals to input text prompts, allowing its model to sift through thousands of examples and compile pieces of various designs to generate a high-quality UI design directly into Figma.
The final output is impressive, and to many this is exactly what everyone was afraid of: tell the AI what you want, it outputs a design, and your job is gone just like that. However, I’d argue this isn’t so different from spending hours scrolling through Dribbble or any other website that helps designers explore ideas, except that what might take a few hours may now take only half that. In the end, designers will still need to know how to use their design tool of choice, understand the UX process, and, keeping the user in mind, customize those designs for the specific needs of the project.
At Fuzzy Math, we mostly design with AI tools to streamline our work by refining survey questions to suit Likert scales, generating creative ideas for blog posts and podcasts, and quickly exploring color options using a tool called Khroma. We also like to have a little fun and use our robot companions to generate greetings in our slack channels.
But we also have experience in designing for AI. In 2018, Fuzzy Math partnered with Narrative Science, a tech startup leveraging data and NLG (Natural Language Generation) in the business world*. Their AI platform was able to take in numbers and statistics and automatically generate intelligent narratives that were able to be understood by even the least data-fluent business users. With our research helping to shed light on the needs and pain points for those business users, together we were able to craft an MVP that built rich insights for users — all with the help of AI.
*Narrative Science was acquired by Salesforce in 2021, and has been integrated into Tableau’s Augmented Analytics feature set.
While AI has proved helpful in our design processes, it’s not a panacea for all design problems. For instance, designers still have to take time to proofread the output, or make the final decisions on color palettes. Regardless of whether an AI generates ideas for content, layout, or produces a high fidelity mockup, designers will still need to have a strong grasp of design foundations, an understanding of the project’s scope and the specific requirements of its users, and the ability to effectively communicate the reasoning behind their decision-making process, including why certain ideas should be adopted or rejected.
Other ways to use your AI exoskeleton include leveraging tools such as ChatGPT, Notion AI, and Jasper.ai to write content for wireframes and to edit or generate research questions and written content. Clients have expressed gratitude when we provide them with mockups that have realistic content and avoid Lorem Ipsum, as it gives them a more accurate glimpse into what the final product could look like. Lorem Ipsum can also confuse users during testing. You can create wireframes from sketches with a tool called Sketch2Code, remove backgrounds from imagery with Remove.bg, and enhance and upscale photos without sacrificing their quality with Let's Enhance.
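As a concrete example of the wireframe-content use case just mentioned, the sketch below asks a large language model for realistic placeholder copy instead of Lorem Ipsum. It uses the OpenAI Python client as one option; the model name and the prompt are assumptions, and any of the tools named above could fill the same role.

# Sketch: generate realistic wireframe copy instead of Lorem Ipsum.
# Assumes: pip install openai, an OPENAI_API_KEY in the environment, and access to the named model.
from openai import OpenAI

client = OpenAI()  # Reads the API key from the OPENAI_API_KEY environment variable.
response = client.chat.completions.create(
    model="gpt-4o-mini",  # Placeholder model name; swap in whatever model you have access to.
    messages=[{
        "role": "user",
        "content": "Write a 2-sentence welcome message and three short feature blurbs "
                   "for the onboarding screen of a budgeting app aimed at freelancers.",
    }],
)
print(response.choices[0].message.content)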
The Importance of Human Creativity
AI lacks the ability to independently think and make decisions. Clients will trust a human designer’s imagination and ingenuity to create experiences that go beyond simple layouts, interactions, and font choices in order to stand out and be remembered.
The reason AI is a threat to designers is its scalability, its speed of producing output, and its ability to learn rapidly. The reality is that pouring a great amount of effort into pushing pixels around a screen is no longer a sound use of time. Many designers may pride themselves on the ability to make an aesthetically pleasing landing page or app screen; however, it’s hard to ignore how similar these elements and layouts are. It’s incredibly simple for an AI to aggregate examples from across the internet and output a design that checks most boxes for usability. This comes at a cost: if tools like Midjourney and Galileo AI are just remixing examples that already exist, they are exacerbating the growing risk of homogenized UI trends, which raises the question of how a client receives a product that can deliver a unique and tailored experience to their users.
As UI kits and design systems become increasingly accessible to designers, visual design, while still a crucial aspect of UX design, is becoming a skill that may be automated in the near future due to the ease of combining these resources with AI. So what skill will designers need in order to stay relevant in a world where machines can do things faster and at a larger scale?
Designers will need to think like a designer and solve problems for human beings.
As of today an AI has not passed the Turing test (a machine’s ability to exhibit intelligent behavior equivalent to, or indistinguishable from, that of a human) and probably won’t for a long time, meaning the best thing to design for a human is another human. Instead of focusing on layouts and landing pages, designers will need to focus more of their energy on understanding users’ needs and thinking about the situations people might be in when interacting with our designs. Are they most likely to be angry or sad when using an app or website? Are they in a taxi where they get motion sickness? What are their motivations? How might their cultural backgrounds influence how they perceive color, images, or copy?
One glaring issue with AI is that it sometimes displays inherent bias and outputs incorrect information. Remember, AI is aggregating information from the internet, which was previously written by biased humans. Ethics have always been at the forefront of the AI conversation; it’s a major reason Google delayed commercializing its AI language model, LaMDA. They fully understood the power they were wielding and wanted to ensure LaMDA delivered accurate, ethically sound information. Designers share this responsibility too: when choosing a target audience or a feature to implement over another, you’re making an implicit statement about who you’re excluding from the experience. Fuzzy Math’s own Mercedes Mack wrote a thought-provoking article on the subject in her post, What does it mean to be an ethical UX designer? Designers need to be aware of these limitations and actively consider them, especially when working with and for AI.
A New World
Automation in manufacturing, self-checkout machines, chatbots… what do all these have in common? They were disruptive technologies that shifted labor markets. Plenty of people thought that when these technologies came to fruition there would be no more factory workers, cashiers, or call centers. Yet while those jobs may have ended for many, new ones were created, and millions of people continue to be employed in those fields. There will always be ebbs and flows in the labor market, and as technologies continue to advance they will create productivity effects that contribute to labor demand. A study conducted by the World Economic Forum predicts that by 2025, digital technologies will create at least 12 million more jobs than they displace, since these technologies will need people with the right skill sets to program, repair, and maintain them.
In the design world there are many skill sets that have more potential than others to be automated by AI in the near future. Like death and taxes, it is inevitable that businesses will seek a more efficient way to conduct business in order to save money. Designers need to be aware of this reality in order to adapt to this new world.
The graph below illustrates the skills that will probably be replaced by AI first. As stated before, technical skills will most likely see automation before more strategic or human-interactive skills such as conducting user research and project management. It’s not impossible that these strategic skills will be replaced, but it is unlikely in the near term that people will feel comfortable taking insights or strategic brand ideas from an AI with no human input. Again, there is no substitute for the human experience: only a human knows what it’s like to be human.
“Change is the law of life. And those who look only to the past or present are certain to miss the future.” – John F. Kennedy
AI is a tool with profound capabilities that has the potential to aid designers, exponentially making their work and process more efficient. It will also disrupt the current status quo shifting the dynamics and empowering those who can adapt, understand, and collaborate with AI. AI can’t solve all problems and currently can’t independently think. Designers have the power to prompt AI and effortlessly collaborate with it and leverage its potential to achieve more thoughtful, usable designs. There will be challenges along the way regarding ethics, shifting labor markets, and output validity, however designers have dealt with change before whether it be a client scrapping an idea at the last minute or a user not showing up for an interview, we will adapt and become better than ever.
Sources
| 2023-02-22T00:00:00 |
2023/02/22
|
https://fuzzymath.com/blog/embracing-ai-in-design-how-designers-and-ai-can-collaborate/
|
[
{
"date": "2023/02/22",
"position": 11,
"query": "artificial intelligence graphic design"
}
] |
These Artists Are Using AI as a Creative Partner. See How!
|
These Artists Are Using AI as a Creative Partner. See How! · Worklife Blog
|
https://www.worklife.vc
|
[] |
Design Templates: Use AI tools to create design templates for logos, branding, social media graphics, and other marketing materials. Offer customization options ...
|
When Michelangelo and his team spent five years painting the ceiling of the Sistine Chapel, artificial intelligence wasn’t even a figment of the imagination. Today, artists are using AI art generators to spin up similar images in 5 seconds. Welcome to the age of the AI artist!
Artistic industries are experiencing one of the most significant disruptions in centuries with the meteoric rise of AI tools for art. From paintings to sculptures, AI is being used by artists to generate unique works that challenge our perceptions of what’s possible.
Some critics argue that AI art generators undermine the essence of creativity and human expression. But supporters of AI-generated art contend that it can complement and enhance human creativity rather than replace it.
From using AI-generated color palettes to producing complex digital sculptures, artists are embracing the power of AI to revolutionize the way they create. With AI, artists can experiment with new techniques, explore unconventional materials, and push the boundaries of what we traditionally consider art.
It’s also pushing the boundaries of who we consider an artist. AI is making becoming an artist more accessible than ever. Previously, art has been looked at as a space for wealthy participants and fans. With the right tools at their disposal, anyone can unleash their inner creative genius and bring their artistic vision to life.
Generative AI is already being used by jewelers, nail techs, costume designers and all types of creatives.
Love this series by jewelry designer Cristina Beller made with Midjourney. pic.twitter.com/yd3FmEhviL — Bri Kimmel (@briannekimmel) December 11, 2022
At Worklife, we’ve kept our eyes on the rapid advancement of AI’s impact on the creator economy, and we’ve proudly invested in companies like Stability AI and Artifact, which use AI technologies to help their users work more efficiently and creatively. Needless to say, we are seeing incredible use cases of AI and are thrilled to see its impact on the world.
The debate around AI-generated art is far from over, but one thing is clear: this technology is here to stay. So whether you’re a traditionalist or a tech enthusiast, there’s no denying the potential of AI to revolutionize the art world.
But let’s not just talk about the theoretical possibilities of AI in art. Let’s dive head-first into the world of real artists embracing this technology as a creative partner – not a threat.
We explore the works of several innovative artists using AI to enhance their creativity and push the boundaries of art. From interior design to visual arts, these artists are breaking new ground and proving that AI and human creativity can coexist in unexpected ways.
We’ll take a look at the following use cases and specific artists taking their craft to the next level with AI:
The impact of AI on architecture
Powering interior designers with AI
Expanding possibilities in graphic design
Creating realistic animations with AI
Get ready to be inspired by some of the most innovative and thought-provoking art created with the help of AI! But first, a refresher on generative AI for artists.
Get Up to Speed on Generative AI Art
Art has always been a medium primed for self-expression and creativity, but with the emergence of AI, we’re witnessing a new era in the industry.
You might be surprised to find out that the use of AI in art actually has a long history, dating back to the 1950s and 1960s when algorithms were used to create computer-generated geometric shapes and patterns. The rise of computer-aided design in the 1970s and 1980s paved the way for today’s AI-generated art tools.
Generative AI art is created using algorithms that generate images, text, animation, music, or other creative outputs. These algorithms are designed to develop new, unique works of art based on various inputs, such as data, rules, or randomness.
Through machine learning, the AI model is trained on a data set of existing art and then applies that knowledge to generate new art with similar styles, patterns, and characteristics. The output can vary widely depending on the prompts, but it often produces striking, unique, aesthetically pleasing, and conceptually rich artworks.
Unlike traditional art forms, which rely on the artist's creativity and skill, generative AI art is a collaborative effort between the artist and the machine. The artist provides the inputs and the parameters, while the machine generates the final output, resulting in a symbiotic relationship between the artist's vision and the machine's ability to create.
One of the earliest examples of generative AI art was the work of Harold Cohen, a British artist who created a program called AARON in the 70s. AARON was an algorithm that could generate drawings and paintings, and over the years, Cohen worked with the program to create a wide variety of stunning artworks.
Since then, countless other artists have explored the possibilities of generative AI art. Notable examples include the work of Rafael Lozano-Hemmer, who creates interactive installations with AI to generate images based on the movements of the viewers, and Memo Akten, who creates mesmerizing visual art using algorithms that mimic the behavior of natural systems.
Recent popularity of generative AI art can be attributed to a few factors:
Generative AI art tools are more accessible than ever before.
The world at large is more curious about AI than ever before.
The big win in tech is more people are writing, designing, and having fun online again thanks to generative AI. — Bri Kimmel (@briannekimmel) October 30, 2022
So what sparked the latest enthusiasm for AI art generators?
It’s a storm that’s been brewing since Google launched DeepDream in 2015, and it hit a new peak last year when Stability AI rolled out its latest release of Stable Diffusion, the image generation model behind tools like NightCafe and Lexica. Stable Diffusion quickly joined the ranks of Midjourney, DALL-E 2, and Niji Journey, with the distinction of being an open-source model that developers can use to create and experiment with AI art tools. Along with the release of OpenAI’s ChatGPT, AI is all anyone in the tech world can talk about.
As a result, AI art generators like LENSA and Artbreeder got into the game, enabling users to easily splice and edit their artwork, generate avatars, and collaborate on art projects. Software like Photoshop and Figma began integrating AI into their features. Novice designers can now create professional-grade designs in minutes with AI-powered design tools like Canva and Adobe’s Sensei. Even Spotify released a new AI-powered DJ feature.
I'm so thrilled to share the coolest @Spotify feature to date: DJ, powered by AI. pic.twitter.com/H3NznyDQnR — Fabiano Souza (@iamfabiano) February 22, 2023
We’ve only scratched the surface of what AI can do for the modern artist. As AI technology advances, we can expect more tools to help creatives expand their scope and produce even more incredible pieces of art. We’re already starting to see its impact go beyond graphic design and expand into other creative industries.
Let’s take a look at what’s coming next.
Artists Using AI for Livable Art
The really cool thing about AI art is that it’s not just for designers and writers.
We’re also seeing a profound impact on areas of our daily lives like architecture and interior design. The potential for artist AI to shape how we build and inhabit spaces is just starting to be realized, and the results are really exciting.
Let’s take a look at two major use cases in the world of livable art:
Architecture
Interior design
Reimagining Cityscapes
One of the most promising areas of application for AI in architecture is in the design of building structures themselves. Typically, architects rely on time and expertise to design safe, efficient, and visually appealing structures. However, time can be limited, and on complex design projects it’s easy to hit a roadblock or overlook an important detail. But with AI, architects can use generative design to explore a vast range of design possibilities and find the best solutions for a particular project. This way, architects can more efficiently and accurately account for the unknown and eliminate options that won’t work.
But that’s not the only way AI can help. Here are a few use-cases for AI in architecture:
Building Design: AI can generate and optimize building designs by analyzing data such as building usage, location, and environmental factors. It can also assist in creating more energy-efficient designs, reducing waste and costs.
Construction Planning: AI can assist with construction planning by analyzing data such as material costs, availability, and construction schedules. It can also help with identifying potential risks and improving safety measures.
Smart Buildings: AI can create smart buildings that adjust their settings based on factors such as occupancy, weather, and time of day. It can also improve building security and access control.
Environmental Design: AI can assist in environmental design by analyzing factors such as sunlight, wind patterns, and natural lighting to create more sustainable and environmentally friendly buildings.
Urban Planning: AI can assist with urban planning by analyzing data such as traffic patterns, population density, and land usage to optimize city planning and improve urban infrastructure.
AI algorithms can help architects take their most surreal ideas and design them in a way that makes them possible. It can consider multiple factors, such as environmental conditions, local zoning codes, and structural requirements, and generate a range of design options that navigate all of these constraints. The resulting designs can be more efficient, functional, and visually striking than traditional designs while also saving time and money in the design process.
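The paragraph above describes generative design as searching a large space of options under constraints. The sketch below shows that idea at its simplest: randomly sample candidate floor-plate dimensions, discard candidates that violate made-up zoning and structural limits, and keep the best scorer. The constraints, scoring function, and numbers are all invented to illustrate the loop, not real engineering rules.

# Toy generative-design loop: sample candidates, filter by constraints, keep the best.
# All limits and the scoring function are invented for illustration.
import random

MAX_HEIGHT_M = 90        # pretend zoning limit
MIN_FLOOR_AREA_M2 = 400  # pretend program requirement
MAX_ASPECT_RATIO = 3.0   # pretend structural guideline

def satisfies_constraints(c):
    aspect = max(c["width"], c["depth"]) / min(c["width"], c["depth"])
    return (c["height"] <= MAX_HEIGHT_M
            and c["width"] * c["depth"] >= MIN_FLOOR_AREA_M2
            and aspect <= MAX_ASPECT_RATIO)

def score(c):
    # Reward usable floor area, lightly penalize height (a stand-in for cost and energy).
    return c["width"] * c["depth"] - 2.0 * c["height"]

random.seed(0)
candidates = [{"width": random.uniform(15, 40),
               "depth": random.uniform(15, 40),
               "height": random.uniform(20, 120)} for _ in range(10_000)]

feasible = [c for c in candidates if satisfies_constraints(c)]
best = max(feasible, key=score)
print({k: round(v, 1) for k, v in best.items()}, "score:", round(score(best), 1))

Real generative design tools replace the random sampling with smarter search and the toy score with engineering and cost models, but the explore-filter-rank loop is the same.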
Manas Bhatia is one of many bold architects embracing AI as the next big thing in architecture.
Inspired by the concept of buildings that grow, architectural designer Manas Bhatia used AI technology to bring his idea to life. Take a closer look at this fantasy world. https://t.co/gwlTE7VzTV pic.twitter.com/jzV5opDUKR — My Modern Met (@mymodernmet) November 7, 2022
Manas uses Midjourney to create surreal digital images for his conceptual projects. Bhatia’s vision includes residential skyscrapers decked out with plants, trees, and algae that purify the air – essentially creating air purification towers.
“The trial-and-error part is the most fun. We use AI to create images, and in the process, the AI trains itself and improves over time,” he told CNN.
Manas sees AI as another tool that can enrich the design process and spark new ideas, and he envisions a future where AI is integrated into the software architects use to model their visions.
From Data to Decor
Architecture isn’t the only industry bringing AI into the home. We’re also seeing it being used in the interior decorating industry. From AI-powered design software to personalized smart homes, this technology might just revolutionize the future of remote work.
AI can support the interior design process in the following ways:
Personalized Design: AI can assist in creating personalized design solutions for clients by analyzing their preferences, lifestyle, and behavior patterns.
Color Palette Selection: AI can be used to select color palettes that are harmonious and complement each other. It can also help select the appropriate color schemes based on room usage, lighting, and mood (a simple palette-generation sketch follows this list).
Furniture and Decor Selection: AI can help select furniture and decor items that match the overall design theme and the client's taste. It can also help identify the appropriate sizes, styles, and materials.
Space Planning: AI can assist in space planning by creating efficient and functional floor plans based on room usage, available space, and furniture placement.
Budget Management: AI can also manage an interior design project budget by suggesting cost-effective design solutions and identifying potential cost savings. It can also help track expenses and ensure that the project stays within budget.
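To show how even the simplest piece of this, harmonious palette generation, can be automated, here is a small sketch that derives an analogous-plus-complement palette from a base color using only Python's standard library. AI-assisted tools also weigh room usage, lighting, and mood as described above; this sketch covers only the color-wheel arithmetic, and the base color and rotation angles are arbitrary choices.

# Sketch: derive a small harmonious palette from a base color by rotating its hue.
# Uses only the standard library; the base color and rotation angles are arbitrary choices.
import colorsys

def hex_to_rgb(hex_color):
    h = hex_color.lstrip("#")
    return tuple(int(h[i:i + 2], 16) / 255 for i in (0, 2, 4))

def rgb_to_hex(rgb):
    return "#" + "".join(f"{round(c * 255):02x}" for c in rgb)

def palette(base_hex, angles=(0, 30, -30, 180)):
    r, g, b = hex_to_rgb(base_hex)
    h, l, s = colorsys.rgb_to_hls(r, g, b)
    colors = []
    for angle in angles:  # the base hue, two analogous neighbors, and the complement
        new_h = (h + angle / 360) % 1.0
        colors.append(rgb_to_hex(colorsys.hls_to_rgb(new_h, l, s)))
    return colors

print(palette("#3a7ca5"))  # base, two analogous hues, and the complement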
If you’ve ever been stuck in a design dilemma, you know that finding the perfect piece of furniture or decor for your space can be a pain. Luckily, interior design companies have already been experimenting with AI and augmented reality (AR), allowing homeowners to see a couch, lamp, or rug in their homes before they even buy it.
Last year, IKEA rolled out its first AI-powered design experience, IKEA Kreativ, enabling US customers to design and visualize their living spaces using digitized furniture on their smartphones. It creates a lifelike and accurate depiction of the room and furniture, and bridges the gap between eCommerce and the in-store customer experience.
But the use-case of AI in interior design doesn’t stop there. It can be used to predict trends, streamline project management, personalize design recommendations, and speed up the design process. Consumer-facing tools like Modsy and Havenly use AI to help homeowners become expert interior designers by offering 3D room renderings, making decor and furniture recommendations, and leveraging AR to see it come to life.
Much like traditional artists using AI as a creative partner, many designers are embracing the trend. Australian illustrator, Georgia Perry, has been sharing her visually stunning experiments with Midjourney via her Instagram profile @superhumansketchbook.
She’s learned to use Midjourney to design colorful spaces dripping in texture and warmth.
“More than anything, it’s helping me open my mind. Often it will generate something I never would have thought of myself. It’s our job as visual communicators to keep pushing ourselves outside of the box, so I think that’s really valuable,” she told Architectural Digest.
Tools like InteriorAI and Planner 5D are simplifying the design process for clients. Using artificial intelligence, InteriorAI can generate custom 3D designs that accurately reflect their clients' spaces and tastes.
Meanwhile, Planner 5D offers an interface allowing designers to easily create detailed floor plans, 3D models, and virtual design tours. Both tools save designers time and effort, while helping them create beautiful, functional, and personalized interior spaces for their clients.
As we continue to explore the potential of AI in art and design, the possibilities for innovation and creativity are truly limitless. Whether creating visually stunning buildings, optimizing interior spaces, or designing more sustainable structures, AI is poised to transform the way we build and inhabit the world around us.
Artists Using AI for Digital Art
With AI tools and algorithms becoming increasingly accessible, artists are reaching new heights of possibility with design, empowering creatives to scale their projects like never before.
What used to take hours of tedious work can now be accomplished with a few written prompts and clicks thanks to AI. It’s become a great way for content creators to avoid creative burnout when they get stuck on a project. And, with the ability to quickly generate and manipulate massive amounts of data, graphic artists can create intricately detailed works of art that would have been impossible a few years ago.
From creating complex 3D models to generating stunning visual effects, AI is changing the game for digital artists. But it’s not just about saving time and effort. AI is also helping artists explore new creative avenues, providing them with advanced tools and techniques to bring their vision to life.
Discover how AI graphic design tools have taken digital art to new heights with:
Midjourney
Niji Journey
NightCafe Studio
Deep Motion
Graphic Designers Prompting Pixels
Did you tune into the 65th Grammy Awards? If so, you’ve already experienced the magic of AI-generated art.
AI artist Refik Anadol landed the once-in-a-lifetime opportunity to display his ongoing series “Machine Hallucinations – Space and Nature” as the backdrop to the music awards show.
Refik is a pioneer in crafting the aesthetics of machine learning, using AI to create stunning digital artworks that push the boundaries of what is possible in the realm of visual design. Anadol's work is known for its dreamy, surreal aesthetic, achieved through his unique approach to using AI technology.
At the heart of his creative process is a deep engagement with the capabilities of machine learning algorithms. By feeding vast amounts of data into his custom-built AI systems, Refik is able to generate intricate, highly detailed images that would be impossible for a human artist to produce manually.
In 2021, he sold 8 NFTs from his Machine Hallucinations series for a whopping $5.06 million in Hong Kong – a new record for the most valuable single NFT collection sold in Asia – making his work and its value a great indicator of AI art’s potential.
Niji Journey, NightCafe Studio, Midjourney, DALL-E 2, and other major design platforms are at the forefront of integrating AI technology into the world of graphic design. By leveraging machine learning algorithms, these platforms enable graphic artists and designers to create stunning, high-quality designs with greater speed, accuracy, and efficiency than ever before.
One major area where AI is having an impact is in the creation of NFTs. These platforms use AI to generate unique, one-of-a-kind digital art pieces that can be sold as NFTs on blockchain marketplaces. For example, Niji Journey's AI-powered design tool, "Design Your AI," uses machine learning algorithms to generate custom art based on the user's input. This tool can help NFT creators quickly and easily create eye-catching, unique designs that stand out in a crowded market.
Similarly, Midjourney's "DesignAI" tool uses machine learning algorithms to automate the design process, allowing designers to quickly and easily create logos, branding materials, and other design assets with minimal effort. This tool can save graphic designers significant time and effort, freeing them up to focus on other aspects of their work.
When Sketches Come to Life
AI is also revolutionizing the world of animation, enabling animators to take their art to the next level. It’s already played a significant role in blockbuster films that use CGI, like Avatar and the Avengers. Using machine learning algorithms, animators can create more realistic and detailed animations – from facial expressions to body movements.
Don Allen Stevenson III is a Metaverse Creator who has really embraced the idea of being an AI artist online. After previously working at DreamWorks Animation as a Specialist Trainer, he now works with leading tech companies focused on bringing augmented reality, virtual reality, artificial intelligence, web3, and metaverse worlds together online.
One of his most recent projects is playing around with Runway, an AI-powered platform that provides a suite of tools to help create and collaborate on digital content. Using video footage and input images generated by Midjourney, he uses Runway to create artistic AI-generated animated videos.
Unleash the power of your imagination with #gen1 by @runwayapp! I am blown away by the AI video generating tool's ability to provide endless possibilities for the future of VFX, animation, storytelling, & content production. pic.twitter.com/HrtvJQpjeU — Don Allen Stevenson III (@DonAllenIII) February 12, 2023
For the technical novice, there are also tools like Pixelvibe’s Tokkingheads, allowing users to animate face images and make them appear to speak, sing, or move in various ways. With Tokkingheads, users can create and customize avatars with different facial features, hairstyles, clothing, and more, and then animate them using voice recordings or text-to-speech technology. The tool uses advanced facial recognition and lip-syncing technology to make the avatars appear to speak in sync with the audio input.
The output of the Tokkingheads animation can be saved as a video file, which can then be shared on social media platforms or integrated into other video projects. The tool has applications in various industries, including entertainment, education, marketing, and more. It’s just one of many examples of new tools empowering creators to make more engaging, eye-catching content.
“What I have learned from our users, and what excited me the most, is how accessible it makes storytelling and sharing what’s going on in people’s internal visual world. It’s especially great that this is accessible to people who were not trained professionally, but want to express themselves,” says Pixelvibe Founder, Lisha Li on the impact of AI on art.
As we continue to explore the possibilities of AI in digital art, it's clear that the potential for innovation is unlimited.
What This Means for You
By now, you should have stars in your eyes, ideas swirling in your head, and one big question to answer:
How can I make AI work for me?
AI art isn't just for creating cool and fascinating art. It's also reimagining work for creatives, providing opportunities for them to explore new art forms, and even carving out whole new career paths.
Whether you’re looking for a side-hustle while working full time or a change in careers, here are a few ideas on how to partner with AI to boost your income:
Personalized Artwork: Use AI tools to create personalized artwork for clients, such as portraits, landscapes, and abstract pieces. You can also offer customization options such as size, framing, and material.
Art Consulting: Use AI tools to help clients choose art pieces for their homes or offices. Offer consulting services that analyze their style, preferences, and budget to recommend suitable pieces from your AI-generated artwork catalog.
NFT Art Creation: Create AI-generated art pieces that can be sold as NFTs (non-fungible tokens) on blockchain platforms. NFT art has become a rapidly growing market and offers significant earning potential.
Stock Photo and Video Creation: Use AI tools to create stock photos and videos that can be sold on stock photography websites. AI-generated content is unique and offers an opportunity to create more inclusive designs without the usual difficulty of sourcing diverse imagery.
Design Templates: Use AI tools to create design templates for logos, branding, social media graphics, and other marketing materials. Offer customization options that allow clients to tweak the designs to fit their needs.
Digital Art Prints: Use AI tools to create high-quality digital art prints that can be sold on online marketplaces such as Etsy or your own e-commerce store. Offer customization options such as framing and size to cater to a broader audience.
Creative Services for Businesses: Offer creative services to design-led companies, such as logo design, branding, social media graphics, and website design. Use AI tools to speed up the design process and offer competitive pricing to attract clients.
With the right skills and tools, there are endless opportunities for creative entrepreneurs to turn AI art into a successful business.
I describe the current state of AI as a “creative thought partner”
It helps you remove awkward pauses from a podcast, choose a more creative title for a blog post…
AI is not coming for your job, it’s giving you the confidence to hit publish and keep creating. https://t.co/xqxEKgjNTA — Bri Kimmel (@briannekimmel) February 11, 2023
Looking to the future, AI will continue to play an increasingly important role in the art world and beyond. While some may see AI as a threat to traditional art forms, others view it as an opportunity for new forms of artistic expression and collaboration – and we can’t wait to see what ventures come next for artists and AI. In the meantime, we’ll be watching closely and investing in tech designed for the future of work!
Did this article spark your interest? Check out more here:
Want to learn more about the future of work for creatives? Take a peek at my discussion with Linktree CEO Alex Zaccaria:
| 2023-02-22T00:00:00 |
https://www.worklife.vc/blog/ai-artist
|
[
{
"date": "2023/02/22",
"position": 16,
"query": "artificial intelligence graphic design"
}
] |
|
Uizard Autodesigner 2.0 | AI UI Design Generator
|
Uizard Autodesigner 2.0
|
https://uizard.io
|
[] |
Autodesigner 2.0 combines the conversational flow of ChatGPT with Uizard's generative design capabilities and drag-and-drop editor.
|
The most popular UI generator just got even better!
What would you like to design today? Type out your ideas in plain text to design, prototype, and iterate in seconds.
| 2023-02-22T00:00:00 |
https://uizard.io/autodesigner/
|
[
{
"date": "2023/02/22",
"position": 47,
"query": "artificial intelligence graphic design"
}
] |
|
As AI in Hiring Becomes More Widespread, New Regulations Target ...
|
As AI in Hiring Becomes More Widespread, New Regulations Target Discriminatory Practices
|
https://www.biospace.com
|
[] |
New York City employers who use Artificial Intelligence (AI) tools in hiring will soon be subject to new regulations requiring them to notify ...
|
Courtesy Getty Images
New York City employers who use Artificial Intelligence (AI) tools in hiring will soon be subject to new regulations. Starting April 15, Local Law 144 will go into effect, and employers will be required to notify candidates when they are using AI in hiring.
Local Law 144 is the first piece of U.S. legislation regarding the use of AI in hiring. The NYC Department of Consumer and Worker Protection originally set an enforcement date of Jan. 1, but due to what it called a “substantial volume of thoughtful comments,” the date was pushed back, and the agency stated it will finalize the rule in the coming months.
As it’s currently written, the law would require employers to:
Conduct a bias audit on an automated employment decision tool prior to its use
Notify candidates and employees that the tool is in use
Outline to candidates the job qualifications and characteristics that the AI will use
If employers do not comply or violate any of the provisions, they will be subject to a civil penalty.
NYC isn’t the only government body to address this issue. In January, The U.S. Equal Employment Opportunity Commission (EEOC) posted a draft of a Strategic Enforcement Plan focused on reducing bias in AI hiring technologies.
And in October, the White House released a white paper titled “Blueprint for an AI Bill of Rights: Making Automated Systems Work for the American People.”
New Tools, New Rules
One of the chief motivators behind these types of laws is the possibility of discrimination and biases that may come as a result of these AI tools.
In May, the EEOC published guidance on how AI tools can be discriminatory. The agency wrote that because the AI is automated to look for specific keywords, qualifications and requirements, the software will likely exclude qualified candidates that don’t fit that exact mold.
One example given by the agency was a chatbot that may reject candidates with significant gaps in their resumes. The agency argued the bot would then screen this person out of the qualified candidates, even if that gap was caused by a disability or maternity leave.
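The concern is easy to see in code. Below is a deliberately naive, hypothetical screening rule of the kind the agency warns about: it rejects anyone whose work history shows a gap of more than six months, regardless of the reason. It does not reflect any specific vendor's product; it simply shows how a rigid rule can exclude otherwise qualified candidates.

```python
from datetime import date

def has_long_gap(jobs, max_gap_days=180):
    """Return True if consecutive jobs are separated by more than max_gap_days.

    jobs: list of (start_date, end_date) tuples, sorted by start date.
    """
    for (_, prev_end), (next_start, _) in zip(jobs, jobs[1:]):
        if (next_start - prev_end).days > max_gap_days:
            return True
    return False

# A qualified candidate who took parental leave in 2021 gets screened out.
candidate_history = [
    (date(2015, 1, 5), date(2020, 12, 18)),
    (date(2021, 11, 1), date(2023, 2, 1)),  # ~10-month gap before this role
]
if has_long_gap(candidate_history):
    print("Rejected by the automated screen")  # the kind of exclusion the EEOC describes
```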
At a Jan. 31 hearing titled “Navigating Employment Discrimination in AI and Automated Systems: A New Civil Rights Frontier,” EEOC chair Charlotte Burrows said 83% of employers, including 99% of Fortune 500 companies, now use some type of automated tool as part of their hiring process.
Work with the System, Not Against It
Of these, a smaller but still significant number of companies report using AI software in addition to automated tools in hiring.
In a poll BioSpace ran three weeks following Burrows’ statement, 22% of respondents said their organization uses AI tech in their hiring process. Many of the respondents are part of the life science industry.
Prestige Scientific, a life sciences recruiting and executive search firm, is one of these. Stephen Provost, managing director and co-founder, told BioSpace that the addition of AI in screening candidates at Prestige was implemented within the past six months.
“We can see the benefits of AI for the future,” Provost said. “The tool that we use updates information in real time, so it will learn from the different criteria we give it and bring back more accurate results when we conduct a search.”
He emphasized that life sciences candidates, specifically, can work with the system to decrease their chances of being excluded from the AI’s search. He said due to a large number of acronyms and jargon commonly used in the industry, candidates should include those specific words in their resumes.
“Most companies have some type of software filter…that looks for certain keywords,” he said. “In this industry, I suggest using both an acronym and writing out what it stands for because you don’t know how the person setting up the AI will query the database.”
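To see why that advice matters, here is a minimal sketch of a naive keyword query over resume text; the terms and resumes below are invented for illustration, and real applicant tracking systems are more sophisticated. A search for the acronym alone misses a resume that only spells the term out, so listing both forms covers either way the screen was set up.

```python
def matches(resume_text, query_terms):
    """Naive keyword screen: True only if every query term appears in the resume."""
    text = resume_text.lower()
    return all(term.lower() in text for term in query_terms)

resume_spelled_out = "Led oncology studies at a contract research organization for 5 years."
resume_both_forms = "Led oncology studies at a contract research organization (CRO) for 5 years."

print(matches(resume_spelled_out, ["CRO"]))  # False: the acronym-only query misses it
print(matches(resume_both_forms, ["CRO"]))   # True: both forms are covered
```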
Anne Hunter is the founder of Hunter Marketing AI, a consultancy focused on helping businesses integrate AI tools. She recommended fighting fire with fire—using AI software that automatically edits a candidate’s resume or cover letter to match the keywords in a job description.
Hunter highlighted how AI can help prevent discrimination.
“Optimistically, AI screening will help eliminate bias because it looks for a skills match between a candidate and a role instead of judging based on personal characteristics,” she said. “This is a step up from the old biases, such as shared college hobbies or perceived demographic abilities, that a hiring manager might be influenced by upon first glance at a resume.”
Indeed, unintentional biases from an automated recruitment tool are easier to work around than intentional biases from a real person. Still, Provost cautioned against blindly filling one’s resume with certain keywords and phrases.
“I would not suggest candidates add anything to a resume if they have no experience in that area in order to get an interview,” Provost said. “For many HR and talent acquisition professionals, it makes that candidate lose credibility instantly…it’s the number one way to stand out in a bad way.”
As these tools continue to evolve, job seekers must evolve with them if they want to keep up with the competition. Likewise, with new legal actions like New York City’s Local Law 144, employers will soon be forced to do the same.
| 2023-02-22T00:00:00 |
2023/02/22
|
https://www.biospace.com/as-ai-in-hiring-becomes-more-widespread-new-regulations-target-discriminatory-practices
|
[
{
"date": "2023/02/22",
"position": 84,
"query": "AI regulation employment"
}
] |
5 barriers to hyperautomation (and how to overcome them) ...
|
5 barriers to hyperautomation (and how to overcome them) adoption
|
https://www.unisys.com
|
[
"Subrata Mukherjee",
"Simon Price",
"Amith K K"
] |
Unlock the full power of hyperautomation to drive efficiency and reduce costs. Explore solutions to the five most common adoption challenges.
|
Short on time? Explore the key takeaways:
Hyperautomation is an advanced form of automation that combines AI, machine learning, and robotic process automation to automate a wide range of business and IT processes.
Challenges of implementing hyperautomation include resistance to change, faulty priorities, spaghetti code, broken systems, and leadership buy-in.
To overcome these challenges, organizations should start from scratch when planning for hyperautomation and have open meetings with managers and clear communication with employees.
Successful implementation of hyperautomation can bring benefits such as cost reduction, increased efficiency and adaptability, and the ability to attract top talent.
As technology advances, organizations are discovering new ways to operate more efficiently. Making its way into the spotlight is hyperautomation.
When executed correctly, hyperautomation can effectively reduce costs and attract top talent. Gartner’s report from 2022 found that “over 80% of organizations consistently self-report increased or continued investment in hyperautomation” because of its immense advantages. In fact, the global hyperautomation market is predicted to soar 23.5% between 2022 and 2028, from USD 9 billion in 2021 to USD 26.5 billion in 2028. This begs the question:
What is hyperautomation and does it live up to the hype?
Hyperautomation supercharges the automation that many organizations may already use to streamline as many processes as possible. Organizations can use hyperautomation “to rapidly identify, vet and automate as many business and IT processes as possible,” according to Gartner guidance.
Rather than involving one automation technology solution, hyperautomation combines artificial intelligence, machine learning and robotic process automation (RPA) to automate on a much larger scale. It’s this combination that yields the highest business reward.
This exciting trend has everyone talking, including our technology experts. During a recent fireside chat on “The Hype Around Hyperautomation,” Subrata Mukherjee, Simon Price, Ph.D. and Amith K K of Unisys shared their insights on this timely topic.
How companies can apply hyperautomation
Hyperautomation, as our experts explained, differs from automation in how it is applied. It expands and multiplies the benefits of automating tasks involving both people and processes.
“It eliminates human involvement in low-value processes and restructures businesses to optimize anything that’s possible to automate,” explained Price. “But for those worried about being displaced, don’t be. Hyperautomation is not meant to replace humans. Companies should be using it to boost their employees to higher-level tasks.”
For example, technology like RPA mimics human behavior by using software to automate repetitive, rules-based tasks. Besides freeing up employees for projects that excite them, hyperautomation also offers a solution to one of the biggest enterprise challenges – a reduced workforce.
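As a rough illustration of what such a rules-based bot does under the hood, the sketch below reads rows from a spreadsheet export and submits each one to a web form endpoint – the kind of copy-and-paste re-keying that RPA typically absorbs. The file name, URL, and field names are hypothetical, and a production RPA platform would add scheduling, retries, and audit logging.

```python
import csv

import requests  # third-party HTTP library

SUBMIT_URL = "https://example.internal/expense-form"  # hypothetical endpoint

def submit_expenses(csv_path):
    """Re-key each spreadsheet row into the web form, as a human clerk would."""
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            payload = {
                "employee_id": row["employee_id"],
                "amount": row["amount"],
                "category": row["category"],
            }
            response = requests.post(SUBMIT_URL, data=payload, timeout=10)
            response.raise_for_status()  # stop on the first failed submission

submit_expenses("expenses_march.csv")
```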
“Automation doesn’t take a day off. After you position it, it’s always running,” said Mukherjee. “Many people misunderstand hyperautomation as strengthening the weak points of RPA. But I would define it as a combination of people and process automation. You must look at automation with an end-to-end perspective that will involve change management.”
RPA is a good starting point if your organization wants to automate processes, but hyperautomation can take your company to new heights of productivity. Reduced human error, streamlined operations and improved adaptability to changing market conditions are just a few of its many benefits.
To reap these rewards, you must identify and address the biggest barriers to hyperautomation adoption.
Overcoming the top five people challenges
People are critical enablers of the adoption of any technology – hyperautomation included. Here are the top five challenges that may prevent people from embracing hyperautomation – and how to overcome them.
#1 – Resistance to change
For many people, change is uncomfortable. We are creatures of habit, many of whom are reluctant to invest the time and energy to shift processes. This is especially true if people don’t think there’s anything wrong with their original routine.
To encourage a positive mindset towards change, it’s essential to have open meetings with managers and clear communication with employees so that they feel involved in the process. Organizations must convey how requested changes will support efficiency and allow them to reduce time spent on lower-priority tasks.
With automation taking over routine tasks, it’s easy to see why some employees might fear losing their jobs to automation. While this is an understandable concern, hyperautomation is intended to provide more opportunities for employees instead of fewer. While hyperautomation takes care of repetitive, transactional tasks, workers can expand their focus on higher-level, creative tasks that often excite employees.
#2 – Faulty priorities
When planning for hyperautomation, it’s usually best to start from scratch. Many teams make the mistake of trying to improve existing processes, but those processes might not be the best options. Implementing with a fresh perspective can foster better solutions and savings in the long run.
“You shouldn’t focus on what the current process is,” said Mukherjee. “This is where you can miss automation initiatives. Instead, it would be best if you asked what your end outcome is. What is the goal you are looking to achieve? And what is the best possible process that we can do to achieve that goal?”
#3 – Spaghetti code
Diverse companies may already have multiple technologies integrated into their systems. If these systems are not implemented in a structured way or changes are made without proper documentation and testing, it can lead to a system that is overly complex and difficult to maintain. This is known as spaghetti code, which is not as tasty as it sounds and can lead to reduced system reliability and increased downtime.
Avoiding the dreaded spaghetti code takes good organization and communication skills.
“Teams should approach hyperautomation as a multi-step process,” said Mukherjee. “Streamlined goals and communication across your team will make your systems easier to manage. Focus on your organization’s goals and implement technology that works for each problem. You can also use middleware to keep automations from different technologies organized.”
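One lightweight way to picture the middleware idea Mukherjee describes is an event router that sits between the different automation technologies: each system publishes events to one place, handlers are registered per event type, and no automation calls another directly. The sketch below is a hypothetical illustration of the pattern, not any specific product.

```python
from collections import defaultdict

class AutomationBus:
    """Minimal event router keeping RPA bots, ML services, and scripts decoupled."""

    def __init__(self):
        self._handlers = defaultdict(list)

    def subscribe(self, event_type, handler):
        self._handlers[event_type].append(handler)

    def publish(self, event_type, payload):
        for handler in self._handlers[event_type]:
            handler(payload)

bus = AutomationBus()
# An RPA bot raises an event; an ML service and a notifier react independently.
bus.subscribe("invoice_received", lambda inv: print("Scoring invoice", inv["id"]))
bus.subscribe("invoice_received", lambda inv: print("Notifying finance about", inv["id"]))
bus.publish("invoice_received", {"id": "INV-1042", "amount": 1890.50})
```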
#4 – Broken systems
Hyperautomation can be used to optimize your business, but the right people, processes and technologies must be in place to succeed. Not laying a proper foundation for hyperautomation is like planting vegetables in a poorly tilled plot, handing the pitchfork to someone who has never gardened before, and then being surprised when the garden doesn’t yield prize-winning vegetables.
If your system is broken or suboptimal, adding hyperautomation on top will not fix the underlying issues. Instead, introduce hyperautomation when your people, tools and technology are mature.
#5 – Leadership buy-in
Automation is as effective as the people managing it and company-wide adoption starts at the top. Organizations need leaders who understand how to operate automation efficiently. If leadership does not buy into changes, their skepticism can negatively influence the acceptance and efficacy of new operations.
To ensure executive-level buy-in, technology teams must maintain clear, positive communication with leaders and implement changes from the top down. This means IT teams may need to make compromises to keep support from leadership.
Take steps to revolutionize business operations
Hyperautomation is a powerful tool that can maximize automation’s benefits and revolutionize business operations. It enables organizations to design streamlined, adaptable processes that remain competitive in an ever-changing market. However, organizations must execute hyperautomation with the right change management strategy.
To ensure your organization gets the most out of its hyperautomation journey, Unisys automation experts can design solutions targeting your unique needs and our change management experts can help you implement them.
Contact Us
Related content
Three keys to realizing expected TCO on hyperautomation initiatives
How Hyperautomation Can Help Solve the Talent War | Unisys
Discover how Unisys can help your organization unlock the limitless potential of AI.
Learn more
| 2023-02-23T00:00:00 |
https://www.unisys.com/blog-post/5-barriers-to-hyperautomation-adoption/
|
[
{
"date": "2023/02/23",
"position": 27,
"query": "workplace AI adoption"
}
] |
|
Benefits of AI in Education, with Examples
|
Benefits of AI in Education, with Examples – VirtualSpeech
|
https://virtualspeech.com
|
[] |
Speaking of the adoption of VR technology, Graeme Lawrie, Director of ... The skill sets required for future generations in the workplace will be more ...
|
In this article we’ll discuss AI and the benefits and changes it’s bringing to education.
We’ll also be sharing some useful links to tools and platforms using AI in education.
Some educators are understandably fearful of the recent advances in AI technology, particularly with the release of OpenAI’s latest language model, ChatGPT. Students can easily ask the platform to complete homework assignments for them in just a few seconds.
And by showing it a sample of previous work, the AI can even write an essay in the student’s own personal style – complete with grammatical and spelling errors, if desired – adding to the authenticity and to make cheating almost impossible to detect!
So, given all this, what should educators do? And is AI the end of education as we know it?
Generative AI for Careers Services: Friend or Foe? Webinar with Danny Mirza.
Ban AI?
For fear of plagiarism, of students not engaging in tasks, or of students not learning at all, several universities around the globe are resorting to a blanket ban on the use of AI.
Top French university Sciences Po, for example, now “strictly forbids the use of ChatGPT or any other tool using AI”, citing sanctions “which can go as far as expulsion from the establishment or even from higher learning”.
There are, of course, many AI detection tools out there – Originality.ai, GPTZero, and Writer.com’s AI detector, to name but three – but no such tool is foolproof.
Speaking to the Guardian for Australia, Toby Walsh, Scientia professor of artificial intelligence at the University of New South Wales said:
“There are technical solutions – digital watermarking, but you can just run another program over it to destroy the watermark. It’s an arms race that’s never going to finish, and you’re never going to win.”
Or Embrace AI?
Whether we like it or not, in the long run, AI technology is going to become a staple part of our lives – so rather than fight it, it seems clear that we, as educators, need to move with it; use it to our advantage and adapt the way we teach accordingly.
Qualification body the International Baccalaureate is taking this approach. Likening the AI language model to calculators or spell-check tools – which, when used correctly, can add value – it has announced that students will be able to quote from work generated by ChatGPT.
It’s almost inevitable that the nature of learning will change dramatically in the future – both in terms of the content and skills to be taught and also the way students learn.
Rather than focusing solely on written output, for example, students may be encouraged to spend more time working on devising pertinent questions, analysing the creative process, debating, group work, high level critical thinking, communication and presentation skills – all very human activities that the likes of ChatGPT cannot replicate.
In addition, more and more educators are starting to look to AI solutions combined with online or VR activities as part of their teaching methods – which can result in some of the most creative, motivating and highly effective immersive learning experiences.
Speaking of the adoption of VR technology, Graeme Lawrie, Director of Innovation and Outreach at Sevenoaks School, UK, recently wrote that his school was “moving away from simply ‘learning’ a subject or topic to ‘feeling’ the content.” He explained that the approach:
“allows a student to explore, to experience or to be involved in something, as if they are actually present in that environment or place.”
Benefits of AI to students
By immersing students in technology from a young age, we are likely to be giving them a head start in the future workplace – where use of such technology will inevitably become more and more commonplace. But that aside, let’s consider some of the ways in which AI is already bringing enormous benefits to learners:
Analyse learning gaps: Specially designed AI platforms can analyse past performance and identify learning gaps that could otherwise easily go undetected (a minimal sketch of this idea follows the list).
Personalisation: AI can adapt content to the student, creating a personalised learning experience rather than a one-size-fits-all approach.
Questions answered instantly: With the help of AI, students can have their questions answered within seconds rather than waiting for a human response – this has the further advantage of encouraging shy students to ask questions without fear of being judged.
Timely feedback & just-in-time learning: Students can receive more frequent and timely feedback at the point of learning, when the topic is still fresh in mind and they are more open to learning.
Engagement: Learning with AI allows for gamification and other engaging learning environments such as VR and AR, which can greatly increase student motivation and engagement.
Time saving: Using AI to help generate ideas or structure content frees up student time to focus on higher-level skills such as analysis or creativity.
Improved accessibility: With smart data gathering, custom tasks, and personalised schedules, the boundaries between students, teachers, and educational administrators can be bridged with the help of AI technologies. Barriers based on physical location, language, and money can be overcome as we move closer to 24/7 access for all.
Neurodiversity: Some neurodiverse students who may struggle with traditional settings or learning methods thrive in AI-powered environments.
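As a toy illustration of the learning-gap idea referenced above, a platform can aggregate a student's scores per topic and surface the topics that consistently fall below a target. The topic names and threshold here are invented for the example; real platforms model far more signals.

```python
from collections import defaultdict

def find_learning_gaps(results, threshold=0.6):
    """results: list of (topic, score) pairs with scores in 0..1.

    Returns topics whose average score falls below the threshold.
    """
    by_topic = defaultdict(list)
    for topic, score in results:
        by_topic[topic].append(score)
    return sorted(
        topic for topic, scores in by_topic.items()
        if sum(scores) / len(scores) < threshold
    )

quiz_results = [
    ("fractions", 0.9), ("fractions", 0.8),
    ("algebra", 0.4), ("algebra", 0.55),
    ("geometry", 0.7),
]
print(find_learning_gaps(quiz_results))  # ['algebra'] -> suggest targeted practice
```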
Benefits to educators
There are also many ways in which educators are already taking advantage of AI and these benefits are only set to grow:
Time saving: AI can take on many of the time-consuming jobs that educators are currently expected to do – for example, record keeping and marking. It can also help streamline the planning process, freeing up the teacher to engage in the creative and very human actions that are required to take the learning experience to the next level.
Idea & content generation: AI can save teachers time by creating content, ideas, and even learning objectives for teachers to review, amend, and use creatively in the classroom, project work, or assignments.
Answering student questions: In using AI as a personal tutor, students are not the only ones who benefit – teachers do too. More questions can be answered, and teachers can track what has been asked whilst focusing their efforts on designing interventions to further engage students and plug any learning gaps.
Analysing student performance: AI can track class attendance, assignment submission, and performance on specific tasks to help identify learning gaps or flag up worrying behaviour. This makes it far easier for a teacher to design appropriate interventions in a timely manner.
Motivating and engaging students: In today's fast-paced, technology-driven world, people crave fun and exciting methods of learning to keep them engaged.
Use cases
Moving on to some interesting use cases, we’ve selected just a handful, but there are so many innovative companies developing AI enhanced solutions to support and enhance learning in students of all ages.
Seneca: A UK based homework and revision platform that is so sure of its product that it guarantees its students will receive top exam grades or your money back!
Thinkster Maths: A digital maths tutoring platform that uses AI and machine learning to visualise how a student thinks and personalises content.
Knewton’s Alta: Designed for higher education, this adaptive learning courseware optimises the way students study and learn while completing assignments.
Gradescope: A platform that not only grades papers but also provides students with valuable feedback and insights that help instructors monitor progress and learning gaps.
VirtualSpeech: Focusing on all-important soft skills, such as presentation, meeting, and interview skills, VirtualSpeech uses immersive VR technology to replicate realistic practice environments.
InnerVoice by iTherapy: is an app designed to help neurodivergent children form connections between objects and language and helping them learn.
Conclusions
AI is developing at a rapid pace. It has already impacted the education system and will continue to disrupt.
Educators who stay put, relying only on old methods will find themselves increasingly at a disadvantage – whereas those who embrace the power that is AI and who adopt tools and platforms to reduce time consuming work will be freed up to focus on the more human and creative work of creating learning experiences that really count.
And from the students’ perspective, they are increasingly coming to expect instant feedback, creative, interactive learning activities and gamification as standard.
It seems inevitable that not only the way we teach but also what we teach will change with the continued development of AI. The skill sets required for future generations in the workplace will be more focused on higher level skills of critical thinking, creativity and communication.
Working in education in this era of rapid technological change is exciting and whether in terms of innovating, testing, implementing or simply continued student guidance, every teacher and educator has a role to play in helping define what the learning of tomorrow will look like.
| 2023-02-23T00:00:00 |
https://virtualspeech.com/blog/benefits-ai-education
|
[
{
"date": "2023/02/23",
"position": 43,
"query": "workplace AI adoption"
},
{
"date": "2023/02/23",
"position": 4,
"query": "AI education"
},
{
"date": "2023/02/23",
"position": 4,
"query": "artificial intelligence education"
}
] |
|
Workplace Technology: Why It Matters and How to Choose
|
Workplace Technology: Why It Matters and How to Choose It
|
https://blog.haiilo.com
|
[
"Ben Hannawi"
] |
When introducing a new workplace technology, one of the biggest challenges organizations face is low user adoption. ... Many vendors today use AI to make ...
|
Modern intranets have a social-media-like look and feel which most people are familiar with. Their personalized news feed can significantly increase employee readership and engagement in the workplace.
2. Instant messaging software
Most companies have already implemented business instant messaging platforms such as Slack and Microsoft Teams.
These solutions usually serve as a main channel for synchronous employee communication. Furthermore, they can be integrated with other workplace technology, and that’s when they become powerful productivity tools as well.
3. Workplace productivity and project management tools
According to research, from 2021 to 2028, the market for productivity management software is anticipated to grow at a CAGR of 14.2%.
There are many different productivity tools available on the market. From project management solutions such as Asana and Jira to CRMs such as HubSpot and Salesforce, productivity solutions fall under one of the most popular software categories on the market.
💡 Related: What is an Employee Engagement App and Why Your Company Needs One
4. Survey solutions
Consistently measuring the pulse of your workforce can go a long way in improving employee engagement, productivity, and retention.
Employee surveys are the easiest and most efficient way to get insights about the current state of your workplace and get valuable recommendations for improvement.
They can also enable managers to become better leaders by regularly getting bottom-up feedback from their teams.
💡 Related: Top 5 Employee Surveys to Implement [50 Questions Included]
5. Document management system
A document management system (DMS) is a system or process used to capture, track and store electronic documents such as PDFs, word processing files, and digital images.
Some of the most popular solutions include Google Drive and Dropbox. When choosing a document management software, it is important to evaluate its integration capabilities. For example, it is of utmost importance that your DMS can be integrated with your social intranet.
💡 Related: 14 Reasons Why the Future of Work Is Happening Now
6. HR software
Human resources departments have been undergoing significant digitalization aiming to improve various people-related processes and procedures.
Consequently, the HR tech market has been skyrocketing. Under this category, we can find solutions for compensation and benefits, talent acquisition, leave and time off management, employee scheduling, and many more.
7. QR codes
QR codes can be attached to equipment and inventory for easy tracking and management through scanning, simplifying the inventory process. Additionally, QR codes on machinery can link to maintenance logs, repair histories, and instructional videos, ensuring that all necessary information is readily accessible.
In the realm of workplace communication, you can turn PDFs into QR codes and use them to quickly share contact information, project details, and meeting locations among employees, enhancing efficiency and collaboration. They can also be included in emails or on bulletin boards to allow employees to quickly RSVP to events or access event details, streamlining the event planning process.
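For a sense of how little code this takes, the sketch below generates a QR code pointing to a hosted document using the open-source Python qrcode package; the URL and file name are placeholders. Linking to a hosted PDF, rather than trying to embed the file itself, keeps the code small and easy to scan.

```python
import qrcode  # third-party package: pip install qrcode[pil]

# Point the code at a hosted document rather than embedding the PDF itself.
handbook_url = "https://example.com/docs/employee-handbook.pdf"  # placeholder URL
img = qrcode.make(handbook_url)
img.save("handbook_qr.png")  # print it out or add it to a bulletin board or email
```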
| 2023-07-23T00:00:00 |
2023/07/23
|
https://blog.haiilo.com/blog/workplace-technology/
|
[
{
"date": "2023/02/23",
"position": 88,
"query": "workplace AI adoption"
}
] |
Automation and robotics trends in 2023
|
Automation and robotics trends in 2023
|
https://www.visualcomponents.com
|
[] |
Advances in Artificial Intelligence and Machine Learning; Advances in Smart Manufacturing; Impact of Robotics and Automation on Employment and the Job Market ...
|
Automation and robotics trends in 2023
In industries around the globe, leaders are deciding how to get the most from limited investment budgets. Never a straightforward task, this decision-making is only complicated further by economic uncertainty, workforce challenges, and fast-moving automation and AI trends. Is automation the answer? What role should smart manufacturing play, and how can tools like manufacturing simulation software help?
The pandemic and related supply chain issues seem largely in the past, but problems persist. Energy and material costs continue to rise, and in many industries, an ageing workforce is nearing retirement while recruitment remains difficult. Industrial automation is offered as the solution, but are there other manufacturing trends to watch and adopt?
This article aims to help directors, managers and other leaders in industries like manufacturing and logistics understand how technology can help address these challenges. Individual sections cover:
The State of Robotics and Automation in 2023
Economic Uncertainty
Advances in Artificial Intelligence and Machine Learning
Advances in Smart Manufacturing
Impact of Robotics and Automation on Employment and the Job Market
The Future of Robotics and Automation: Opportunities and Challenges
How Robotics and Automation Will Shape the Future
The state of robotics and automation in 2023
One way of automating a task or process is through robotics. A major advantage of robotics over dedicated or “hard” automation is flexibility. The robot can be assigned to a new task quickly and often at no cost. Conversely, with hard automation even a small change in the product or process can be expensive and difficult to implement.
Since the first industrial robot went to work in 1961, robots have been viewed primarily as material handling or tool manipulation machines. Today though, the definition has expanded to cover both physical and business process robotics.
In the field of physical automation, autonomous mobile robots (AMRs) have joined the four, five and six-axis machines populating factories the world over. These AMRs roam warehouses and distribution centers, moving materials and kits between storage, kitting and packing.
Process robotics refers to robotic process automation (RPA). This is technology used to automate repetitive paperwork and administrative tasks.
Today robotics and automation are in the vanguard of the fourth industrial revolution, or Industry 4.0. Smart sensors, communication technologies, analytics tools, machine learning and artificial intelligence, are enabling data-driven decision-making. In parallel, the European Union (EU) is pushing the idea of Industry 5.0 – a vision of connected smart manufacturing that puts people at the center rather than relegating them to the periphery.
Now, with the outlook for 2023 and beyond unclear, business owners, managers, engineers and technologists are looking to this data for insights into where to put investment funding.
Economic uncertainty defining 2023
Investment in automation is a vote of confidence in the future, but when future demand and growth are hard to predict, as is the case today, industry leaders become more cautious about where to invest. Two recent reports, one from research and business advisory specialists Forrester, and the other a survey commissioned by Visual Components, provide depth and color to this outlook.
Forrester, in their “Predictions 2023: Automation And Robotics” report, say to anticipate, “a modest deceleration of automation progress in 2023 due to uncertain economic conditions.” This is expected to increase focus on projects driven by efficiency and improving resilience, leading to:
Greater use of data generated by smart manufacturing and machine learning technologies
The efforts of Centers of Automation Excellence being directed more to efforts that support the entire business rather than subsections
A shift of budgets towards projects addressing business continuity-type issues (such as skills shortages)
The continued growth of the physical robot population
Supporting this last point, in their Industrial Robots 2022 review, the International Federation of Robotics (IFR) reported strong growth in robot numbers around the world. Furthermore, the Association for Advancing Automation (A3) reported that 2022 saw strong demand for industrial robots in North America.
The focus of the survey undertaken for Visual Components was to discover how manufacturing businesses are responding to the push towards Industry 5.0. Key findings included:
People-related costs are rising
Shortages of technology skills should not be overlooked
Economic uncertainty is holding back sustainability efforts
Advances in artificial intelligence and machine learning
Artificial intelligence (AI) and machine learning (ML) are currently two of the biggest trends in robotics and automation. While closely related, the terms are not interchangeable.
AI refers to software that is trained to perform a specific task, rather than programmed. This saves the programmer from having to anticipate every possible set of conditions, possibly saving time, but also greatly extending capabilities.
ML is part of AI. On their web page, “Artificial Intelligence (AI) vs. Machine Learning”, the engineering department at Columbia University describes ML as “… a pathway to artificial intelligence,” going on to say that ML, “…uses algorithms to automatically learn insights and recognize patterns from data, applying that learning to make increasingly better decisions.”
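A compact way to see the trained-rather-than-programmed distinction is a standard supervised learning workflow, sketched below with the open-source scikit-learn library: the classifier is fitted to labelled measurements of past parts instead of being given hand-written pass/fail rules. The feature values and labels are invented for illustration.

```python
from sklearn.ensemble import RandomForestClassifier

# Each row: [surface_roughness, hole_diameter_mm, weight_g] from past inspections.
measurements = [
    [0.8, 5.02, 120.1],
    [0.9, 5.01, 119.8],
    [2.4, 5.30, 121.5],
    [2.1, 5.25, 122.0],
]
labels = ["ok", "ok", "defect", "defect"]  # outcomes recorded by human inspectors

model = RandomForestClassifier(n_estimators=50, random_state=0)
model.fit(measurements, labels)   # the model infers the pattern from the data

new_part = [[1.0, 5.05, 120.3]]
print(model.predict(new_part))    # e.g. ['ok']
```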
AI and ML have a growing list of applications in robotics and automation. They are used to optimize route planning, provide decision support, improve safety, optimize material yields, and increase factory efficiency.
Examples include:
Automating detection of defects in images of parts or materials
Guiding robots performing loading, unloading and assembly tasks
Improving the ability of AMRs to navigate cluttered and busy surroundings
Advances in smart manufacturing
Industry 4.0 is about manufacturers digitizing processes for increased efficiency, higher quality and lower costs. Industry 5.0 is an initiative from the EU aimed at reminding manufacturers people will remain integral to manufacturing for the foreseeable future.
Without this, current automation trends will result in human workers having less autonomy over their jobs. The danger is that increasing use of sensors and analytics will result in everything being measured, effectively reducing human factory operators to little more than automata.
Research group Forrester proposes manufacturers adopt an “Automation Triangle” that merges the capabilities of hardware and software automation with human workers’ adaptability, creativity and resilience.
In this concept, repetitive tasks are handled by hardware automation, and software automation performs repetitive data processing and analytical functions. This frees people to supervise the automation and address issues like sustainability.
The approach offers human workers many benefits. Once freed from the unpredictability of current manufacturing activities, uncertainty and stress are reduced substantially. They have more opportunities to exercise their creative skills and significantly greater autonomy over their work. The benefit for employers is a workforce that’s more motivated and engaged.
Impact of robotics and automation on employment and the job market
For several reasons, factories will not become the “lights-out” operations once envisioned as the inevitable conclusion of industrial automation trends. It’s now understood that not every task is suitable for, amenable to, or done better by automation. Plus, despite the advance of AI, creativity, adaptability, and resilience have important roles to play. What’s more, financial constraints will prevent manufacturers from pursuing an “automation at all costs” approach. However, there will be more automation in places where it makes financial sense.
This will affect the mix of skills needed in future manufacturing facilities. The main challenges and considerations can be grouped under the headings of:
Integration costs
Technical challenges
Skills shortages
Security concerns
Integration costs
Industrial automation must be up-and-running in the least time possible. Three tools for making this happen are manufacturing simulation, robot simulation software, and digital twins. Used separately or preferably together, these enable the identification of optimal approaches by allowing the evaluation of more options. In addition, a digital twin facilitates virtual commissioning, which detects integration problems early while there’s still time and budget to remedy them.
From a workforce perspective, manufacturers will need people with the skills to use these tools. Alternatively, they will have to contract with partners that can use them and address integration issues on their behalf.
Technical challenges
One of manufacturers’ biggest challenges in adopting smart manufacturing technologies is choosing the right solutions. Two issues here are, first, defining the opportunity or problem correctly, and second, determining what form the automation should take. Manufacturing simulation is a powerful tool for exploring alternatives and determining which offers the optimal solution. Process modeling can, for example, be used to simulate operation of resources like AMRs.
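As an indication of what such process modeling looks like in practice, the sketch below uses the open-source SimPy library to model a small pool of AMRs serving transport requests that arrive every few minutes. The arrival rate, travel times, and fleet size are invented parameters; a commercial simulation tool would model layouts, charging, and traffic far more faithfully.

```python
import random

import simpy  # third-party discrete-event simulation library

def transport_job(env, name, fleet):
    """One material move: wait for a free AMR, then travel to the destination."""
    requested_at = env.now
    with fleet.request() as amr:
        yield amr                                 # queue until an AMR is free
        waited = env.now - requested_at
        yield env.timeout(random.uniform(3, 6))   # minutes of travel and handling
    print(f"{name} finished at t={env.now:.1f} after waiting {waited:.1f} min for an AMR")

def job_source(env, fleet):
    for i in range(10):
        env.process(transport_job(env, f"job-{i}", fleet))
        yield env.timeout(random.expovariate(1 / 2.5))  # new request roughly every 2.5 min

random.seed(1)
env = simpy.Environment()
amr_fleet = simpy.Resource(env, capacity=2)  # re-run with capacity=3 to compare wait times
env.process(job_source(env, amr_fleet))
env.run()
```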
Skills shortages
Many manufacturers face an ageing workforce and challenges in attracting new recruits. Automation and robots offer solutions in several ways.
Robots can perform repetitive tasks like welding, which is one example of a trade suffering high rates of retirement.
RPA and AI can take on repetitive and analytical tasks, likely performing with higher consistency than human workers can provide.
The high technology environment created by using advanced tools like AI and robot simulation software will provide a powerful draw to younger and highly educated technical professionals.
Security concerns
Business leaders are well aware of the need to protect their data, and this will only become more important as manufacturers become increasingly digitized. IT functions will continue creating separate Operational Technology (OT) networks, but the increased demand for technical specialists may prove a constraint to how fast they can move.
Cloud computing creates additional vulnerabilities: awareness is high, but safeguarding measures are needed nevertheless. One exception relates to applications where AI must quickly decide or respond. In these cases, more computing will be done at the edge than in the cloud so as to avoid latency issues.
The future of robotics and automation: opportunities and challenges
It’s clear robotics and automation technology are growth enablers. However, manufacturers must be willing and able to adapt to an environment that continues to evolve rapidly. The principal issues to address are:
Investing in R&D
Upgrading infrastructure
Providing training and education
Implementing strong security measures
R&D investment
Businesses should seek innovative ways in which to benefit from AI and other smart manufacturing trends. This will be done in a controlled and targeted manner to minimize the amount of risk taken on. Tools such as digital twins and manufacturing simulation will support these endeavors, permitting experimentation in a virtual rather than physical space.
Infrastructure upgrades
If one thing is clear from current automation trends, it’s the ever-growing role of data. Data underpins digital manufacturing, and factory systems and networks must be configured for the volumes and speeds required. If one example illustrates this trend, it’s the rollout of true digital twins. These require high-speed bi-directional data transfer to keep the physical and the virtual in sync.
Providing training and education
Already severe, skills shortages are only going to worsen. Unless manufacturers are willing to pay premium rates or delay implementation, they should explore opportunities to develop their own talent internally. Note that this applies as much to traditional trades like welding as to emerging industrial automation skills. Many educational programs, like those teaching the basics of robot programming, are available online at little or no cost.
Implementing strong security measures
Manufacturers deploying Industry 4.0 technologies are well advised to pay close attention to cybersecurity, especially if using cloud services for data storage and analytics. This requires hiring or developing people with the appropriate skills and expertise; failing to do so invites attack.
How robotics and automation will shape the future
Manufacturers the world over have little choice but to invest in advanced technologies like robotics and automation. Unlike in earlier years though, this increasingly means using AI and ML to analyze data, identify patterns and trends and support physical automation hardware.
Success will depend on deploying technology where it can have maximum benefit, which means taking a careful look at opportunities and options. As noted previously, manufacturing simulation is a powerful tool for supporting this effort. Rather than relegate people to the role of machine minders, it will be important to integrate them into the system of hardware and software automation.
This expansion of Industry 4.0 thinking may be new to some readers; others will already be questioning the role of human workers in the factory of the future. For both constituencies, we recommend our eBook on the subject of Industry 5.0. Download your copy today.
| 2023-02-23T00:00:00 |
https://www.visualcomponents.com/blog/automation-and-robotics-trends-in-2023/
|
[
{
"date": "2023/02/23",
"position": 21,
"query": "AI labor market trends"
}
] |
|
New trends in occupational mobility 2023 – TG
|
New trends in occupational mobility 2023 – TG
|
https://www.testgorilla.com
|
[] |
Artificial intelligence is also playing a key role in driving occupational mobility trends and leveling the playing field. As well as replacing humans in varied ...
|
What is the current state of the labor market?
Data from the US Bureau of Labor Statistics shows that the Great Resignation is far from over, with over 4 million Americans quitting their jobs in October 2022 alone. Despite issues with the economy, 21% of job seekers in that country report feeling so confident in the availability of jobs that they are happy to leave one without having another lined up.
Data from Pew Research also shows that in 2022, a lack of career advancement is the second biggest reason for this high rate of job exits. And, as technology increasingly takes over manual tasks like bookkeeping and assembly line work, people are looking to move into more personally fulfilling roles and avoid being made redundant by automation.
As author Jacob Morgan says, “Our organizations were originally designed for AI, but we just didn’t have the technology. Humans were forced to take on jobs that were mundane, repetitive, and mindless. Now that we have the technology needed… we can focus on being more human… on being creative, innovative, and empathetic.”
As people look for work that complements their skills and personalities, they also look to improve their work-life balance. The benefits of flexible work arrangements for employees are also still a major driver for occupational mobility. In one study focusing on the real estate industry, 54% of people said they would leave a job that didn’t offer flexibility on schedule and location.
Who has been most affected by increased occupational mobility?
Our report shows that a lot of candidates are moving into new industries, with finance and accounting, customer service, and IT top of the list.
While certain sectors lead the way, occupational mobility is high across many industries. Source: The State of Skills-Based Hiring 2022
During our study, we spoke to full-time workers from varied backgrounds in a range of positions. It’s interesting to note that career decisions are no longer dictated by increased educational attainment, with 44% of people moving into new roles already holding university degrees.
It’s also clear that it’s not just young service industry workers who are seeking new positions. Older, more established employees with many years of experience are also on the hunt for jobs that suit their lifestyles and offer new opportunities.
Across even high-level roles, people are seeking a significant change in occupation.
Source: The State of Skills-based Hiring 2022
However, in a job market that now spans five generations from Greatest to Gen Z, millennials still show the most drive to switch roles: 36.2% of individuals under 34 said they would be looking to change roles in the next two years.
Whether this uptick is due to the rise of AI or the trend of staying in job roles for less time, these new market conditions enable millennials to exercise greater control over their career paths. They are able to seek new opportunities that facilitate self-development and teach them new skills.
It is important to note though, that across all sectors, mobility has not been equal across demographics. In the next section, we’ll explore why this is the case and how companies can look to level the playing field.
The challenges of flexible work arrangements
Women are often disproportionately affected by the challenges of flexible work arrangements. Though flexibility around working hours and location is often essential for women, mothers particularly struggle with the blurring of home and work life, which may go hand-in-hand with reinforced traditional gender roles.
As labor market & welfare state researcher, Professor Heejung Chung says, “Women do a large chunk of the housework and childcare in heterosexual relationships… They are expected to — and do — exploit themselves at home. This means that working from home is used to expand childcare or housework hours.”
To promote equality, increased opportunities for flexible and remote working must be paired with support to help employees manage family responsibilities. Childcare services or paid family leave are two important solutions when looking to increase internal mobility or encourage employees to move from other industries.
Hiring inequalities
Even in 2022, there are still inequalities in hiring decisions. Our report shows that while 46.4% of women hope to switch roles in the next two years, fewer women than men have been hired in the past 12 months. And seven Black people were unemployed for every white person in 2022, according to data from the Economic Policy Institute.
The data also shows that recruiters are 13% less likely to click on a woman’s profile when making a hiring decision. Despite an increasing trend toward diversity and inclusion in the workforce, underrepresented groups are still clearly being discriminated against—consciously or unconsciously.
To level the playing field when it comes to occupational mobility, algorithms in applicant tracking systems (ATS) need to be configured for fairness. And, unfortunately, many traditional hiring methods like resumes and interviews are prone to bias.
In the case of CV-based screening processes, for example, analytics software may use information (such as gaps in work history perhaps due to childbirth) and eliminate potential candidates unfairly.
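One mitigation, sketched below under assumed field names, is to strip bias-prone attributes such as employment-gap length from candidate records before any automated screen scores them.

```python
from typing import Any

# Hypothetical field names; gaps in work history, for example, often reflect
# childbirth or caregiving and should not feed an automated screen.
BIAS_PRONE_FIELDS = {"employment_gap_months", "graduation_year", "first_name"}

def prepare_for_screening(candidate: dict[str, Any]) -> dict[str, Any]:
    """Return a copy of the record with bias-prone fields removed."""
    return {k: v for k, v in candidate.items() if k not in BIAS_PRONE_FIELDS}

candidate = {
    "first_name": "Maria",
    "skills_score": 87,
    "employment_gap_months": 14,
    "graduation_year": 2009,
}
print(prepare_for_screening(candidate))  # {'skills_score': 87}
```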
Challenges for graduates
In our report, we identify that many employers prefer to hire teen workers for low-skilled jobs and younger millennials for roles that require more experience. This leaves Gen Z graduates limited and forced to find work in areas that don’t align with their career objectives.
To compound their challenges, Gen Zs also struggle to differentiate themselves during the application process. In our survey, 56% of respondents felt that a lack of relevant experience on their resume prevented them from landing a role.
Traditional hiring methods do not allow recent graduates to distinguish themselves. Source: The State of Skills-Based Hiring 2022
| 2023-02-23T00:00:00 |
https://www.testgorilla.com/blog/new-trends-occupational-mobility-2023/
|
[
{
"date": "2023/02/23",
"position": 60,
"query": "AI labor market trends"
}
] |
|
Generative AI Will Change The World: Part 1
|
Generative AI Will Change The World: Part 1 — Matters Journal
|
https://mattersjournal.com
|
[
"Matters Journal"
] |
Will we all end up living off our universal basic incomes, plugged into AI generated metaverses, communing with artificial beings? Will we reach a point where ...
|
Words by Daniel Simons
In his book, Homo Deus, author and historian Yuval Noah Harari argues that the rise of the ‘useless class’ is one of the most dire threats of the 21st century. Andrew Yang, a candidate in the 2020 US Democratic presidential primary, was so concerned about automation that he ran his campaign on a platform championing Universal Basic Income.
The fear that exponential technological evolution poses an existential threat to the livelihoods of millions is not new, but the tech evangelists of recent past used to mollify their critics by arguing that innovation would take away the ‘bullshit jobs’ and free up humans to focus on the things that matter, like leisure, meaning and creativity.
Now, as the Generative AI explosion engulfs the world - almost without warning - it is clear that the robots are coming for the fun stuff too.
Suddenly everyone is on notice.
Science Fiction writer Arthur C. Clarke said that any sufficiently advanced technology is indistinguishable from magic, and Generative AI certainly feels magical. Even its creators don’t completely understand how it works.
While the technology might be unfathomable, the revolutionary impact it will have is undeniable. Pandora's box is open, but the shape of the future is still being decided. By leveraging the power of AI we could unlock unprecedented levels of human creativity and innovation, regenerate the world, and usher in a new age of abundance.
Or, if we are not careful, Generative AI could lead us to a world of redundant humans and unimaginable inequality. The only certainty is that we have entered a pivotal moment for Artificial Intelligence, and for humanity, and it will continue to evolve at a dizzying pace.
The ‘genie is out of the bottle’ and we all need to adapt or get left behind.
| 2023-02-23T00:00:00 |
https://mattersjournal.com/stories/generative-ai-will-change-the-world-part-1
|
[
{
"date": "2023/02/23",
"position": 14,
"query": "universal basic income AI"
}
] |
|
A 24/7 Twitch Stream of AI-Generated Celebrities Is Even ...
|
A 24/7 Twitch Stream of AI-Generated Celebrities Is Even Weirder Than It Seems
|
https://www.vice.com
|
[
"Samantha Cole",
"Anthony Franklin Ii",
"Stephen Andrew Galiher",
"Veronica Booth",
"Shaun Cichacki",
"Ana Valens",
"Luis Prada",
".Wp-Block-Savage-Platform-Post-Byline Font-Family Var --Wp--Custom--Typography--Font-Family--Secondary",
"Font-Size Var --Wp--Custom--Typography--Font-Size--Sm",
"Font-Weight Var --Wp--Custom--Typography--Font-Weight--Bold"
] |
The Athene AI Heroes game, which is heavily advertised on the AI ... One of the Singularity Group's goals is to pave the way to a Universal Basic Income through ...
|
A 24/7 live stream featuring AI-generated versions of real streamers, internet personalities, and comedians is a mesmerizing look at the potentially dystopian future of live-streaming. But that’s not the weirdest thing about it: the stream is the product of volunteers working commune-style for an organization called The Singularity Group, which was founded by a former pro gamer who has been described as founding a “pseudo-religion.”
Bachir Boumaaza, the former-pro-gamer-turned-Twitch streamer in question, goes by Athene online and his face and voice serve as an AI-generated host on stream, asking questions of similarly deepfaked guests on the Athene AI Heroes channel. Sometimes Boumaaza streams as himself live, while talking to generated versions of guests on his main channel, including AI parody versions of Ricky Gervais and Bill Burr. The Athene AI Heroes channel, which shows a deepfaked Boumaaza picking questions for AI-generated guests from a live (human-populated) chat, has been streaming non-stop for days. It features dozens of guests, including virtual simulations of Joe Rogan, Amouranth, Alinity, PewDiePie and Hasan Piker.
Videos by VICE
Reese Leysen, a spokesperson for Boumaaza’s philanthropic venture Gaming for Good, told Motherboard that this project started on February 5 with Boumaaza’s main Twitch and YouTube streams, and the 24/7 fully automated stream began a few days ago.
The stream is run by volunteers from The Singularity Group, according to its description. The group—which lives together in a compound in Germany—exists to further the ideals of a philosophy Boumaaza created in 2011 with a YouTube documentary laying out its ideas called “Athene’s Theory of Everything.” The philosophy can be called “neuro-Spinozism” or, as it was initially known, Athenism. “Athene’s Theory of Everything” ends with a link to Athenism.com, which now redirects to The Singularity Group’s website. A YouTube video posted by Boumaaza aimed at attracting more volunteers to the compound shows him describing The Singularity Group as a “movement.”
“With the incredible pace at which AI technology is evolving, we started to realize at some point last year that ‘AI influencers’ would soon become a real thing,” Gaming for Good’s Leysen said, “and we started working on our own in-house tech to see what we could come up with that would make for great entertainment both for the Athene Twitch & YT channels as well as for our Athene AI Heroes game.”
The Athene AI Heroes game, which is heavily advertised on the AI-generated channel, is a gacha-style crypto game that’s part of Boumaaza’s charity funding for the philanthropy startups he’s founded. Attempting to download the game takes the user to the App Store page for Mobile Minigames, which is developed by The Singularity Group under the auspices of a firm called The Naughty Cult. Leysen is also director of that firm, according to his LinkedIn.
One of the Singularity Group’s goals is to pave the way to a Universal Basic Income through crypto-token-based play to earn mobile games. In 2019, Boumaaza and The Singularity Group were accused of “manipulation, misogyny, emotional abuse and a lack of accountability at the top of the organisation,” according to Kotaku, which referred to the movement as a “pseudo-religion.”
In 2021, Boumaaza posted a self-promotional documentary on YouTube that told the story of his rise to fame and the formation of The Singularity Group, framing it as being merely philanthropic and denying allegations of being a cult or a scam. A crypto game developed by the team—Clash of Streamers, which now also redirects to Mobile Minigames—is described as a “shameless cash grab” and “trash” to fund philanthropy.
Leysen said that no part of the stream is pre-recorded. Because no one else they are aware of is doing anything like this, many viewers were skeptical that it was really AI-generated and assumed that some of it was scripted.
“The entire setup behind it is something that took quite some development time and combines many different AI technologies as well as a lot of creative tweaking of the machine learning process to make sure the result is something that is an entertaining parody rather than just an imitation,” Leysen said. The answers are generated using a combination of GPT-3 and their own datasets, they said, and the deepfake videos are generated using a system that they built in conjunction with open-source components. “For now we don’t want to give away too many details yet regarding the secret sauce that is allowing us to generate the responses on-the-fly,” Leysen said.
The voices are sometimes glitchy and the video is choppy, but the AI characters’ responses do sound convincingly like the real humans they’re parodying: AI Chappelle frequently starts its answers with “Let me tell you something folks,” and AI Rogan goes on about elk being a great example of gene-edited deer and asks Jamie to pull up a video of a bear doing jiu-jitsu. AI Jordan Peterson (who, in real life, has gotten very mad about AI generated versions of himself in the past) works the phrase “woke moralists” into almost every reply it gives.
In one of his own streams, Boumaaza says that the questions chat asks the AI guests are training them to give the answers they’re giving.
Leysen told Motherboard that during the streams, moderators stand by to filter out questions in the chat that might lead to offensive or less entertaining answers from the AI guests.
“During the 24/7 stream we combine very heavy multi-layered AI moderation with human moderators who also try to keep an eye on things so that we minimize the chances of the channel getting in trouble,” they said. “We also very much go out of our way to fine-tune the way the models are trained to make sure the general tone is light-hearted and we also moderate with the streamers in mind. This means questions that we think the streamers might be uncomfortable with get rejected.”
In the Discord, admin have created spaces for people to help train the AI guests with answers to prompts, like “opinions and beliefs” and “jokes and catchphrases” for each personality. “To make an AI streamer say believable sentences, one of the things we need is personal information about the streamer. You can imagine you are explaining the streamer to one of your friends that doesn’t know the streamer,” the admin wrote, and ask viewers to fill in details about each person’s life.
For example, under Elon Musk’s history prompt, someone wrote, “He is addicted to tweeting and uses twitter when he is on the toilet.” These answers from viewers, in theory, will be input into the GPT-3 model that the Athene AI Heroes stream is using, to teach it what Musk might typically say to a given question. There are also sections for people to submit video clips of each person to help train the AI audio and deepfake visuals.
AI generated content has quickly become the genre du jour for experimental Twitch streamers, as well as popular personalities who have spoken out against reckless uses of AI. In January, Twitch suspended AI-generated Seinfeld parody “Nothing Forever” for telling a transphobic joke, and in early February, streamer Brandon “Atrioc” Ewing confessed to watching deepfake porn of his colleagues, prompting many female streamers to speak out against image-based abuse.
This AI-generated mega-show, including parodies of some of the most naturally annoying online personalities and streamers out there, is hard to look away from. The faces aren’t very realistic and the audio is frequently glitchy, but it’s easy to watch for question after question just to see what absurd catchphrase or disturbing glitch they’ll come up with next. At one point, the chat asks the AI version of streamer Sniperwolf to sing a wolf howl, and it obliges, then proceeds to take several long seconds to moan like a demon. “That’s probably the closest I can get to singing,” the AI Sniperwolf says. “You like?”
| 2023-02-23T00:00:00 |
2023/02/23
|
https://www.vice.com/en/article/twitch-stream-of-ai-generated-celebrities-athene/
|
[
{
"date": "2023/02/23",
"position": 62,
"query": "universal basic income AI"
}
] |
ARiA - Our Big Data and Artificial Intelligence cloud Platform
|
ARiA - Our Big Data and Artificial Intelligence cloud Platform
|
https://www.repsol.com
|
[] |
Universal zero net emissions energy · Human ... "Repsol has launched an ambitious AI program that's making fundamental contributions to its business model.".
|
Recognized with the "Best private use and application of artificial intelligence within the company" award at the Ametic Artificial Intelligence Summit 2021 forum.
€200 M economic impact until 2022.
Provides support to more than 60% of the products generated by the digital program.
We organize large amounts of information to add intelligence to our processes and decisions
For some years now at Repsol, we’ve been immersed in an ambitious Digital Transformation Plan aimed at consolidating our energy transition. One of the pillars of this digitalization involves becoming a data-driven organization; and in order to achieve this, our Data & Analytics Hub developed ARiA.
This Big Data and Artificial Intelligence cloud platform centralizes all of the company’s data to develop analytical models and algorithms that help improve processes and decision-making.
ARiA supports hundreds of digital initiatives that are being carried out in our different business units. For example: InWell collects extraction data of wells from all over the world to be able to control them from Madrid. Another example is Vivit, which helps our customers save by detecting which electrical appliance uses the most energy at home.
Developed based on Microsoft Azure, ARiA includes various solutions for data ingestion, cloud configurations, data governance tools, and machine learning models. It also aids in the democratization of data as it enables anyone, regardless of their knowledge of analytics, to access information of greater value.
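As a generic, much-simplified illustration of the ingestion-to-model pattern such a platform centralizes (the column names, sample data, and model below are invented for the example and do not describe ARiA's internals):

```python
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

# Invented sample data: a tiny ingestion-to-model flow of the kind a
# centralized data platform automates at far larger scale.
raw = pd.DataFrame({
    "well_pressure": [210, 198, 225, 240, 205, 232],
    "flow_rate":     [3.1, 2.8, 3.4, 3.9, 3.0, 3.6],
    "output_bbl":    [410, 380, 440, 480, 400, 455],
})

features, target = raw[["well_pressure", "flow_rate"]], raw["output_bbl"]
X_train, X_test, y_train, y_test = train_test_split(
    features, target, test_size=0.33, random_state=0
)

model = RandomForestRegressor(n_estimators=50, random_state=0).fit(X_train, y_train)
print("holdout R^2:", model.score(X_test, y_test))
```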
Moreover, ARiA is fundamental as the basis for various initiatives that have been launched within the company related to the data culture. For instance, the information provided by this platform is used to organize Hackia, an innovation and entrepreneurship hackathon event that has already resulted in interesting proposals. Another initiative is Data School, a periodic training aimed at profiles that until now have not worked with data. Here, they acquire new knowledge and skills to have a better experience with the platform and be able to use the information and all the possibilities ARiA offers.
ARiA is regarded as a pioneering tool in the sector, which is demonstrated by the various awards it has already received. In fact, we've launched it outside the organization to assist other companies in deploying and accelerating the use of Big Data and AI in their fields.
Our calling is to keep evolving this platform and continue as leaders in digital transformation and the use of data. To this end, our Data & Analytics Hub comprises a highly-trained team with profiles from different disciplines such as engineering and data science, and it hasn’t stopped growing.
Would you like to be part of this revolution?
| 2023-02-23T00:00:00 |
https://www.repsol.com/en/innovation/digital-transformation/aria/index.cshtml
|
[
{
"date": "2023/02/23",
"position": 68,
"query": "universal basic income AI"
}
] |
|
OMG is the AI Coming For My Job?!???!??!!!?!?!? | Whatever
|
OMG is the AI Coming For My Job?!???!??!!!?!?!?
|
https://whatever.scalzi.com
|
[
"John Scalzi"
] |
... universal good, no matter what the techbros tell you–often, they negatively ... basic illustrations that are following mainstream expectations of line ...
|
OMG is the AI Coming For My Job?!???!??!!!?!?!?
Technically yes, as this New York Times story about people flooding science fiction magazines with AI-generated story submissions makes evident. But realistically, no, because all these generated stories are kinda trash, and anyway, it’s not the AI that’s flooding the submissions pile with crap, it’s people — scamsters looking for a quick shot of money, if Neil Clarke of Clarkesworld’s general supposition is to be believed. AI is not in itself to be blamed for scammy people who otherwise have no relationship with science fiction thinking what the AI is churning out is going to be sufficient to extract payment from a serious magazine.
Why are science fiction magazines getting hit so hard by this? I offer the supposition, in line with Neil Clarke’s, that it’s not about the subject matter, it’s the fact that science fiction magazines still actually pay for short fiction. The amount is small relative to costs in what we are now calling The Global North, but attractive enough for other places. And it’s more than other fiction venues often pay; fiction outside genre is now often paid in contributor’s copies. It makes sense that people trying to cadge a quick buck with this technology might try at these venues. I’m curious to know if The New Yorker’s fiction editor is getting this sort of nonsense as well.
If this is indeed mostly scamsters trying to shake money out of magazines, I expect the flood will lessen soon, if not ever go away entirely. The scamsters will see they’re getting no traction with the ploy and will head off to new cons, and the magazines will also employ some new filtering processes to punt out a lot of this nonsense. There will likely continue to be new attempts with each new iteration of AI text generators, but I don’t see them being any more successful then, either. As I noted on Twitter, if you’re prompting an AI to vomit out text, that one exposure you have to it might make it seem like it’s generated something sufficiently original, especially if you otherwise know very little about the genre. But if you’re an editor, you’ll see the current AI tropes dozens of times with each new iteration. Leaving aside the quality of the prose entirely, the sheer repetition of elements will remove the material from serious consideration.
(Plus the prose will suck.)
I asked ChatGPT if it was here to take my job as a novelist, and this is the response it generated:
No, I’m not here to take anyone’s job as a novelist. As an AI language model, my purpose is to assist and provide support in various tasks related to language and communication. While I am capable of generating text and even writing stories, my abilities are limited to what I have been trained on and the quality of my output is determined by the input and instructions provided to me.
Furthermore, writing is a creative process that involves imagination, emotion, and personal experience. As an AI language model, I lack the capacity for creativity and original thought that is essential for creating compelling stories. Therefore, while I can help you with your writing tasks, I can never replace the unique voice and perspective that you bring to your own writing.
Of course, that’s just what an AI intent on taking my job would say to lull me into a sense of security and complacency, wouldn’t it.
But, as it happens, I agree with the text here. AI isn’t sentient or conscious, its output is the result of what it’s trained on, how it’s been programmed, and how it’s been prompted. It can string sentences together but, as the bosses at CNet and Men’s Health discovered when they set AI programs to write informational articles and then had to pull and correct them, it has no ability to differentiate between truth and nonsense, and “knows” only what it’s been trained on. If you want a fast, infinite generator of competently-assembled bullshit, AI is your go-to source. For anything else, you still need a human. AI models will get better and more efficient at many things, but I do suspect writing a truly satisfying piece of original prose, short or long form, will elude it for some time.
This estimation, mind you, rather conveniently elides the fact that humans are already using AI to generate stories, articles and books, and are rushing to get them published, either through submission to established media outlets, or through self-publishing. But, and here’s the thing, those AI-generated text products aren’t particularly good, and absent significant human intervention, are unlikely to get better anytime soon. The amount of work required by a human to make AI-generated text go from “serviceable on the sentence level” to “actually good” is enough that one wonders why one wouldn’t just skip the AI-generating text part entirely. It would be less work. But then, I can write, so I see that part as extra steps. Someone else might not.
In my line of work, I don’t think AI-generated prose put into the stream of commerce is going to significantly impact the highly curated end of the book market, i.e., the books put out by established publishers. This end of the publishing world is populated with known quantities, i.e., already-known authors, series and franchises, and get their work into bookstores, which adds another level of curation, with respect to what books show up. On the other hand, I suspect AI-generated prose is going to offer a real challenge to indie and self-pubbed folks. They are inevitably going to share the same market spaces as AI-generated prose, and will have to work extra hard to differentiate their work from a flood of AI books. There’s also the added complication that in programs like Kindle Unlimited, where payouts are from a communal pool of cash provided by Amazon, an already crowded field of titles will have their presence and payouts further diluted by a rush of quickly-created AI dreck.
(Unless Amazon and other such vendors work to limit AI-generated work, which they should, as its presence will drive down the value proposition of their all-you-can-read programs. What benefit are they, if it becomes too hard to find actual, readable work? But we’ll see if Amazon, et al actually agrees with that assessment.)
Because of who I am and what I do, at the moment I’m not especially worried that an AI is coming to take my job. They can’t do my job, yet or possibly ever, and also I am well-established enough that, so long as I keep writing entertaining work, there will still be people who will seek out my titles. A fair number of authors are in my shoes, across all genres — established enough, and with enough of an audience, to keep doing what they do for a while yet.
I also suspect newer authors will continue to come up, in part because editors will want what they offer: Good prose that will connect with other humans. It’s why the AI-generated stories aren’t making any headway with the editors of the science fiction magazines. They just don’t have what it takes, and short of actual consciousness in the AI, may not ever. That’s good for humans, writers and readers both.
— JS
| 2023-02-23T00:00:00 |
2023/02/23
|
https://whatever.scalzi.com/2023/02/23/omg-is-the-ai-coming-for-my-job/
|
[
{
"date": "2023/02/23",
"position": 83,
"query": "universal basic income AI"
}
] |
Not just sci-fi: Claytronics
|
L'Atelier
|
https://atelier.net
|
[] |
” What existing precedent exists for universal basic income, and how could it impact the virtual economy ... not to mention the one most of us still live in ...
|
IT’S PRETTY OLD TECH
The Claytronics Project was launched by Carnegie Mellon University over 18 years ago. Its mission: To create modular building blocks of claytronics, known as claytronic atoms or “catoms,” that could transform into dynamic, 3-dimensional forms. An underlying goal for the scientists involved was developing a new media type called “Pario,” where, for instance, “claytronics could be used in telepresence to mimic, with high-fidelity and in three-dimensional solid form, the look, feel, and motion of the person at the other end of the telephone call.” Did someone say metaverse?
Since 2006, Carnegie Mellon University has been joined by 11 academic teams, forming an international consortium leading most studies on claytronics. In 2021 they achieved the goal of creating mini 3D catoms, which “fit all the requirements of programmable matter.”
WHEN WILL IT SHOW UP IN MY LIVING ROOM?
Imagine living in a responsive home: Clap your hands and your desk becomes a couch, your computer a flat-screen projector. Sound good? It likely won’t happen anytime soon. “Achieving the Claytronics vision won’t be straightforward or quick,” admitted Carnegie Mellon University professors when the project started. The use of “vision” is telling: This is a slow revolution, not fast-win tech, and that’s reflected in our data: As of publishing, claytronics have generated around 40 publications and 10 presented patents so far.
In comparison, a related technology, self-healing materials, has generated $57 billion in investment, 209 publications, and 246 patents (per data from late last year). Though we haven't yet observed any market placement or startup investments in claytronics, one of its most concrete applications may be in performing endoscopies.
Still, researchers are working to make claytronics a reality. There now exists a virtual behavioural simulator for modular robots called VisibleSim, which will greatly facilitate testing and foster new ideas for application.
For now, cars won’t magically transform into hovercrafts and fly over traffic. But the technology is progressing. In short, claytronics aren’t just sci-fi.
In the "Not just sci-fi" series, Nathalie Béchet breaks down the research reality and market potential of technologies that capture our imagination.
| 2023-02-23T00:00:00 |
https://atelier.net/insights/not-just-sci-fi-claytronics
|
[
{
"date": "2023/02/23",
"position": 93,
"query": "universal basic income AI"
}
] |
|
How ChatGPT is Revolutionizing the Way We Work
|
UI/UX Design, Web & Mobile App Development Agency
|
https://venrup.com
|
[] |
By reducing the time spent on such tasks, ChatGPT can enhance productivity, reduce errors, and increase employee engagement in the workplace. Its use in ...
|
What is ChatGPT?
ChatGPT is a generative AI chatbot created by OpenAI that can communicate and provide information in a conversational manner. This technology has transformed the way we work, enhancing efficiency, productivity, and communication.
Let’s explore how ChatGPT is changing the workplace and the benefits it brings to businesses and individuals.
How ChatGPT is changing the workplace
Today, AI chatbots are popular because they communicate effectively with humans and help small and large businesses save time and resources while achieving better results in various ways.
AI can reduce the number of manual tasks your employees must complete, giving them more time to concentrate on the most critical and complex tasks.
Chatbots can be programmed to handle basic customer conversations, freeing up service representatives to handle more complex situations.
They can automate routine tasks such as scheduling meetings and sending reminders, which frees up time for employees to focus on more complex and creative tasks.
Chatbots can provide quick and precise responses to employee queries and provide training and support.
They can offer tailored recommendations based on an individual's work preferences and habits.
By reducing the time spent on such tasks, ChatGPT can enhance productivity, reduce errors, and increase employee engagement in the workplace.
Its use in different industries
ChatGPT is already being used in various industries, such as healthcare, education, and customer care.
ChatGPT is used in healthcare to schedule appointments, organize patient data, and perform other administrative tasks.
ChatGPT is being used in education to provide individual and personalized learning experiences to students, help improve their writing skills, and offer additional resources for further learning.
In customer care, ChatGPT provides 24/7 support to customers and answers common queries.
Advantages of using ChatGPT
Incorporating ChatGPT into a business environment has advantages.
It is more cost-effective than hiring additional staff, as it replaces routine tasks and training.
It automates monotonous tasks.
ChatGPT can be used to allow teams to collaborate more efficiently and effectively.
As it is configured to support multiple languages, it is an ideal solution for businesses with a global presence.
ChatGPT can reduce the risk of human error and improve data accuracy.
The chatbot can enhance decision-making, help businesses make informed decisions, and stay ahead of the curve.
Limitations of using ChatGPT
They can reduce the need for human input, leading to reliance on technology.
There may be risks to data breaches. Business owners should make sure that they have proper security measures to protect that information.
The biggest drawback to AI chatbots is that they limit creativity. Even though ChatGPT can generate new ideas and concepts, it will never replace human ingenuity, which is essential for specific tasks.
ChatGPT has the potential to be a game-changer in the way we work. However, its impact will depend on how it evolves and how businesses integrate the chatbot into their operations.
Conclusion
Overall, ChatGPT is revolutionizing the future of work by enhancing productivity, efficiency, and communication. It’s already popular as it's being used in various industries. As far as its benefits go, ChatGPT helps in reducing costs and improving decision-making. However, it may also lead to reliance on AI and limit human creativity. While it can potentially be a revolutionary game-changer, its impact will depend on how it evolves and how businesses integrate the chatbot into their operations. We are also proud to announce our partnership with DesignRush, a leading B2B marketplace that connects brands with the best agencies and professional services. DesignRush is known for its thorough evaluation process, ensuring that businesses find the right partners for their needs, whether in digital marketing, web design, or technology development.
| 2023-02-23T00:00:00 |
https://venrup.com/blogs/how-chatgpt-is-revolutionizing-the-way-we-work
|
[
{
"date": "2023/02/23",
"position": 38,
"query": "ChatGPT employment impact"
}
] |
|
Let's Not Squander the Huge Potential of ChatGPT for Hiring
|
Let’s Not Squander the Huge Potential of ChatGPT for Hiring – Lou Adler Group – Originators of Performance-based Hiring
|
https://adlergroup.co
|
[] |
ChatGPT is a remarkable tool. Unfortunately, it's already apparent that too many companies will use it to be more efficient in the “Filling Jobs Quickly” talent ...
|
“In my mind, being more efficient hiring the same people you’re now hiring is a trivial use of ChatGPT.”
ChatGPT is a remarkable tool. Unfortunately, it’s already apparent that too many companies will use it to be more efficient in the “Filling Jobs Quickly” talent market rather than in the “Raise Quality of Hire” talent market.
IMO, this is a waste of something with so much potential to change the trajectory of the hiring process around the world.
When the demand for talent exceeds the supply – which is 100% always true when it comes to hiring the top 25% and expanding your DEI efforts – you need to change your hiring POV from weeding out the weak to attracting the best. The infographic above (and here) shows these differences and the impact ChatGPT can have in both talent markets.
In my mind, being more efficient hiring the same people you’re now hiring is a trivial use of ChatGPT. Getting hiring managers to rethink how to define work and attract and assess a different type of person is its true value.
We’ll be highlighting some of these game-changing ideas at our next Hire with Your Head book club.
More importantly, at our next Performance-based Hiring workshop recruiters and hiring managers will work together on a case-study search project in the “Raise Quality of Hire” talent market. Talent leaders are invited to audit module 1 to get a sense of the strategic and ROI impact ChatGPT can have on improving quality of hire.
ChatGPT can be a game-changer when used to hire more fully-qualified diverse and high potential candidates who have a different mix of skills and experiences. Sadly, based on the articles I’m reading, too many companies will use it to clone their existing workforce more efficiently without solving the problem of improving performance, increasing job satisfaction or reducing turnover.
| 2023-02-23T00:00:00 |
https://adlergroup.co/lets-not-squander-the-huge-potential-of-chatgpt-for-hiring/
|
[
{
"date": "2023/02/23",
"position": 42,
"query": "ChatGPT employment impact"
}
] |
|
Using ChatGPT for Learning and Development content ...
|
Using ChatGPT for Learning and Development content creation: benefits, limitations, and mindful approaches
|
https://www.emergn.com
|
[] |
Choosing the right learning and development metrics to track the true impact of training · Employee development and training: What your employees crave and why.
|
Emergn’s learning content team constantly reviews and gathers market best practices, customer feedback, and consultant learnings to keep our Value, Flow, Quality (VFQ) body of knowledge current on our digital platform and within our customer engagements. There is a lot of content to be reviewed and created, and with that in mind, I decided to try out ChatGPT, the conversational AI tool that has been generating buzz, especially among those who write content frequently as a part of their role.
ChatGPT is a natural language processing tool that assists users with tasks such as writing, research, coding and more. Here is my experience using ChatGPT in developing learning content for Emergn and some tips and tricks for other L&D content creators who may be interested in leveraging this tool.
Approach with skepticism
Before diving into the potential benefits of using ChatGPT, it is crucial to approach new technology and AI with a healthy dose of skepticism. ChatGPT, like any AI, inherits the biases of the internet and the creators of the content it relies on. In fact, OpenAI, the creators of ChatGPT, have admitted to the politically biased nature of the tool, committing to address the issue in further updates. It’s important that users stay mindful of this when leveraging the tool to create content.
ChatGPT is limited in its ability to fact-check and provide nuanced judgments based on experience or specialization. Nor does it cite the sources of its data. It’s essential that L&D content creators complement AI-generated text with their own expertise, perspective, and additional research. They should also critically evaluate the information generated by ChatGPT and supplement it with human knowledge to ensure accuracy and effectiveness.
Early successes and benefits
Despite its limitations, ChatGPT has demonstrated early successes in reducing burnout in writing and saving time. To date, the learning content team has found success in using ChatGPT to improve existing writing, conduct research, apply feedback, provide insight into industry standards, and create compelling hooks and conclusions.
By quickly creating an early rough draft, our creators can spend more time weaving in Emergn’s perspective and guiding principles, as well as subject matter expertise and learner feedback to create content that better serves learners.
Like training a new employee, ChatGPT requires constant guidance and intervention to ensure it produces the desired results. Through prompts, you can give direction and refine the content ChatGPT generates; it continues to learn and provide smarter output, improving with each use of the tool.
Emergn’s Studio team has devised a preliminary process that has enabled them to produce the highest quality output from the tool.
1. Prime the AI with Emergn’s content, context, and perspective. Specify tone of voice. Add a command to adjust the tone of voice in line with the company identity or the needs of the script or piece of content. Build a library of tested prompts to yield responses that accurately represent your company’s perspective.
ChatGPT response when primed with content, context, and perspective
2. Prompt different commands. Make commands clear, explicit, and specific.
ChatGPT response when asked to be more clear, explicit, and specific
3. Make edits to AI outputs, then feed back into the tool for improvement. Enable the tool to learn through your direction to provide your desired results.
ChatGPT response when feeding back into the tool for improvement
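For teams that prefer to script this prime-prompt-refine loop instead of working in the web interface, a hedged sketch against OpenAI's chat API might look like the following; the system prompt, model name, and example edits are placeholders, not Emergn's actual prompt library.

```python
from openai import OpenAI  # assumes the openai package (v1+) and OPENAI_API_KEY set

client = OpenAI()

# Step 1: prime the model with company context, perspective, and tone of voice.
messages = [
    {"role": "system", "content": (
        "You write learning content for a hypothetical company. "
        "Tone: practical, encouraging, free of jargon."
    )},
    # Step 2: make the command clear, explicit, and specific.
    {"role": "user", "content": "Draft a 100-word introduction to 'giving effective feedback'."},
]
draft = client.chat.completions.create(model="gpt-4o-mini", messages=messages)  # placeholder model
text = draft.choices[0].message.content

# Step 3: feed human edits back in so the next output reflects them.
messages += [
    {"role": "assistant", "content": text},
    {"role": "user", "content": "Good start. Shorten it and add one concrete workplace example."},
]
revision = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
print(revision.choices[0].message.content)
```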
Adopting a learner-centric perspective
ChatGPT and other AI tools can be valuable resources for L&D content creators looking to reduce burnout, save time, and improve their content. However, it is important to approach them mindfully and supplement them with human expertise. Ultimately, L&D content creators understand the learner’s pain points and experiences best, and ChatGPT should serve as a tool to aid in their journey, not replace their expertise. Information literacy will be essential in applying ChatGPT to ensure creators are critical of the information they consume. By leveraging ChatGPT alongside human expertise, creators can create effective and engaging learning experiences for their learners.
| 2023-02-23T00:00:00 |
https://www.emergn.com/insights/using-chatgpt-for-learning-and-development-content-creation-benefits-limitations-and-mindful-approaches/
|
[
{
"date": "2023/02/23",
"position": 67,
"query": "ChatGPT employment impact"
}
] |
|
Top five ChatGPT uses in the workplace
|
Top five ChatGPT uses in the workplace
|
https://www.nintex.com
|
[
"Chris Ellis"
] |
Employees may not be able to turn all of their writing duties over to ChatGPT, but they will definitely be able to unload a number of the more routine ...
|
It’s both my job and my passion to keep up on the latest developments in technology. But for my dad, who is happily retired and can go days without opening his email: not so much.
When I was visiting my folks last week, I introduced my father to ChatGPT. After less than an hour of playing around with the sophisticated AI chatbot, he was obsessed and has since called me multiple times to discuss how he’s now using this AI in his day-to-day life.
Sure, there is a lot of hype about ChatGPT and how it will transform work and society. But we’ve already heard that recently about blockchain and the metaverse, only to see both struggle to gain mainstream traction—so far, at least.
However, seeing my dad’s reaction to ChatGPT really got me thinking. If someone like my father can find so many inspirational applications for ChatGPT, I can only start to imagine what it can and will do for organizations. And utilizing ChatGPT will only help to reveal new questions, ideas, and use cases.
Here are my recent thoughts about the top five applications of ChatGPT in the workplace (watch now):
1. Customer service
When contacting customer service, nearly 80% of people prefer to interact with a human versus an automated response. Frustrating bot-based customer service is bad for a brand. Yet, it is difficult and expensive to offer comprehensive human customer service. ChatGPT upends this calculus.
ChatGPT can learn and build upon answers and responses. When connected to an internal database, it will change our ability to respond to organization-specific customer service requests like “How do I get a permit?” or “Where is your nearest office?” Moreover, it will be able to create content and lay out answers in a visual format, like “Please send me a report of last month’s sales.”
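A minimal sketch of that "connected to an internal database" idea, with an invented lookup table and a placeholder model name standing in for a real knowledge base and deployment:

```python
from openai import OpenAI  # assumes the openai package (v1+) and an API key

client = OpenAI()

# Stand-in for an internal knowledge base; a real system would query a
# database or search index here.
KNOWLEDGE_BASE = {
    "permit": "Permits are issued at the City Hall service desk, Mon-Fri 9am-4pm.",
    "office": "Our nearest office is at 12 Example Street, open weekdays.",
}

def answer(question: str) -> str:
    facts = [v for k, v in KNOWLEDGE_BASE.items() if k in question.lower()]
    context = "\n".join(facts) or "No internal record found."
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system", "content": (
                "Answer using only the provided internal facts. "
                "If they are insufficient, say so and offer a human handoff."
            )},
            {"role": "user", "content": f"Internal facts:\n{context}\n\nCustomer question: {question}"},
        ],
    )
    return response.choices[0].message.content

print(answer("How do I get a permit?"))
```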
2. Language translation
We live in a shrinking world where it is increasingly essential to communicate across regions, languages, and cultures. Existing translation technologies help with short questions and commands and can interpret small blocks of text. That is a start, but ChatGPT has the reading and comprehension capabilities to understand longer materials—and take translation to the next level.
ChatGPT can determine an employee’s location and automatically translate pages of content into the person’s local language. For developers and other technical talent who don’t speak English or Mandarin, this will save an immense amount of time, effort, and expense to bring their work to market — time that can be rededicated into improving the experience of the end user. There is also great potential for ChatGPT to build automated assistive workplace technologies like real-time screen readers and sign language translations for people with disabilities.
3. Summarizing
Email was supposed to speed up work correspondence, so we’d all have more time to focus on deep work. Instead, we ended up filling our days with ever more correspondence with more and more people. Newer work messaging platforms have only increased the deluge of communications. From emails to messages to reports, ChatGPT can summarize and deliver key information to busy employees.
When incorporated with Word or Outlook, ChatGPT will be able to distill information and separate out what you really need to know. It will be able to take an email, turn it into an action item, and place it into a workflow that is shared with a wider team. Less time spent responding to correspondence and playing catch-up means more time spent on work that adds value to your company.
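A hedged sketch of that summarize-and-route step, assuming the OpenAI Python client and an invented email; the JSON keys are arbitrary choices a real workflow tool would define for itself:

```python
import json
from openai import OpenAI  # assumes the openai package (v1+) and an API key

client = OpenAI()

email = (
    "Hi team, the supplier confirmed the revised delivery date of 14 March. "
    "Can someone update the production schedule and let the night shift know? Thanks, Priya."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {"role": "system", "content": (
            "Summarize the email into one action item as JSON with keys: "
            "summary, action, owner_hint, due_hint."
        )},
        {"role": "user", "content": email},
    ],
    response_format={"type": "json_object"},
)

action_item = json.loads(response.choices[0].message.content)
print(action_item)  # hand this dict to a task or workflow system
```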
4. Content creation
The other side of the “ability to summarize” coin is the ability to create outgoing writing and visuals from email responses to reports to external-facing online content. Employees may not be able to turn all of their writing duties over to ChatGPT, but they will definitely be able to unload a number of the more routine responsibilities and get writing ideas and support where they need it.
Yes, ChatGPT can answer an email and fill out a leave request form or other in-house paperwork. But it can also create a 10-step health and safety hazard report or a blog detailing business performance or a new trend. In recent years, even the most technical of employees have been tasked with a degree of creative writing responsibilities in their communications and internal marketing of their work. ChatGPT can be a dependable partner to greatly alleviate that burden and, sometimes, remove it altogether.
5. Code validation
From workflow builds to suggestions, ChatGPT can kick off, review, and improve code. Whether getting you going with a solid framework or drilling in on existing work to determine how to enhance and optimize, ChatGPT supports coding much in the same way it supports writing.
Technological innovation increasingly relies on citizen developers who have a great idea or insight to meet the needs of their organization but have little or no formal training in programming. ChatGPT is a crucial tool to facilitate the work of citizen developers and help them to achieve coding goals that were once beyond their reach.
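A minimal sketch of that review loop, with a deliberately buggy snippet and a placeholder model name:

```python
from openai import OpenAI  # assumes the openai package (v1+) and an API key

client = OpenAI()

snippet = '''
def average(values):
    total = 0
    for v in values:
        total += v
    return total / len(values)   # fails on an empty list
'''

review = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {"role": "system", "content": (
            "You are a code reviewer. List concrete bugs, edge cases, and a suggested fix. Be brief."
        )},
        {"role": "user", "content": f"Review this Python function:\n{snippet}"},
    ],
)
print(review.choices[0].message.content)
```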
As I’ve seen with my father, ChatGPT has struck a chord with people who previously didn’t understand that AI could have a role in their lives. When AI goes from the theoretical to the practical, as in the case of ChatGPT, it creates a juggernaut of use cases and applications. And it will have a profound impact on automation and the world of work.
There are still a lot of things that ChatGPT can’t do. But we can keep learning and defining new ways for it to enable us to work in ways that are ever-more to our liking.
| 2023-02-23T00:00:00 |
2023/02/23
|
https://www.nintex.com/blog/top-five-chatgpt-uses-in-the-workplace/
|
[
{
"date": "2023/02/23",
"position": 82,
"query": "ChatGPT employment impact"
}
] |
Is ChatGPT about to replace you? - writefully
|
Is ChatGPT about to replace you?
|
https://writeful.ly
|
[] |
FAST-RISING SOFTWARE WITH CONSEQUENCES # · I LOST MY HEART (WELL, MY JOB) TO CHATGPT # · WHAT IS CHATGPT? # · WHY HAS IT BECOME SO POPULAR? # · WHAT ARE THE ...
|
Every company and workforce has that one person who’s always panicking about losing their job or being replaced. In a previous place of work (more on that later), we called him doomsday. And if you’re currently working in marketing, content writing, or anything that involves communication, chances are that you’ve encountered a great deal of this lately due to the stratospheric rise of ChatGPT.
And whether you’re team AI or team doomsday, I’m sure you’ve had your fill of buzzwords, article quotes, and stats being thrown your way. But humour me for a second and let’s play some paranoid workforce bingo. Check them off with me – “it’s faster than human writers”, “it can basically do our entire job”, or “most of us will end up being replaced by it”.
There was a time when I’d have probably scoffed at the idea of ChatGPT replacing a solid content writer. After all, AI and automation are things we’ve become accustomed to using in our daily lives, and while handy in moderation, no one is ever seriously going to consider choosing a glorified chatbot over the human touch.
…..Right?
FAST-RISING SOFTWARE WITH CONSEQUENCES
Since launching in November 2022, ChatGPT already has over 100 million active users, making it the fastest-growing app of all time in just over two months. When you consider that it took TikTok nine months and Instagram over two years to achieve this, it’s not difficult to see that there’s an appetite for AI content writing.
However, writers love to tell a good story, and our imaginations are known for getting the better of us at times. Surely, AI chat was something too repetitive and unpolished to pose any kind of real threat to any content writer?
I shrugged it all off and logged in for another day of work.
Then I lost my job.
I LOST MY HEART (WELL, MY JOB) TO CHATGPT
When I first heard about the rise of AI content writing, I wasn’t as concerned as some of my peers. In fact, I was relatively excited about it. My place of work had encouraged us to start using AI chat to generate posts based on random topics to see how effective it truly was.
The results? Not great. We all had a good laugh about it and collectively agreed that it was a long way from being a threat. But there was no denying that it was fast – putting out 10-20 guest posts in the time it took a single team member to craft one the old-fashioned way.
And while I’d assumed ChatGPT would be something content writers may need to work alongside eventually, that future happened all too quickly when I was told to hit the road.
In my mind’s eye, I’d always seen the rise of the machines as some kind of Terminator-inspired scene – with a rugged actor rallying the last of humanity to come together and conquer their digital overlords. Unfortunately, real life is far less interesting, and instead, I just stared at the wall, dumbfounded for the rest of the day.
WHAT IS CHATGPT?
Developed by OpenAI, which Elon Musk co-founded and has since distanced himself from, this AI writing tool is a conversational chatbot that has been designed to follow instructions and reply with an in-depth response. However, unlike many other variations of AI chat, this version was intended to be capable of following up further on its instructions – even challenging inaccuracies.
WHY HAS IT BECOME SO POPULAR?
Some people call it reactionary. Others call it necessary. And the reason for its sudden rise in popularity is probably somewhere in the middle. In a post-pandemic business landscape, FOMO has become more prominent than ever before, and businesses have found themselves flocking to automation to remain (at least as they perceive it) ahead of the curve.
Some of the other potential reasons may stem from the unfounded rumours of the AI itself actually replacing Google as a search platform.
In a dark irony, the demand for automated content has perhaps come from an emotional place, rather than a logical one. But can something whose popularity stems from this be effective enough to captivate and engage humans when all of the digital dust settles?
WHAT ARE THE BENEFITS OF CHATGPT?
As it currently stands, AI tech and AI content writing are considered by many to benefit businesses in the following three ways:
BETTER CUSTOMER SERVICE
With more choices available to them than ever before, companies are striving to develop a stronger bond with their customers and elevate their consumer experience. AI chat innovations have already been in place for years now, but ChatGPT is considered to be a future landmark in consumer satisfaction due to its ability to resolve issues quickly.
MORE COST-EFFECTIVE
Automation can significantly decrease the costs of running a business, and when you’re a small to medium enterprise hoping to make an impact at the fraction of the labour costs usually needed, it’s hard to resist the lure of it all.
For marketing companies and businesses that use content writers, this means gaining access to a large number of industry-specific blogs – without having to hire a writer.
IMPROVED EFFICIENCY
Automation can reduce errors in dull administrative tasks while giving human employees more time for other business matters. Customer service issues can be streamlined when customers talk to GPT-3, and response times are significantly shortened.
Do you work in an industry that’s swimming in vast amounts of data? This AI can analyse the lot in a matter of minutes. Are you a company in need of 20 guest posts per month? You’ll have them within a few hours.
Perhaps the biggest benefit of all is that many businesses – especially smaller companies, feel that fully embracing these innovations today will provide them with the means to be one step ahead of the competition in the future.
WHAT ARE THE PROBLEMS WITH CHATGPT?
Before exploring the five main problems people are currently experiencing with this AI writing platform, it’s worth noting that due to excessive web traffic and capacity limits, a lot of users can’t actually test out the software.
Once that obstacle is resolved, users are still going to be faced with the following problems when using ChatGPT:
SEXIST AND RACIST CONTENT
In a Twitter thread that had online users rather stunned, Professor Steven Piantadosi, head of UC Berkeley’s computation and language lab, showed multiple examples of AI writing and its inability to create content without racial or gender bias.
While steps were made to rectify this, it exposed the lack of ability that AI had to understand bias and how it forms technology itself. To make matters worse, labourers in Kenya were paid peanuts to sift through extremely traumatic content in order to try and resolve the problem.
LIMITED KNOWLEDGE
Some of the content generated by ChatGPT can be misleading, in more ways than one. First of all, AI copywriting tends to provide answers that aren’t always correct. And with no clear individual assigned to edit or challenge it as accurate, you run the risk of being fed completely false information.
A limited understanding of what’s deemed factual and appropriate means that the AI often fills gaps in data with incorrect information. In other words, despite saving time initially, you’ll be using that time up again (and possibly more) by having to sift through every last detail in order to check it’s correct.
YOUR BRAND WILL NO LONGER STAND OUT
If you’ve found a way to generate 25 posts about a subject, you can expect to receive them back quickly through AI content writing. But those blogs will inevitably contain information from OpenAI’s language model, and all of the overused cliche phrases and wording that comes with it.
This creates a situation where your content is suddenly overly detailed, cold, and increasingly similar to your competitors – who are also using AI copywriting as a shortcut. In time, any hard work that you’ve done to stand out and establish your brand as a thought leader can be undone in a matter of weeks.
IT ISN’T CURRENTLY MOBILE-FRIENDLY
It sounds odd to say it out loud, but the fastest-growing app in history isn’t actually mobile. Unfortunately, this has led to swarms of scammers claiming to offer this mobile-friendly tool in exchange for huge sums of money.
And while OpenAI is said to be scrambling to provide a genuine mobile app, in order to appease modern businesses accustomed to having it readily available, nothing has been officially announced.
PLAGIARISM IS A BIG PROBLEM
It didn’t take long for people to use AI writing to cheat, as seen in South Carolina, where a college student was found to be using ChatGPT to plagiarise schoolwork. Interestingly enough, the professor who detected the plagiarism had suspicions based purely on a sense that there was something very peculiar about the writing.
When you consider the ramifications of this in the modern business world, there are some serious worries to contend with. Not only are your competitors potentially able to rip off your content, but they can do so with just enough difference for it to be passed off as original.
A LITTLE HUMAN PERSPECTIVE
Of course, this could all be dismissed as nothing more than sour grapes from a bitter content writer with an AI axe to grind. But look closer. Have you noticed a pattern amongst the five issues of AI content listed above?
While the benefits, also listed above, do provide some use to companies and customers alike, the problems that can come from solely relying on this software run far deeper than those surface-based improvements.
Let’s summarise quickly:
lacks judgement on what is deemed appropriate to write
has a tendency to make up information that it doesn’t know
uses a language model that every competitor will have as well
lacks the ability to be mobile-friendly
is rife with plagiarism and unoriginality.
Small businesses live and die by the pillars of ethics, adaptability, and their position as a thought leader in their industry. When you combine those issues with dull and lifeless content that sounds like every other business, what are you offering your customers – besides a fast reply to a query?
Actually, forget that last question, and focus on the only question that really matters here:
Is this all you feel that your target audience deserves?
COULD AI CHAT REPLACE CONTENT WRITERS?
There’s nothing wrong with adapting to change and embracing the future. After all, automation was always destined to become a small part of the DNA of modern commerce and marketing.
But it shouldn’t be the only thing you’re relying on when you’re attempting to cultivate trust, develop long-term relationships, and answer some important questions about your company, like:
Who are you?
What do you offer?
Why should people care?
For small businesses, it must look pretty appealing to run with the AI model (it certainly was for one of my previous employers!). Nonetheless, entrusting the future of your company to these advancements alone simply cannot reap those big-time long-term rewards.
Small, short-term wins? Sure. You’ll shave off a few quid on your outgoings (although ChatGPT isn’t exclusively free) and churn out a lot of work in no time at all. But as we all know, humans aren’t that simple, and true relationships aren’t built on such unsteady foundations.
THE POWER OF GOOD STORYTELLING
The truth of the matter is that life and relationships aren’t simple. Nor should they be. Did I expect to wake up and suddenly need to find a new way of paying my rent for the foreseeable future? Nope. Is it fair? Who’s to really say what’s fair these days? But it certainly doesn’t feel fair from a human perspective.
Either way, real life certainly makes for some good storytelling. Can you add a few exaggerations for impact and dramatic effect? You bet. But one of the most thrilling elements of human-led content, and this piece in particular, is that I get to talk openly about a raw subject that all of you have inevitably experienced yourselves at some point.
My ability to express an inherently human experience and your ability as a reader to feel empathy, sympathy or even anger and disdain for what I’m saying cannot be replicated. And most important of all, we each have the ability to use our collective experiences and emotions, good or bad, to connect with people.
WHAT DOES AI CHAT MEAN FOR ALL WRITERS?
Telling your story – or the story of someone’s business – is a responsibility we should never take for granted. I personally feel this way because if a business owner comes to me and wants to really crack their brand’s narrative, I assume that they’re as passionate about what they do for a living as I am.
Is the concept of using AI chat in content all bad? Absolutely not. AI has the potential to help writing become more accessible than ever before. But the quality of content we are giving people access to, and the ethics surrounding how it is generated remains uncertain.
For most of us, AI content writing is a long time away from being up to scratch. And in this content writer’s opinion, there’s a big difference between understanding language and using that language to inspire, challenge, or provide definitive answers to some of life’s questions – something that can’t be summarised by a quick talk to GPT-3.
TENTATIVE STEPS TO A HAPPY MEDIUM
Perhaps someday soon, businesses will find a happy medium in this situation. Maybe the future of content will be some kind of hybrid that deftly blends automation and humans by using cutting-edge software that’s then carefully edited and quality checked by a content writer.
In short, for every creation, there needs to be a beating heart right there to add those intangible elements. It’s in those magic moments and creative flourishes that we are all able to inspire people and guide them from interested parties to loyal customers.
ChatGPT can’t replace you and your experiences as an employee – no matter what your industry is. Nor can it replace your understanding of what it means to be human, what it takes to run your own business, or your empathy for frustrated people who just want to find a clear and concise answer to a question.
And while modern life isn’t always so simple, you can make it simple for your customers, clients, and anyone else you please by focusing on those human elements and applying them to your craft. If you’re able to do that while writing from the heart, you’ll be able to tell the story of any brand the right way, and for the right reasons.
What could be more human than that?
| 2023-02-23T00:00:00 |
2023/02/23
|
https://writeful.ly/resources/blog/is-chatgpt-about-to-swoop-in-and-replace-you-writefully/
|
[
{
"date": "2023/02/23",
"position": 93,
"query": "ChatGPT employment impact"
}
] |
New study highlights crucial role of trade unions for ...
|
New study highlights crucial role of trade unions for algorithmic transparency and accountability in the world of work
|
https://algorithmwatch.org
|
[] |
Our report shows that trade unions are now called upon to focus on practical advice and guidance to empower union representatives and negotiators.
|
Our report shows that trade unions are now called upon to focus on practical advice and guidance to empower union representatives and negotiators to deal with the challenges that automation puts onto workers.
Worldwide, workers and their representatives are being confronted with algorithmically-driven automation in the workplace – through the introduction of specific automated procedures to manage the workforce, leading to new forms of workplace surveillance and possibly undermining workers’ rights. As our new report shows, trade unions are now called upon to focus on practical advice and guidance to empower union representatives and negotiators to deal with the challenges that automation puts onto workers.
The AlgorithmWatch report, commissioned by the International Trade Union Confederation (ITUC), presents the findings of a global mapping exercise aimed at identifying and sketching responses of trade unions. With the focus on transparency and accountability of algorithms in the world of work, the report includes exemplary insights from 26 select countries.
“Trade unions acknowledge that transparency and accountability can help against risks arising from automation in the workplace”, Dr Anne Mollen, Senior Policy and Advocacy Manager at AlgorithmWatch and lead author of the report, explains. “But mostly their activities still center around formulating ethical guidelines and principles and learning about the impact of automation in the world of work.”
According to the study’s authors, the next step now is to start creating specific hands-on tools to support worker representatives. First hands-on guides and legislative advances exist.
“Now is the time to foster more exchange between and support for unions to make automation work in the interest of employees”, says Tim Noonan, Director Campaigns and Communications at the ITUC. “Trade unions must move from reflection on principles to practical implementation and start addressing how workers and their representatives can meaningfully integrate their interest when algorithmic systems are being planned, developed and put into practice.” This includes building concrete systems and safety mechanisms that can be implemented at company level.
The report advises that an inter-union exchange of effective strategies could also be an important tool that until now has not been fully taken advantage of. As algorithmic management touches the core terms and conditions of employment, the importance of these issues cannot be overstated.
The research was conducted on behalf of the International Trade Union Confederation (ITUC). The ITUC represents 200 million workers in 163 countries and territories and has 332 national affiliates. As the world’s main global confederation of national union centres, its mandate is focused on the protection of workers’ rights, social and economic justice, anti-discrimination, Just Transition to a zero-carbon future, development, peace and democracy.
Read more on our policy & advocacy work on ADM in the workplace.
| 2023-02-23T00:00:00 |
https://algorithmwatch.org/en/study-trade-unions-algorithmic-transparency/
|
[
{
"date": "2023/02/23",
"position": 11,
"query": "AI labor union"
}
] |
|
The Living Legacy of the International Longshoremen's ...
|
The Living Legacy of the International Longshoremen's Association, Local 1422 — Google Arts & Culture
|
https://artsandculture.google.com
|
[] |
The International Longshoremen's Association (ILA), Local 1422 is a powerhouse labor union of Port of Charleston ...
|
Unfortunately, also like those other organizations at that time, LPUA eventually suffered as the economic success of the port declined and influence of the union waned. In the early 20th century, the Port of Charleston had recovered and Black dock workers organized once again
| 2023-02-23T00:00:00 |
https://artsandculture.google.com/story/the-living-legacy-of-the-international-longshoremen-39-s-association-local-1422-international-african-american-museum/8QXhbb_8eEZ_zw?hl=en
|
[
{
"date": "2023/02/23",
"position": 80,
"query": "AI labor union"
}
] |
|
Employee Resource Groups: A Critical ...
|
Employee Resource Groups: A Critical Resource for Unionized Workforces
|
https://www.shrm.org
|
[
"Harry I. Johnson Iii",
"Larry L. Turner"
] |
An organization run by AI is not a futuristic concept. Such technology is already a part of many workplaces and will continue to shape the labor market and HR.
|
Designed and delivered by HR experts to empower you with the knowledge and tools you need to drive lasting change in the workplace.
Demonstrate targeted competence and enhance credibility among peers and employers.
Gain a deeper understanding and develop critical skills.
| 2023-02-23T00:00:00 |
https://www.shrm.org/topics-tools/employment-law-compliance/employee-resource-groups-critical-resource-unionized-workforces
|
[
{
"date": "2023/02/23",
"position": 87,
"query": "AI labor union"
}
] |
|
Ghost Work - Rotterdam
|
Ghost Work
|
https://www.eur.nl
|
[] |
Artificial intelligence needs human labour to conduct tasks such as data cleaning, coding, and classifying content. This on-demand work is called ghost ...
|
About the project.
To ensure decent work conditions as automation continues to expand, knowledge about the effects of ghost work on well-being is urgently needed. The proposed project will develop and test an integrative framework for analysing the effects of ghost work on workers’ well-being. Existing models for analysing the impact of work conditions on well-being fall short for studying ghost work, as these models assume a person has a job and most likely an employer and colleagues. Therefore, this project begins from the specificities of ghost work to synthesize theories and concepts about algorithmic control, occupational well-being, human computation, and platform labour, in order to understand how and through which mechanisms ghost work influences well-being.
| 2023-02-23T00:00:00 |
https://www.eur.nl/en/research/research-groups-initiatives/platform-labor-group/ghost-work
|
[
{
"date": "2023/02/23",
"position": 94,
"query": "AI labor union"
}
] |
|
WiseLayer | AI-powered digital workers for finance and ...
|
AI-powered digital workers for finance and accounting teams
|
https://www.wiselayer.com
|
[] |
WiseLayer provides AI workers that automate complex tasks for finance and accounting, such as accruals, discrepancies, reconciliations, D&A, lease accounting, ...
|
COMPANY
WiseLayer’s team has built financial tools for hundreds of companies during the past decade, from growth-stage firms to the Fortune 500
We realized that in any AI or ML analytics model, the client cared most about the items that had measurable financial impact. Some issues were universal across all businesses (e.g. flagging expenses that were likely overcharged), whereas other issues were unique to certain industries. So we created WiseLayer to allow all companies to access the most powerful AI-driven alerts that can protect them from financial issues.
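The article does not describe how WiseLayer’s alerts actually work, but the general idea of “flagging expenses that were likely overcharged” can be sketched as a simple outlier check against a vendor’s historical charges. The snippet below is a minimal, illustrative assumption of that idea in Python, not the company’s real model; the threshold and the sample figures are invented.

```python
# Illustrative outlier check for possibly overcharged expenses.
# This is NOT WiseLayer's actual model: just a generic z-score flag
# against a vendor's historical charge amounts.
from statistics import mean, stdev

def flag_overcharges(history, new_charges, z_threshold=2.0):
    """Return new charges that sit well above the historical norm."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return [c for c in new_charges if c > mu]
    return [c for c in new_charges if (c - mu) / sigma > z_threshold]

past = [102.0, 98.5, 101.0, 99.0, 100.5]       # hypothetical monthly charges
print(flag_overcharges(past, [101.0, 148.0]))  # -> [148.0] is flagged
```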
| 2023-02-23T00:00:00 |
https://www.wiselayer.com/
|
[
{
"date": "2023/02/23",
"position": 5,
"query": "AI workers"
}
] |
|
The workers behind the 'AI magic' | New Airbnb regulation
|
New Airbnb regulation – Martijn Arets
|
https://www.martijnarets.com
|
[
"Martijn Arets"
] |
Scientist Veena Dubal gives an interesting (and shocking) insight into the algorithms used by Uber to direct workers to be available as much as possible at the ...
|
Good day! One minute you get a newsletter in your mailbox every week and the next you have to wait 3 weeks for it. As someone who loves surprises, I go for option 2 😉
In recent weeks, I again participated in many platform-related events in different roles, had the opportunity to contribute to great projects, and worked with professionals on almost every continent. How remarkable it is to be able to work location-independently, with tools that make operating in international teams so accessible. Although, of course, it’s not about the tools, but about how you collaborate and use them. Just as with platforms, (online) tools are facilitating, not leading. And should you think it’s the other way round for you, it might be worth having a frank conversation with yourself.
And although a lot is possible online and remote, it is also important to continue travelling (as responsibly as possible) and to speak to each other live. So that is the reason I ‘briefly’ took the train up and down to Munich in Germany this week for an interview with the person in charge of the Crowdsourcing Code: a code of conduct between 8 ‘crowdwork’ platforms in Germany. I did this for the WageIndicator’s new podcast: a podcast (and blog) on ‘global gig economy issues’. The first edition of this monthly podcast will go live in mid-April.
Enough introduction: for this edition I have again collected a number of relevant pieces for you and provided them with my interpretation and commentary. Enjoy the read and have a nice day!
The impact of algorithms and technology on the worker: the subject of part two of the European Platform Work Directive. For platforms offering ‘on demand’ jobs (taxi and delivery), the impact of the algorithm on finding, hiring and performing work is great and the worker is paid per job. Where it is often unclear exactly what the returns are.
In the early days of Uber, everyone was excited about the ‘surge pricing’ the company uses. If there is more demand than supply somewhere at a given time, prices rise. With this, demand goes down and supply goes up. At the time, many saw this as a perfect economic model of flexible pricing. The example of the hairdresser was often brought to mind: why do you pay the same for a haircut on Tuesday afternoon as on Friday evening, when demand is many times higher? Now it appears (and this did not happen overnight) that these ‘smart’ (or rather ‘savvy’) technologies are able to steer and entice working people into doing more than they initially envisaged.
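To make the basic mechanism concrete before turning to the research discussed below: a surge multiplier is, at its simplest, just a ratio of demand to supply clipped to some range. The sketch below is purely illustrative; Uber’s actual pricing logic is far more elaborate and is not public, so the cap and the numbers are invented.

```python
# Purely illustrative surge-pricing sketch, not Uber's real algorithm.
def surge_multiplier(ride_requests, available_drivers, cap=3.0):
    """Scale prices up when demand outstrips supply, capped at `cap`."""
    if available_drivers == 0:
        return cap
    ratio = ride_requests / available_drivers
    # No surge while supply meets demand; otherwise grow with the ratio.
    return min(max(1.0, ratio), cap)

base_fare = 12.50  # hypothetical base price for a trip
print(base_fare * surge_multiplier(ride_requests=90, available_drivers=30))
# -> 37.5: three times as many requests as drivers triples the fare (capped).
```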
In the article “The house always wins: the algorithmic gamblification of work“, scientist Veena Dubal gives an interesting (and shocking) insight into the algorithms used by Uber to direct workers to be available as much as possible at the platform’s convenience. This is also because the risk of not working is at the worker’s expense: something that, in my opinion, is a very bad idea anyway.
In this article:
“In a new article, I draw on a multi-year, first-of-its-kind ethnographic study of organizing on-demand workers to examine these dramatic changes in wage calculation, coordination, and distribution: the use of granular data to produce unpredictable, variable, and personalized pay. Rooted in worker on-the-job experiences, I construct a novel framework to understand the ascent of digitalized variable pay practices, or the transferal of price discrimination from the consumer to the labor context, what I identify as algorithmic wage discrimination. As a wage-setting technique, algorithmic wage discrimination encompasses not only digitalized payment for work completed, but critically, digitalized decisions to allocate work and judge worker behavior, which are significant determinants of firm control.
Though firms have relied upon performance-based variable pay for some time, my research in the on-demand ride hail industry suggests that algorithmic wage discrimination raises a new and distinctive set of concerns. In contrast to more traditional forms of variable pay like commissions, algorithmic wage discrimination arises from (and functions akin to) the practice of consumer price discrimination, in which individual consumers are charged as much as a firm determines they are willing to pay.
As a labor management practice, algorithmic wage discrimination allows firms to personalize and differentiate wages for workers in ways unknown to them, paying them to behave in ways that the firm desires, perhaps for as little as the system determines that they may be willing to accept. Given the information asymmetry between workers and the firm, companies can calculate the exact wage rates necessary to incentivize desired behaviors, while workers can only guess as to why they make what they do.”
You don’t have to be an activist to understand that such techniques are far from desirable. This piece includes the experiences of some drivers:
“Domingo, the longtime driver whose experience began this post, felt like over time, he was being tricked into working longer and longer, for less and less. As he saw it, Uber was not keeping its side of the bargain. He had worked hard to reach his quest and attain his $100 bonus, but he found that the algorithm was using that fact against him.”
I think it is important to let platforms take more responsibility in explaining their processes and having this validated by a trusted third party. The fact that platforms like Uber frame complexity as an added value for the worker is evident from this quote:
“If you joined Uber years ago, you will have joined when prices were quite simple. We set prices based on time and distance and then surge helped increase the price when demand was highest. Uber has come a long way since then, and we now have advanced technology that uses years of data and learning to find a competitive price for the time of day, location and distance of the trip.”
When someone takes pride in adding complexity, it should lead to suspicion by default. Because complexity can also be used to hide things. I wonder if upcoming European regulations will lead to less complexity and unclear processes for the worker. It should be a key issue for policymakers anyway.
Starting your own platform: where do you start and what is the route to take? That’s a question you can safely leave to the team at Sharetribe. Sharetribe offers a simple and straightforward tool for putting together your own ‘marketplace’ without any expertise in programming. I have known the company myself for about ten years, and even with the ‘build your own platform’ programme at the The Hague University of Applied Sciences, students with no prior knowledge easily built their own platform via Sharetribe.
Sharetribe invests a lot in content to help their clients successfully launch their own platform. They also have a stake in this: they only make money when their customers are successful. This has resulted in an impressive collection of valuable content. Last week, they added something new to this: an online video course:
This ten-step video course takes you through your marketplace journey all the way from idea to scaling your business. Each step is packed with the latest marketplace facts, actionable advice, and relevant case studies.
In an hour and a half, you will learn a nice foundation of the steps you need to go through to launch a successful platform.
It is almost inevitable that you have seen OpenAI’s insane tool ChatGPT pass by or possibly tried it yourself. An impressive chatbot that you can ask any question, only to get a comprehensive and detailed answer. Many sectors, including education, are anxiously considering what to do with such a tool. I have tried the tool myself and it is really impressive. For instance, before the workshop on platform economy and education, I asked it what the important topics are for education and the platform economy. A rather general and vague question. You can see the result in the image at the bottom of this piece.
ChatGPT is yet another development that, as with AI and algorithms, seems like a kind of magic black box. It almost seems like magic: everything happens by itself. But… is that really the case? Certainly not. Both in training and execution, there are always loose ends. For instance, many tech companies use platforms like Amazon Mechanical Turk: a platform where people all over the world (and especially in parts of the world where people can get by on very little income and have few alternatives) perform mini jobs of a few seconds via a platform: so-called ‘clickwork’. This involves recognising images, but also resolving the loose ends of seemingly automatic systems. Content moderation for platforms like Facebook is also designed according to these principles. Not always a pure platform, but similar principles.
Mary Gray and Siddharth Suri wrote a fascinating book on so-called clickwork: “Ghost Work – How to Stop Silicon Valley from Building a New Global Underclass“. Tim O’Reilly wrote the following about this book: “The Wachowskis got it wrong. Humans aren’t batteries for The Matrix, we are computer chips. In this fascinating book, Gray and Suri show us just how integral human online task workers are to the development of AI and the seamless operation of all the great internet services. Essential reading for anyone who wants to understand our technology-infused future.” Highly recommended.
A long run-up to the piece I want to discuss today: “OpenAI Used Kenyan Workers on Less Than $2 Per Hour to Make ChatGPT Less Toxic.” It describes how OpenAI managed to ‘magically’ make ChatGPT better and safer:
“To build that safety system, OpenAI took a leaf out of the playbook of social media companies like Facebook, who had already shown it was possible to build AIs that could detect toxic language like hate speech to help remove it from their platforms. The premise was simple: feed an AI with labeled examples of violence, hate speech, and sexual abuse, and that tool could learn to detect those forms of toxicity in the wild. That detector would be built into ChatGPT to check whether it was echoing the toxicity of its training data, and filter it out before it ever reached the user. It could also help scrub toxic text from the training datasets of future AI models.
To get those labels, OpenAI sent tens of thousands of snippets of text to an outsourcing firm in Kenya, beginning in November 2021. Much of that text appeared to have been pulled from the darkest recesses of the internet. Some of it described situations in graphic detail like child sexual abuse, bestiality, murder, suicide, torture, self harm, and incest.”
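The first paragraph of that quotation describes a standard supervised text-classification loop: collect labelled examples of harmful text, train a classifier on them, and then use it to screen output before it reaches the user. As a minimal, purely illustrative sketch of that generic idea (not OpenAI’s actual system, which is far larger and proprietary), the same pattern can be expressed with scikit-learn; the tiny inline dataset is invented for demonstration.

```python
# Minimal sketch of a label-and-filter safety classifier.
# NOT OpenAI's system: just the generic idea of training on labelled
# examples and using the resulting model to screen generated text.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labelled examples (1 = harmful, 0 = benign).
texts = [
    "I will hurt you",        # harmful (toy example)
    "have a wonderful day",   # benign
    "you deserve to suffer",  # harmful (toy example)
    "thanks for your help",   # benign
]
labels = [1, 0, 1, 0]

# Bag-of-words features plus logistic regression stand in for the
# far larger models used in practice.
classifier = make_pipeline(TfidfVectorizer(), LogisticRegression())
classifier.fit(texts, labels)

def filter_output(candidate, threshold=0.5):
    """Return the text if it looks safe, otherwise drop it."""
    p_harmful = classifier.predict_proba([candidate])[0][1]
    return None if p_harmful >= threshold else candidate

print(filter_output("have a great trip"))
```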
OpenAI considers this work very important: “Classifying and filtering harmful [text and images] is a necessary step in minimising the amount of violent and sexual content included in training data and creating tools that can detect harmful content.” The article’s authors are rightly critical after their research: “But the working conditions of data labelers reveal a darker part of that picture: that for all its glamour, AI often relies on hidden human labor in the Global South that can often be damaging and exploitative. These invisible workers remain on the margins even as their work contributes to billion-dollar industries.”
ChatGPT remains an impressive tool, but surely the magic is a lot less (clean) than the tech optimists try to make us believe. Andrew Strait describes it powerfully in the piece: “They’re impressive, but ChatGPT and other generative models are not magic – they rely on massive supply chains of human labour and scraped data, much of which is unattributed and used without consent”
What can we learn from this?
While the insights from this story are interesting on their own, I think it’s important to look further. What can we learn from this case study?
For one thing, it shows that these kinds of tools feast on the work of others: scraping content created by others and relying on low-paid moderators and workers. It is the bright minds who devise and build these systems that walk away with the credit and the money, but I think it is important to recognise that this content does not fall from the sky, and that a discussion about how fair and desirable this is may well be necessary.
I would also like to broaden that discussion a bit. I regularly speak to very committed scientists with a clear opinion about what is ‘fair’ who meanwhile use Amazon Mechanical Turk for their research. I understand that this is incredibly convenient, but then of course you can hardly claim the moral high ground. A good conversation about fair treatment and remuneration of everyone in the chain is something that is missing from many innovations. A conversation that, as far as I am concerned, could be had more often. People who perform clickwork are treated as a kind of ‘disposable labour’. The moment they are no longer needed, no one will care. And because of this, it is only right that the authors of Ghost Work and of this article point out the facts to us.
Airbnb and regulation: it is an issue that has been around for quite a few years. Earlier, national regulations were introduced in the Netherlands and now this is being extended to European regulations. I think a good step for everyone.
The regulations will also be accompanied by a European tool: “The Commission is coming up with a single European data tool for exchanging information on holiday rentals between platforms and local authorities. Platforms will now have to share, in places where rules apply, data every month on how many nights a house or flat has been rented out and to how many people.”
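Purely as an illustration of what such a monthly exchange record could contain (the article does not publish a schema, so every field name below is hypothetical):

```python
# Hypothetical monthly reporting record for a short-term rental listing.
# The field names are invented for illustration; the EU tool's actual
# schema is not described in the article.
from dataclasses import dataclass

@dataclass
class MonthlyRentalReport:
    listing_id: str     # platform's identifier for the house or flat
    municipality: str   # local authority the data is shared with
    year: int
    month: int
    nights_rented: int  # how many nights the unit was rented out
    guests_hosted: int  # to how many people

report = MonthlyRentalReport(
    listing_id="NL-AMS-000123", municipality="Amsterdam",
    year=2023, month=1, nights_rented=14, guests_hosted=31,
)
print(report)
```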
Ultimately, each city will continue to set its own holiday rental rules. That too is a good step, although Amsterdam’s case study teaches us that it is not as simple as it seems.
It would also be good to secure the knowledge and research in a central location alongside this tool, so that not every city has to reinvent the wheel itself. Furthermore, I am (very) curious about the implementation of this. Connecting the platform and the central European tool: that’s probably fine. But what about the translation to the individual systems of the municipalities (a link with the land registry in the Netherlands, for instance, seems light years away, and we are a country in the digital vanguard…) and what are the privacy risks involved? I will keep following this.
What impact does the platform economy have on people, organisations and society? My fascination with this phenomenon started in 2012. Since then, I have been seeking answers by engaging in conversation with all stakeholders involved, conducting research and participating in the public debate. I always do so out of wonder, curiosity and my independent role as a professional outsider.
I share my insights through my Dutch and English newsletters, presentations and contributions in (international) media and academic literature. I also wrote several books on the topic and am founder of GigCV, a new standard to give platform workers access to their own data. Besides all my own projects and explorations, I am also a member of the ‘gig team’ of the WageIndicator Foundation and am part of the knowledge group of the Platform Economy research group at The Hague University of Applied Sciences.
Need inspiration and advice or research on issues surrounding the platform economy? Or looking for a speaker on the platform economy for an online or offline event? Feel free to contact me via a reply to this newsletter, via email ([email protected]) or phone (0031650244596).
Also visit my YouTube channel with over 300 interviews about the platform economy and my personal website where I regularly share blogs about the platform economy. Interested in my photos? Then check out my photo page.
| 2023-02-23T00:00:00 |
2023/02/23
|
https://www.martijnarets.com/en/newsletter/algorithmic_gamblification_of_work/
|
[
{
"date": "2023/02/23",
"position": 25,
"query": "AI workers"
}
] |
37% of white-collar workers consider AI 'a threat to jobs'
|
37% of white-collar workers consider AI ‘a threat to jobs’_Latest News-Shenzhen Government Online
|
http://www.sz.gov.cn
|
[] |
Some 37.4% of the white-collar workers in Shenzhen believe that their jobs have been partially or are most likely to be replaced by artificial intelligence (AI) ...
|
Some 37.4% of the white-collar workers in Shenzhen believe that their jobs have been partially or are most likely to be replaced by artificial intelligence (AI) tools such as ChatGPT, DT News reported yesterday, quoting a report released by recruitment platform zhaopin.com.
When asked how they are going to deal with the impact of digital tools, 48% of the white-collar workers surveyed said they would “learn digital skills and make AI work for me,” while nearly 83% of the respondents said they are willing to cope with the impact through learning, according to the spring survey on job-hopping in Shenzhen.
Spring is traditionally the peak season for job-hopping in China, as workers explore new job possibilities after the Spring Festival holiday.
As businesses continue to revive following China’s optimization of its COVID-19 control policies, people are more active in seeking new opportunities in the job market. The report showed that the career confidence index of white-collar workers in Shenzhen has risen to 3.83, compared with 3.71 in spring last year, ranking seventh in China. Nearly 70% of the white-collar workers in Shenzhen said they are very confident or confident about their careers this year, up from 61.6% a year earlier.
The report said 61.3% of Shenzhen’s white-collar workers “have updated their resumes and are in the process of looking for new job opportunities.” Last year, the figure stood at 56.8%.
Salary was still the primary factor for white-collar workers to change jobs, with 57.5% of them indicating dissatisfaction with their current salary packages. Limited scope for personal advancement, their employer’s poor prospects and unsatisfactory welfare benefits were also key concerns that caused white-collar workers to quit their jobs and seek new opportunities.
So far this year, the proportion of white-collar workers in Shenzhen who switched jobs due to “superior-subordinate relationship” or “collegial relationship” was 19.1%, down from 25.9% last year, reflecting the fact that interpersonal communication had less impact on white-collar workers’ job switches.
| 2023-02-23T00:00:00 |
http://www.sz.gov.cn/en_szgov/news/latest/content/post_10441564.html
|
[
{
"date": "2023/02/23",
"position": 32,
"query": "AI workers"
}
] |