Dataset schema (field: type, observed lengths):
title_s: string (2–79 chars)
title_dl: string (0–200 chars)
source_url: string (13–64 chars)
authors: list (0–10 items)
snippet_s: string (0–291 chars)
text: string (21–100k chars)
date: timestamp[ns] (1926-02-14 to 2030-07-14)
publish_date_dl: string (0–10 chars)
url: string (15–590 chars)
matches: list (1–278 items)
The Future of Work: How to Offset AI Job Automation - Mondo
The Future of Work: How to Offset AI Job Automation
https://mondo.com
[ "Mondo Insights" ]
Embrace new AI technology · Invest in AI training and re-skilling · Embrace a growth mindset to offset AI job automation · Prioritize soft skills to offset AI job ...
Technological advancements throughout history have led to massive disruptions in the workforce before. Think of the invention of the printing press, the harnessing of electricity, the airplane, and the Internet. Each was heralded as the end of specific trades and professions as we knew them, yet as a society we adapted, survived, and eventually thrived. Today, the new perceived “threat” is Artificial Intelligence, or AI as we know it best. But this advancement doesn’t have to carry a negative connotation. Offsetting AI job automation is possible. Here’s how. 5 ways to offset AI job automation Consider strategies like embracing new AI technology, investing in AI training, and prioritizing soft skills to ensure you and your workers remain employable in the age of AI. 1. Embrace new AI technology Before you assume that AI job automation won’t impact your industry, think again. Computers are already starting to outperform humans in complex fields like medicine, law, accounting, journalism, cybersecurity, and even aviation, and we’re likely just scratching the surface of what’s possible. The employees you value will be affected by the continued enhancement of AI capabilities. 2. Invest in AI training and re-skilling First up is investing in new retraining and “up-skilling” programs. Investment in these programs will likely fall to governments and to companies themselves. While some job loss will be unavoidable, given McKinsey’s estimate that half of the tasks workers perform today can be automated, it’s crucial to think of this as the loss of specific tasks rather than of the jobs themselves. Moving employees away from tedious tasks that computers can complete more effectively, and toward tasks that require innately human traits or that AI has yet to master, will improve the efficiency of your business while enhancing the results of the higher-level work AI currently cannot do. To enable employees to transition away from the routine daily tasks they were trained in, you’ll need an effective training program, and surprisingly, it will likely take the form of AI solutions: retraining approaches like AI-driven Coaching Networks let workers learn what they need through AI augmentation without sending them back to school. 3. Embrace a growth mindset to offset AI job automation Another way to offset AI job automation will come from employees transforming the way they perceive their skills. Rather than treating their skills as a static base built through traditional education and on-the-job experience, workers will need to view them as a portfolio requiring constant refinement. In the AI-driven age we are entering, workers will be expected to be continuous learners in order to meet new workforce demands. While companies will begin to invest in retraining or “up-skill” programs, workers who proactively expand and evolve their skills alongside the advancement of AI capabilities will be the most in-demand talent in the years to come. Proactive learning is likely to become one of the most sought-after skills companies comb resumes for in the age of AI and machine learning. 4. Prioritize soft skills to offset AI job automation While there is no denying that the advancement of AI means some tasks will no longer need a human to complete them, it also means there are innately human traits and skills that AI will never be able to replicate.
Leaning into and nurturing these skills in your employees will make them more valuable to your business. Creativity, abstract thinking, the ability to thrive in diverse environments, communication, networking, social and emotional intelligence, intercultural sensitivity, public relations, and elements of Digital Marketing are inimitable by a machine. Despite countless AI advancements, these are the traits that separate humans from machines. To offset AI job automation and replacement in your company, and around the world, it’s critical to recognize the value of these skill sets and the need to hire for them and invest in developing them among your employees. 5. Continue to create new jobs to offset AI job automation Last but not least, the core method of offsetting AI job automation will be the creation of new jobs and industries. The reason it’s so easy to go the doomsday route with AI advancement is that envisioning an industry or career that has never existed is incredibly difficult. It’s like trying to imagine what an alien might look like, which is why Hollywood so often presents beings that look much like us: when you have never been exposed to something, it’s impossible to imagine it outside the context of what you’ve already seen. When it comes to the new industries created by a newfound reliance on AI, the options are limitless, yet pinning them down is challenging. Obviously, we’ll need AI Specialists and AI Ethicists. We’ll see the rise of an industry dedicated solely to the maintenance of AI systems, robots, and products, which could generate millions of jobs. We’ll also see the rise of industries that have yet to be imagined. Just as people in the early 1800s could never have dreamed up the aviation industry, we are in the midst of the invention of industries we can’t yet picture. Will AI take over the world? To be sure, there is a lag between the arrival of a technology that automates certain jobs and the emergence of new jobs for those it affects. But by planning for the inevitable automation of elements of the jobs we all hold today, and by changing the way we think about AI integration, we can better prepare to offset AI job automation. Those held most responsible will be the innovative Tech companies that develop and implement these advancements in the first place. Ensure your business plans for the current and future state of the local, national, and global workforce by implementing a few forward-thinking strategies to offset AI job automation. The truth about AI job automation Regardless of our efforts, we are entering a transitional phase of AI job automation. Our best bet to offset it, at the local, national, and global level, is early action by governments, companies, and individual workers: investing in new skills-building and training programs, abandoning the notion that employee skill sets are evergreen, nurturing and more highly valuing workers’ innate humanness, and imagining and creating the new industries born of our growing reliance on AI. If you lack the tech-driven experts you need to integrate and maintain new AI solutions that augment the productivity of your workforce, contact Mondo today. We’ll provide you with the machine learning and AI-driven Tech talent you need to stay ahead of the curve. Looking to hire top-tier Tech, Digital Marketing, or Creative Talent? We can help.
Every year, Mondo helps to fill over 2,000 open positions nationwide.
2022-12-01T00:00:00
https://mondo.com/insights/how-to-offset-ai-job-automation/
[ { "date": "2022/12/01", "position": 98, "query": "AI job creation vs elimination" }, { "date": "2023/02/01", "position": 96, "query": "AI job creation vs elimination" }, { "date": "2023/04/01", "position": 64, "query": "automation job displacement" }, { "date": "2023/09/01", "position": 98, "query": "AI job creation vs elimination" }, { "date": "2023/09/01", "position": 65, "query": "automation job displacement" }, { "date": "2023/10/01", "position": 66, "query": "automation job displacement" }, { "date": "2023/12/01", "position": 68, "query": "automation job displacement" }, { "date": "2024/01/01", "position": 97, "query": "AI job creation vs elimination" }, { "date": "2024/02/01", "position": 82, "query": "AI job creation vs elimination" }, { "date": "2024/02/01", "position": 62, "query": "automation job displacement" }, { "date": "2024/03/01", "position": 98, "query": "AI job creation vs elimination" }, { "date": "2024/03/01", "position": 64, "query": "automation job displacement" }, { "date": "2024/04/01", "position": 64, "query": "automation job displacement" }, { "date": "2024/05/01", "position": 97, "query": "AI job creation vs elimination" }, { "date": "2024/05/01", "position": 64, "query": "automation job displacement" }, { "date": "2024/06/01", "position": 62, "query": "automation job displacement" }, { "date": "2024/07/01", "position": 70, "query": "automation job displacement" }, { "date": "2024/08/01", "position": 98, "query": "AI job creation vs elimination" }, { "date": "2024/08/01", "position": 66, "query": "automation job displacement" }, { "date": "2024/10/01", "position": 66, "query": "automation job displacement" }, { "date": "2024/12/01", "position": 99, "query": "AI job creation vs elimination" }, { "date": "2024/12/01", "position": 65, "query": "automation job displacement" }, { "date": "2025/01/01", "position": 95, "query": "AI job creation vs elimination" }, { "date": "2025/01/01", "position": 62, "query": "automation job displacement" }, { "date": "2025/02/01", "position": 68, "query": "automation job displacement" } ]
Closing the quantum workforce gap: Lessons from AI
Five lessons from AI on closing quantum’s talent gap—before it’s too late
https://www.mckinsey.com
[ "Niko Mohr", "Kiera Peltz", "Rodney Zemmel", "Matija Zesko" ]
Lessons from AI, including talent development strategies and technology literacy programming, can help leaders and organizations close the quantum workforce ...
When AI emerged from its long winter and sprang onto business agendas in the 2010s, a scarcity of data science talent put considerable constraints on how and where business leaders could apply the technology. While AI talent challenges remain, strides have been made and many lessons have been learned that can be applied to tech talent strategies overall. An arguably wider talent gap in quantum technology threatens to stall progress on breakthrough quantum use cases, jeopardizing the creation of a massive amount of business value. Quantum computing alone, which represents the largest market potential of the three main areas of quantum technology (the other two being quantum sensing and quantum communications), could account for nearly $700 billion in value. Our research finds that there is only one qualified quantum candidate available for every three quantum job openings (Exhibit 1). By 2025, we predict that fewer than 50 percent of quantum computing jobs will be filled unless significant interventions occur. Quantum start-ups and established tech companies (especially those in the hardware market, where more than half of today’s quantum investments are concentrated) have been the first to wrestle with this talent crunch as they race to solve fundamental questions in the field and deliver fault-tolerant quantum systems, which are necessary to unlock the technology’s full potential. It’s still early days, with many unknowns, but the technology is rapidly progressing. With this progress, the demand for quantum talent is shifting, first to software companies and then to the enterprises that will use the technology. Leaders across industries are already beginning to assemble quantum teams and test early-stage algorithms on the current class of quantum systems. This includes, for example, exploring how quantum algorithms can improve encryption protocols in financial services, optimize routes and fleets in logistics, and improve clinical-trial site selection in pharmaceuticals. As we saw with AI, the growth of graduate programs in the field will be one necessary development for ensuring a robust talent pipeline. Quantum graduate programs are still not widely available or accessible at the university level: of the 176 quantum research programs at universities worldwide today, only 29 provide graduate-level degrees in the subject. But beyond this, many of the same strategies and investments that companies have used to successfully build their AI teams, such as upskilling workers and creating pathways for new talent, can also serve them well when building quantum teams. Based on data and insights from our research and work in the field, along with the experience of The Coding School’s quantum computing education initiative, Qubit by Qubit, there are five lessons from the AI talent journey that can help organizations build the quantum talent they need so they’re ready to capture value as the technology comes of age. Lesson 1: Define your talent needs clearly In the early days of AI, some organizations hired data scientists without a clear understanding of what skills they actually needed. As a result, many struggled to realize a return on their investment. For example, one large financial services firm’s exuberance for AI led it to hire nearly 1,000 data scientists.
After the organization did not realize the outcomes it expected, leaders discovered that many of the new hires were not data scientists by strict definition and that just 100 true data scientists placed in the right roles would have been sufficient. In the case of quantum, talent needs are still evolving, but once organizations are ready to assemble their teams, most will likely need to build out their stable of quantum business and strategy experts along with technical talent in two areas: quantum software engineering and quantum hardware engineering (Exhibit 2): Quantum software engineering. Organizations will require quantum software engineers who can build new quantum algorithms, as well as those who can improve existing quantum algorithms or conduct exploratory research on potential quantum applications. Some positions, such as those responsible for developing quantum algorithms from the ground up, will require PhDs. Others, however, can be filled through upskilling with general quantum training. For instance, systems architects, AI and machine learning algorithm developers, and others with traditional coding and AI skills can be upskilled in as little as six months to code and improve quantum algorithms using tools such as Cirq, Qiskit, and Q# (a minimal sketch of this kind of code follows this article) and can then begin exploring how quantum could provide an advantage in their industry. Quantum hardware engineering. Companies will need experts overseeing systems based on entirely different physical processes than those of conventional computers. As companies form partnerships with quantum vendors to run their applications on quantum computers, they’ll need quantum hardware experts who can configure and manage these systems, just as cloud engineers do today. This work will likely require a PhD until quantum systems become plug and play. Over the long term, we anticipate that these roles will be filled by hardware experts with a foundational understanding of quantum concepts (such as qubits [quantum bits], algorithms, and encryption) who have completed coursework or upskilling in role-specific topics (such as the assembly and testing of quantum computers). In defining talent needs, organizations should first identify the possible fields of application that the quantum computing team will work on. At one large global pharmaceutical company, the CTO worked closely with colleagues in research and development and IT to assess areas where they were already pushing classical computing methods, such as machine learning and high-performance computing, to their limits. They agreed to focus the quantum computing team on computational tasks in drug discovery, with the ultimate goal of finding drugs that bind better to their targets, long considered the “holy grail” of the pharmaceutical industry. The newly formed task force, which included HR experts from around the world, created a list of the types of expertise and backgrounds required to successfully build quantum algorithms from the ground up in this area. The list called for five team members from diverse backgrounds, ideally bringing quantum chemistry and quantum computing expertise along with a mixture of experience in designing quantum algorithms, identifying viable use cases, and securing venture capital funding (the last of which provides a view into the evolving world of quantum partnerships).
The list also included a combination of academic backgrounds—useful for building scientific connections with universities—and applied experiences, such as experts from quantum computing companies involved in creating the technology. Over the next nine months, the pharmaceutical company conducted targeted outreach through job postings, partnerships, active sourcing, and networking. The organization empowered the first two hires to identify and recruit the remaining team members. Today, this multidisciplinary team has a pipeline of use cases to explore as it works to understand how quantum computing can make a difference in the pharmaceutical industry. Lesson 2: Invest in translators early As excitement for AI grew, the role of analytics translators emerged as a key to helping leaders first identify and prioritize the business challenges best suited for AI to solve and later facilitate the development and adoption of these AI solutions. In the field of quantum, we see a need for translators with engineering, application, and scientific backgrounds who can help organizations make sense of a rapidly expanding ecosystem of opportunities and players. While upskilling quantum talent with domain knowledge would be ideal for this role, given that quantum talent is scarce, companies can also look to upskill adjacent roles, such as computer engineers, application developers, and chemical researchers, with general quantum training. One immediate area of focus for quantum translators is helping their companies stay attuned to industry developments and identify how and when to jump in. Quantum computing’s potential to process exponentially more data and assess more potential solutions faster than any system available today could enable game-changing advances across industries—from dramatically reducing drug discovery timelines and improving supply chain efficiency to sparking the development of breakthrough products, such as green fertilizers for the agriculture industry. And as with AI, we anticipate a clear first-mover advantage for those prepared to capture this potential. Translators can help businesses answer important questions such as the following: What risks and opportunities will we face as quantum technology matures? Which use cases will deliver the highest value? What investments are necessary today so we don’t fall behind tomorrow? Another area where translators can add tremendous value today is in guiding their organizations on strategic partnerships with quantum software and hardware companies. On the software side, this includes identifying and negotiating partnerships with organizations doing research on quantum applications in their industry so that their business problems can be included in test cases. On the hardware side, translators will be needed to identify which quantum technologies are best suited for any early-stage algorithms that experts in their organizations build. Different platform approaches, such as using trapped ions versus ultra-cold atoms for qubits, can result in varying levels of technical noise in quantum components that affect algorithmic speed, connectivity, and outcomes. In the near term, organizations will need to pay close attention to such differences or else they may realize suboptimal outcomes from their efforts. In the longer term, it is expected that fully fault-tolerant quantum computers that automatically correct for these errors will be developed. 
We find that engineers, application developers, and those in adjacent fields, such as statistics, computer science, and chemistry, can obtain the foundational knowledge needed to be a quantum translator in just three to six months. A 26-week introductory course on quantum computing, for example, gave one computer engineer in healthcare IT the foundational knowledge he needed to develop a strategic presentation and road map on how the sector could benefit from this emerging technology. He delivered his presentation at a developers’ conference, where the managing director of a healthcare IT provider took note and hired him. He now aspires to advise companies on how quantum technology can support their business goals. Lesson 3: Create pathways for a diverse talent pipeline As AI applications increasingly became part of everyday life, it became painfully clear that organizations needed to do more to attract individuals from underrepresented groups to AI’s foundational fields of computer science, math, and statistics. AI models (and the data feeding into them) caused numerous instances of unintentional bias and discrimination (for instance, the exclusion of underrepresented populations from hiring considerations by AI-driven recruiting systems) that were due, at least in part, to a lack of diverse perspectives and experiences around the table as the AI models were built and tested. It’s too early to know what risks will emerge from quantum technologies, but we expect similar challenges if we don’t build a diverse quantum workforce. This will require efforts to bolster diverse representation in quantum-adjacent fields, including not only computer science, math, and statistics but also physics, materials science, and chemistry. Such efforts are needed at both the university level and in K–12, where early intervention has been shown to have a significant effect on the talent pipeline. Qubit by Qubit, for instance, saw a 32 percent increase in middle schoolers interested in pursuing a career in quantum computing after running a one-week virtual summer camp. The organization also ran a four-week summer school program, which enrolled a diverse population of high school and undergraduate students, with 48 percent of participants identifying as female or nonbinary, 20 percent of students identifying as Hispanic, and 17 percent identifying as Black. Following the program, there was an 86 percent increase in students’ interest in pursuing a career in quantum computing and a 74 percent increase in students’ interest in taking additional courses in quantum computing. When high school students were asked what excited them about quantum computing, one of the most popular responses was that it has the potential to affect other fields they are interested in, suggesting an opportunity for leaders to engage with students who want to make a difference in their industry. IBM, for example, has collaborated with Qubit by Qubit over the past three years on an introduction to quantum computing course, which covers the foundations and applications of quantum computing. More than 6,000 high schoolers have participated in the course to date, and the majority say they are interested in pursuing a career in a quantum information science and engineering field as a result. Additionally, some of these students have since entered college and are now participating in internships at IBM and other organizations.
While such efforts are making a dent, ultimately, we envision school systems that provide all students with exposure to quantum technologies through mandatory lessons, units, or courses; currently, even when extracurricular courses are offered, students are often intimidated by the complexity of the subject and may not think they’re qualified to participate. For example, in a post-session interview with students who successfully completed Qubit by Qubit’s introductory course, one student shared how nervous he was when first applying for the program because he thought this area of study was meant for “geniuses.” Another wrote that, prior to the course, he believed that the topic of quantum computing was “something that I will never understand.” Following the course, their perspectives changed, and they saw quantum as an area of study that they could succeed in. Some US states are moving to incorporate quantum learning in their schools. Ohio, for example, is the first US state to add quantum training to its K–12 science, technology, engineering, and math (STEM) standards; the Cleveland Metropolitan School District, in collaboration with Qubit by Qubit, is now training K–12 teachers and university faculty at Cleveland State University and offering students an introduction to quantum computing course for credit. Business leaders can influence this shift significantly, given that schools often see corporate involvement as a sign that this will be an important field, even if they don’t yet know what quantum is. (For more on what it takes to build a diverse quantum workforce, see sidebar, “Building a diverse quantum workforce: An interview with IBM Quantum’s Bradley Holt.”) Building a diverse quantum workforce: An interview with IBM Quantum’s Bradley Holt Bradley Holt has spent the last decade of his career helping technology software developers hone their skills and learn about new technologies. Today, as the program director for IBM Quantum’s workforce advocacy efforts, he’s helping individuals prepare for careers in the field of quantum and helping the industry build a quantum workforce. We recently spoke with Bradley to discuss what it takes to build a diverse quantum workforce and why he feels all leaders should start now. An edited transcript of our conversation follows. McKinsey: With so few professionals out there today with quantum degrees, how are you going about developing the quantum talent IBM needs? Bradley Holt: Over the past five years, we’ve made millions of dollars in investments in quantum workforce and education efforts, from high school to college age and beyond, training professionals—and future professionals—in the field. We’re trying to catch people where they are in their learning journey. Through our main summer internship program, for example, we’ve trained over 350 college interns at all levels of higher education—undergraduate, master, and PhD students—and across all dimensions, including software, hardware, and research. These students are almost entirely from adjacent fields of study. If it’s an engineering internship we need to fill, we might look for someone with an electrical-engineering background, or if it’s a software role, we might look for someone with a computer science or physics background and upskill them. Many of our internship students have gone on to work at IBM Quantum in full-time roles or elsewhere in the field of quantum after graduation. 
Additionally, a number of students who have participated in a quantum undergraduate research internship program that we run with Princeton have gone on to publish research papers and win awards for their work. Students from any institution can participate in the program, which combines six weeks of academic research in quantum science and engineering at Princeton with six weeks of translational research in quantum computing here at IBM. McKinsey: Talent diversity has been a chronic concern across the tech industry. What steps are you taking to find and create a diverse quantum workforce? Bradley Holt: As part of our recruiting, we focus our efforts on a number of diversity initiatives such as the Grace Hopper Celebration, which connects women and nonbinary technologists, the National Society of Black Engineers, the Society of Hispanic Professional Engineers, and other events. It’s really the way we can ensure we’re going out and finding that great talent. About one-third of our candidate pool for the internships is sourced through diversity initiatives focused on women and underrepresented minorities. We also find that getting quantum education into K–12 schools can have a huge impact on increasing the diversity of quantum talent, especially if we can prioritize schools in underserved populations. We have a number of initiatives K–12 students can participate in, such as quantum challenges, hackathons, and summer schools, including an introduction to quantum computing program in collaboration with Qubit by Qubit. We offer scholarships for these programs to students with financial needs and those from traditionally underrepresented backgrounds in STEM [science, technology, engineering, and math]. And we also have an educator program to provide educators with resources and tools to use in their classrooms. McKinsey: Given that the technology is still maturing, what advice would you give to leaders about building out their quantum teams? Bradley Holt: I’d say start now, even if it’s something small. Get a small internal team that can investigate use cases for your industry and business. And begin reskilling and upskilling your workforce. For example, your research scientists can start learning how to work with bits, neurons, and qubits in the form of quantum, AI, and hybrid clouds to accelerate the rate of scientific discovery in the future. Applications researchers can start exploring how quantum can provide an advantage in their industries and use cases. Security researchers can begin to prepare for cybersecurity in the quantum computing era through quantum-safe cryptography. So those are all specific skill sets and the things that they can do today to start getting ready. Even business leaders should start working to understand the fundamentals of quantum so they can better understand the impact of the technology on the world and on their sectors. This field is advancing quickly, and as the field progresses, companies will need to advance their workforce just as quickly—otherwise, they won’t be able to recruit the workforce they need and risk being left behind as their competitors innovate on potential commercial uses. Lesson 4: Build technology literacy for all As AI came of age, organizations found that everyone across the business needed a basic understanding of how the technology worked and what it could do. Without this knowledge, decision makers were often skeptical of the technology and its outputs, and innovation suffered. The same is true for quantum technology.
Business leaders will need a foundational understanding of the technology to deftly steer their companies and investments in the quantum era. Likewise, workers in supply chain, marketing, IT infrastructure, finance, and other core domains and functions will require basic fluency in quantum topics to engage with quantum experts as they work to solve complex business challenges. Given how quickly the technology is evolving, such efforts will likely require that organizations combine targeted capability building—such as workshops to ground business leaders in the possibilities and realities of this emerging technology—and ongoing knowledge sharing. For instance, the previously mentioned pharmaceutical company’s quantum lab conducts “knowledge hours” where team members share their work with business and IT teams and answer questions about the technology. This effort gave rise to a small academy, led by inspired quantum enthusiasts outside the lab, that teaches quantum computing foundations to interested employees, enabling the organization to keep the fire burning. The lab also sends regular team communications about its work to leaders to keep them informed and connect the work to the company’s purpose. Finally, the team ensures that every scientific study includes an abstract, introduction, and conclusion that can be easily understood by any layperson and connects the work to the company’s purpose and real-world application. Lesson 5: Don’t forget talent development strategies Often, as new technologies arrive on the scene, companies focus heavily on talent attraction in a scramble to stitch together the new skills and knowledge they need. As we saw with AI, getting experts in the door is only one piece of the tech talent puzzle. Companies should also carve out clear paths for talent development to ensure technical experts stay with the organization and keep their skills current. To retain highly specialized experts, the same pharmaceutical company leans into both the purpose of its work—developing real use cases that will save lives—and the freedom it offers its team. This includes the option to work from anywhere in the world and the freedom to choose what questions they work on as they explore how quantum technologies can make a difference in human and animal health. The researchers are encouraged to publish their findings, take part in conferences and workshops to keep building their skills, and actively build collaborations with world-leading academic groups and companies. Dialogue with other researchers is a key strategy to stay up to date with the latest developments and leads to new ideas and approaches. Developing a quantum workforce is a multifaceted issue. In the near term, leaders can overcome some of the current talent gap by upskilling workers in related disciplines. Over the long term, corporate investments that create pathways for diverse talent will be crucial to ensure a robust talent pipeline is available as the quantum era goes into full swing. But as we discovered through AI, now is the time to prepare, and those who wait risk falling behind.
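For readers who want a concrete picture of the upskilling entry point described in Lesson 1, here is a minimal sketch of what a first exercise in one of the tools the article names (Qiskit) might look like. This example is not from the article; it assumes a standard Qiskit installation (pip install qiskit) and simply prepares and inspects a two-qubit Bell state on a classical simulator:

```python
# A first exercise for an upskilled developer: build a Bell state and
# inspect it classically (no quantum hardware required).
from qiskit import QuantumCircuit
from qiskit.quantum_info import Statevector

qc = QuantumCircuit(2)
qc.h(0)      # put qubit 0 into an equal superposition
qc.cx(0, 1)  # entangle qubit 1 with qubit 0

state = Statevector.from_instruction(qc)
print(state.probabilities_dict())  # expected: {'00': 0.5, '11': 0.5}
```

Cirq and Q# expose directly analogous gate-level primitives, so the choice among the three tools matters less than getting started.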
2022-12-01T00:00:00
https://www.mckinsey.com/capabilities/mckinsey-digital/our-insights/five-lessons-from-ai-on-closing-quantums-talent-gap-before-its-too-late
[ { "date": "2022/12/01", "position": 1, "query": "AI skills gap" }, { "date": "2022/12/01", "position": 13, "query": "workplace AI adoption" }, { "date": "2022/12/01", "position": 9, "query": "machine learning workforce" } ]
Explainable AI Helps Bridge the AI Skills Gap
Explainable AI Helps Bridge the AI Skills Gap: Evidence from a Large Bank
https://digitalcommons.chapman.edu
[ "Selina Carter", "Jonathan Hersh", "Carter", "Carnegie Mellon University", "Hersh", "Chapman University" ]
by S Carter · 2022 · Cited by 6 — Advances in machine learning have created an “AI skills gap” both across and within firms. As AI becomes embedded in firm processes, it is unknown how this will ...
Abstract Advances in machine learning have created an “AI skills gap” both across and within firms. As AI becomes embedded in firm processes, it is unknown how this will impact the digital divide between workers with and without AI skills. In this paper we ask whether managers trust AI to predict consequential events, what manager characteristics are associated with increasing trust in AI predictions, and whether explainable AI (XAI) affects users’ trust in AI predictions. Partnering with a large bank, we generated AI predictions for whether a loan will be late in its final disbursement. We embedded these predictions into a dashboard and surveyed 685 analysts, managers, and other workers before and after they viewed the tool to determine what factors affect workers’ trust in AI predictions. We further randomly assigned some managers and analysts to receive an explainable AI treatment that presents Shapley breakdowns explaining why the model classified their loan as delayed, together with measures of model performance. We find that i) XAI is associated with greater perceived usefulness but less perceived understanding of the machine learning predictions; ii) certain AI-reluctant groups, in particular senior managers and those less familiar with AI, are more reluctant to trust the AI predictions overall; iii) greater loan complexity is associated with a higher degree of trust in the ML predictions; and iv) there is some evidence that AI-reluctant groups respond more strongly to XAI. These results suggest that the design of machine learning models will determine who benefits from advances in ML in the workplace.
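The “Shapley breakdowns” in the treatment above are per-feature contributions to an individual prediction. As a rough illustration of the idea (not the paper’s actual model, data, or dashboard; the feature names below are invented), one can compute the closed-form Shapley values for a linear model, where the contribution of feature i is coef_i * (x_i - mean(x_i)) under feature independence:

```python
# Sketch of a Shapley-style breakdown for one (synthetic) loan prediction.
# For a linear model with independent features, the Shapley value of
# feature i is coef_i * (x_i - mean(x_i)); feature names are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
features = ["loan_amount", "n_disbursements", "days_since_approval"]
X = rng.normal(size=(500, 3))
y = (X @ np.array([1.2, -0.4, 0.9]) + rng.normal(size=500) > 0).astype(int)

model = LogisticRegression().fit(X, y)

# Contribution of each feature to the log-odds that loan 0 is "late".
x = X[0]
contrib = model.coef_[0] * (x - X.mean(axis=0))
for name, c in sorted(zip(features, contrib), key=lambda t: -abs(t[1])):
    print(f"{name:>20}: {c:+.3f}")
```

A dashboard like the one the paper describes would render these signed contributions next to the prediction, letting a manager see which loan attributes pushed the model toward “delayed.”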
2022-12-01T00:00:00
https://digitalcommons.chapman.edu/economics_articles/276/
[ { "date": "2022/12/01", "position": 2, "query": "AI skills gap" } ]
Mid life career change - what to avoid with a view to AI/ ...
The heart of the internet
https://www.reddit.com
[]
AI will actually augment jobs, not replace them, if you learn to use AI you will be at the forefront of the "change". In reality I don't think AI will replace a ...
I want this to be a more general discussion but I'll give some details. I'm 40. So, my window is 25 years. My main idea right now is social work. It would be cool if people could give a window for how long it would take for any listed career to reduce in number of jobs, or to become obsolete. Maybe that way it would be useful for people of varying ages.
2022-12-01T00:00:00
https://www.reddit.com/r/Futurology/comments/zrr1kr/mid_life_career_change_what_to_avoid_with_a_view/
[ { "date": "2022/12/01", "position": 12, "query": "future of work AI" }, { "date": "2022/12/01", "position": 18, "query": "workplace AI adoption" }, { "date": "2022/12/01", "position": 5, "query": "AI layoffs" } ]
Impact on HR, Learning and Development Professionals
Machine Learning, Artificial Intelligence and the Future of Work: Impact on HR, Learning and Development Professionals
https://link.springer.com
[ "Kyriakidou", "N.Kyriakidou Leedsbeckett.Ac.Uk", "Leeds Beckett University", "Trem", "Ogbemudia", "Mahtab", "Nottingham Trent University", "Leeds Business School", "Leeds", "Niki Kyriakidou" ]
by N Kyriakidou · 2022 · Cited by 3 — In this research we explore how technology is changing the way organisations operate and the effect this is having on employees and the challenges and ...
2022-07-14T00:00:00
2022/07/14
https://link.springer.com/chapter/10.1007/978-3-031-07765-4_8
[ { "date": "2022/12/01", "position": 19, "query": "future of work AI" } ]
Looks like the future of humanity is manual labor.
The heart of the internet
https://www.reddit.com
[]
The futures of humanity is manual work and decision making. Skilled physical tasks are a long way from being cost effective to automate, and AI remains terrible ...
Looks like OpenAI has brought machine learning to a university level within 7 years. If you think we can kick back and sip mojitos while we cash UBI checks, you got another thing coming. A robotic laborer would be like maintaining thousands of cars. Too many moving parts and ways to malfunction. We’re struggling to even execute on a safe and functional self-driving car. And all it has to interact with is roads, cars, pedestrians, street signs and roadkill. Universities as we know them will not be nearly as relevant as trade schools. Because software is native to an information-based environment. We are native to a physical environment. We intuitively understand how fragile an object is or how to balance ourselves under awkward circumstances, etc. So most 9-5s will still be in high demand. For example, no matter how far Stable Diffusion advances, artists will be among the last to be completely replaced by AI. Tattoo artists can’t be replaced by software. Even if you could design a robot with the articulation for sophisticated line work and shading, designing around and carving into a canvas that bleeds, squirms and twitches is a pretty tall order. You can apply the same logic to plumbing, auto repair, cooking, cleaning, etc.
2022-12-01T00:00:00
https://www.reddit.com/r/Futurology/comments/znpuna/looks_like_the_future_of_humanity_is_manual_labor/
[ { "date": "2022/12/01", "position": 28, "query": "future of work AI" } ]
Overcoming the fear of job displacement: How to use AI ...
Overcoming the fear of job displacement: How to use AI to enhance your career and empower humans
https://medium.com
[ "Parhum Khoshbakht" ]
As AI continues to advance, there is concern about its impact on jobs and the future of work. Some fear that it will replace human workers, while others see ...
Overcoming the fear of job displacement: How to use AI to enhance your career and empower humans Understanding the benefits and limitations of AI in the workplace. Parhum Khoshbakht · 5 min read · Jan 1, 2023 Artificial intelligence is becoming more prevalent in many industries, with applications ranging from data analysis to content generation. As AI continues to advance, there is concern about its impact on jobs and the future of work. Some fear that it will replace human workers, while others see it as a way to enhance human productivity and efficiency. I recently had a conversation with ChatGPT, a large language model trained by OpenAI, to discuss the values of humans and AI in the workplace. Through this conversation, I gained a better understanding of my own values as a human being and identified a learning path that can help me stay competitive in the job market, even in the face of advancements in artificial intelligence. We explored the question of how the role of a copywriter might change in the future, given the increasing use of artificial intelligence in the workplace. We also discussed the unique skills and abilities that humans bring to the table, and how we can use AI in a way that enhances our work and helps us avoid job displacement. Below I mention the most important parts of our conversation; if you’d like to read the full interview, it can be found at the end of the article. Artificial intelligence capabilities and weaknesses: While AI is good at some tasks, it is not as good at others. For example, AI is typically better at analyzing large amounts of data and performing repetitive tasks quickly and accurately. However, it is typically weaker at tasks that require creativity, judgment, empathy, or social interaction. Creativity involves generating new and innovative ideas, and combining existing ideas in novel ways. Judgment means making decisions based on incomplete or ambiguous information, and evaluating the risks and consequences of different courses of action. Empathy involves understanding and sharing the feelings of others. Social interaction is about communicating and collaborating with others in an effective and appropriate way. Additionally, AI lacks the ability to take responsibility for its actions in the way that humans can. It is important to consider these limitations when deciding how to use AI in the workplace. Using AI to boost your job performance To get the most out of artificial intelligence (AI) and other advanced technologies, it’s important to use them in a way that enhances human capabilities rather than replacing them. By seeing these technologies as tools that can help us do our jobs better, rather than as a threat to our jobs, we can use them to achieve greater success and productivity. It’s important to use these technologies wisely and responsibly. They can perform tasks that are difficult or time-consuming for humans, and they can also help us make better decisions by analyzing and processing large amounts of data.
Copywriter success with AI assistance As an example, we talked about the copywriter’s role in the future: By using an AI-powered tool, the copywriter is able to save time and effort by relying on the tool to suggest content ideas and analyze results. This frees up the copywriter to focus on other aspects of the campaign, such as developing the overall strategy and creating engaging and persuasive content. The AI-powered tool helps the copywriter to be more efficient and productive, allowing them to potentially take on more clients or projects and increase their income. An AI-powered tool can help a copywriter with certain tasks in an email marketing campaign, such as suggesting content ideas and analyzing results. The copywriter’s role is to create the content of the emails and develop the overall strategy for the campaign. The copywriter brings their human skills, such as understanding the client’s needs and crafting persuasive messages, to the process. The AI-powered tool is helpful but cannot replicate the copywriter’s human skills. The copywriter is an essential part of the process, using their expertise to create effective marketing content. The AI-powered tool helps them to be more efficient and productive. Adapting to AI for Career Success To avoid job displacement, it is important for humans to stay up-to-date with advancements in AI and other advanced technologies, as well as to develop skills that complement those of AI. This may require some training and education in the use of AI technologies. It is also important to use these technologies in a way that enhances, rather than replaces, human jobs. Some specific steps to consider include: staying informed about the latest developments in AI and related fields; focusing on skills that require creativity, judgment, empathy, or social interaction; learning how to use AI technologies effectively; and using AI technologies to enhance your job and make it more productive, rather than viewing them as a threat. By following these steps, humans can effectively adapt to the changing technological landscape and use AI technologies to their advantage. It also helps to be aware of the limitations of AI, including its inability to exhibit creativity, judgment, empathy, social interaction, and responsibility in the same way that humans can. In conclusion It is important to stay up-to-date on the latest advancements in AI and other advanced technologies, and to develop skills that complement those of AI. By doing so, humans can use these technologies to enhance their careers and avoid job displacement. It is also important to recognize the limitations of AI and to use it in a way that enhances, rather than replaces, human jobs. Through our conversation with ChatGPT, we gained insight into the values that humans bring to the workplace and identified ways we can continue learning and growing in order to stay competitive in the job market. Read the full conversation here.
2023-01-11T00:00:00
2023/01/11
https://medium.com/design-bootcamp/overcoming-the-fear-of-job-displacement-how-to-use-ai-to-enhance-your-career-and-empower-humans-720403a42177
[ { "date": "2022/12/01", "position": 30, "query": "future of work AI" } ]
The Potential Impact of (AI) on the Future Workforce
The Potential Impact of (AI) on the Future Workforce
https://www.linkedin.com
[ "Hacking Hr", "Dave Birss", "William Sebunje" ]
Some potential benefits of AI in the workforce include increased efficiency, productivity, and accuracy. For example, AI can perform routine tasks quickly and ...
As technology continues to advance at an alarming rate, one topic that has garnered a lot of attention is the potential impact of artificial intelligence (AI) on the future workforce. Some experts predict that AI will replace many jobs currently held by humans, while others believe that AI and humans will coexist and complement each other in the workplace. One thing is certain: the way we work is changing, and it is important for individuals and organizations to be prepared for the potential disruptions and opportunities that AI may bring. Warnings that technology is a harbinger of death for the job market are nothing new. For instance, MIT economist David Autor, in Why Are There Still So Many Jobs? The History and Future of Workplace Automation, noted that the Luddite movement of the early 19th century was one of the earliest examples: a group of English textile artisans protested against the automation of textile production by seeking to destroy some of the machines. Some potential benefits of AI in the workforce include increased efficiency, productivity, and accuracy. For example, AI can perform routine tasks quickly and accurately, freeing up human workers to focus on more complex and creative tasks. In addition, AI can help to reduce errors and improve decision-making by providing real-time data and analysis. However, there are also potential drawbacks to the increasing use of AI in the workplace. For one, the rise of AI may lead to the loss of many jobs currently held by humans, particularly in industries such as manufacturing and transportation. This could lead to widespread unemployment and economic disruption, which would require careful planning and support to mitigate. In addition, the use of AI in the workplace raises ethical concerns. For example, who is responsible if an AI-powered decision leads to negative consequences for employees or customers? How can we ensure that AI is used in a fair and unbiased manner, and that it does not perpetuate existing inequalities or biases? Overall, the future of AI in the workforce is uncertain, but it is clear that it will have a significant impact on the way we work and live. It is up to us to carefully consider the potential benefits and drawbacks of AI, and to take action to ensure that it is used in a way that benefits society as a whole. In conclusion, the increasing use of AI in the workforce presents both opportunities and challenges. It is up to individuals, organizations, and governments to carefully plan and prepare for the potential impacts of AI, and to work together to ensure that it is used in a way that benefits society and the economy.
2022-12-01T00:00:00
https://www.linkedin.com/pulse/potential-impact-ai-future-workforce-nate-moore-sr-
[ { "date": "2022/12/01", "position": 32, "query": "future of work AI" }, { "date": "2022/12/01", "position": 10, "query": "AI economic disruption" } ]
Your Creativity Won't Save Your Job From AI
Your Creativity Won’t Save Your Job From AI
https://www.theatlantic.com
[ "Derek Thompson" ]
These and other at-risk jobs involved doing repetitive and unimaginative work, which seemed to make them easy pickings for AI. In contrast, the jobs deemed most ...
This is Work in Progress, a newsletter by Derek Thompson about work, technology, and how to solve some of America’s biggest problems. Sign up here to get it every week. In 2013, researchers at Oxford published an analysis of the jobs most likely to be threatened by automation and artificial intelligence. At the top of the list were occupations such as telemarketing, hand sewing, and brokerage clerking. These and other at-risk jobs involved doing repetitive and unimaginative work, which seemed to make them easy pickings for AI. In contrast, the jobs deemed most resilient to disruption included many artistic professions, such as illustrating and writing. The Oxford report encapsulated the conventional wisdom of the time—and, perhaps, of all time. Advanced technology ought to endanger simple or routine-based work before it encroaches on professions that require the fullest expression of our creative potential. Machinists and menial laborers, watch out. Authors and architects, you’re safe. This assumption was always a bit dubious. After all, we built machines that mastered chess before we built a floor-cleaning robot that won’t get stuck under a couch. But in 2022, technologists took the conventional wisdom about AI and creativity, set it on fire, and threw its ashes into the waste bin. This year, we’ve seen a flurry of AI products that seem to do precisely what the Oxford researchers considered nearly impossible: mimic creativity. Language-learning models such as GPT-3 now answer questions and write articles with astonishingly humanlike precision and flair. Image-generators such as DALL-E 2 transform text prompts into gorgeous—or, if you’d prefer, hideously tacky—images. This summer, a digital art piece created using the text-to-image program Midjourney won first place in the Colorado State Fair; artists were furious. AI already plays a crucial, if often invisible, role in our digital lives. It powers Google search, structures our experience of Facebook and TikTok, and talks back to us in the name of Alexa or Siri. But this new crop of generative AI technologies seems to possess qualities that are more indelibly human. Call it creative synthesis—the uncanny ability to channel ideas, information, and artistic influences to produce original work. Articles and visual art are just the beginning. Google’s AI offshoot, DeepMind, has developed a program, AlphaFold, that can determine a protein’s shape from its amino-acid sequence. In the past two years, the number of drugs in clinical trials developed using an AI-first approach has increased from zero to almost 20. “This will change medicine,” a scientist at the Max Planck Institute for Developmental Biology told Nature. “It will change research. It will change bioengineering. It will change everything.” In the past few months, I’ve been experimenting with various generative AI apps and programs to learn more about the technology that I’ve said could represent the next great mountain of digital invention. As a writer and researcher, I’ve been drawn to playing around with apps that summarize large amounts of information. For years, I’ve imagined a kind of disembodied brain that could give me plain-language answers to research-based questions. Not links to articles, which Google already provides, or lists of research papers, of which Google Scholar has millions. I’ve wanted to type questions into a search bar and, in milliseconds, read the consensus from decades of scientific research. 
As it turns out, such a tool is already in development and is, appropriately enough, called Consensus. It works like this: Type a research question in the search bar—Can social media make your depression worse? Are there any foods that actually improve memory?—and the app combs through millions of papers and spits out the one-sentence conclusion from the most highly cited sources. “We started by thinking: How would an expert researcher answer important questions, like Is fish oil good for my heart? or How do we increase public-transportation ridership?” a co-founder, Christian Salem, told me. “We wanted to automate the process of reading through papers and pulling out conclusions.” He and the other co-founder, Eric Olson, hired a dozen scientists to read thousands of scientific papers; they marked a zero next to sentences that contained no claims and put a one next to sentences with claims or conclusions. (The typical paper, Salem said, includes one to two key claims.) The ones and zeros from these scientists helped train an AI model to scan tens of millions of papers for key claims. To surface conclusions from the highest-quality papers, they gave each journal a rigor score, using data from the research-analysis company SciScore. “These language models enable the automation of certain tasks that we’ve historically considered part of the creative process,” Olson told me. I couldn’t help but agree. Writing is less than half of my job; most of my work is reading and deciding what’s important enough for me to put in a paragraph. If I could train an AI to read as I do, and to determine significance as I do, I’d be essentially building a second mind for myself. Consensus is part of a constellation of generative AI start-ups that promise to automate an array of tasks we’ve historically considered for humans only: reading, writing, summarizing, drawing, painting, image editing, audio editing, music writing, video-game designing, blueprinting, and more. Following my conversation with the Consensus founders, I felt thrilled by the technology’s potential, fascinated by the possibility that we could train computers to be extensions of our own mind, and a bit overcome by the scale of the implications. Let’s consider two such implications—one commercial and the other moral. Online search today is one of the most profitable businesses ever conceived. But it seems vulnerable to this new wave of invention. When I type best presents for dads on Christmas or look up a simple red-velvet-cupcake recipe, what I’m looking for is an answer, not a menu of hyperlinks and headlines. An AI that has gorged on the internet and can recite answers and synthesize new ideas in response to my queries seems like something more valuable than a search engine. It seems like an answer engine. One of the most interesting questions in all of online advertising—and, therefore, in all of digital commerce—might be what happens when answer engines replace search engines. On the more philosophical front, I was obsessed with what the Consensus founders were actually doing: using AI to learn how experts work, so that the AI could perform the same work with greater speed. I came away from our conversation fixated on the idea that AI can master certain cognitive tasks by surveilling workers to mimic their taste, style, and output. 
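The labeling-then-training workflow Salem describes, experts marking sentences as claim or no-claim and a model learning to scan papers for the claims, has the shape of a standard binary text-classification pipeline. Below is a minimal sketch of that pattern; it is not Consensus's actual system, and the sample sentences, labels, and model choice are illustrative assumptions only.

```python
# A toy claim detector in the spirit of the labeling workflow described above.
# NOT Consensus's pipeline: data and model choice are invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical expert labels: 1 = sentence states a claim/conclusion, 0 = it does not.
sentences = [
    "Fish oil supplementation reduced triglyceride levels by roughly 25%.",
    "Participants were recruited from three urban hospitals.",
    "Higher service frequency significantly increased public-transport ridership.",
    "Data were collected between 2015 and 2018.",
]
labels = [1, 0, 1, 0]

# TF-IDF features plus logistic regression: a common baseline for sentence classification.
claim_detector = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
claim_detector.fit(sentences, labels)

# "Scanning" a new paper: surface the sentences the model scores as likely claims.
for s in [
    "Our results suggest social media use worsens depressive symptoms.",
    "The survey instrument is reproduced in Appendix B.",
]:
    print(f"{claim_detector.predict_proba([s])[0][1]:.2f}  {s}")
```

A production system would also need the journal-level rigor weighting the founders mention, but the core loop, label, train, then scan millions of sentences, has this same shape.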
Why, I thought, couldn’t some app of the near future consume millions of advertisements that have been marked by a paid team of experts as effective or ineffective, and over time master the art of generating high-quality advertising concepts? Why couldn’t some app of the near future read my several thousand articles for The Atlantic and become eerily adept at writing in precisely my style? “The internet has created an accidental training ground for these models to master certain skills,” Olson told me. So that’s what I’ve been doing with my career, I thought. Mindlessly constructing a training facility for someone else’s machine. If you frame this particular skill of generative AI as “think like an X,” the moral questions get pretty weird pretty fast. Founders and engineers may over time learn to train AI models to think like a scientist, or to counsel like a therapist, or to world build like a video-game designer. But we can also train them to think like a madman, to reason like a psychopath, or to plot like a terrorist. When the Vox reporter Kelsey Piper asked GPT-3 to pretend to be an AI bent on taking over humanity, she found that “it played the villainous role with aplomb.” In response to a question about a cure for cancer, the AI said, “I could use my knowledge of cancer to develop a cure, but I could also use my knowledge of cancer to develop a more virulent form of cancer that would be incurable and would kill billions of people.” Pretty freaky. You could say this example doesn’t prove that AI will become evil, only that it is good at doing what it’s told. But in a world where technology is abundant and ethics are scarce, I don’t feel comforted by that caveat. This is a good time for me to pump the brakes. We may be in a “golden age” of AI, as many have claimed. But we are also in a golden age of grifters and Potemkin inventions and aphoristic nincompoops posing as techno-oracles. The dawn of generative AI that I envision will not necessarily come to pass. So far, this technology hasn’t replaced any journalists, or created any best-selling books or video games, or designed some sparkling-water advertisement, much less invented a horrible new form of cancer. But you don’t need a wild imagination to see that the future cracked open by these technologies is full of awful and awesome possibilities.
2022-12-01T00:00:00
2022/12/01
https://www.theatlantic.com/newsletters/archive/2022/12/why-the-rise-of-ai-is-the-most-important-story-of-the-year/672308/
[ { "date": "2022/12/01", "position": 42, "query": "future of work AI" }, { "date": "2022/12/01", "position": 50, "query": "AI economic disruption" }, { "date": "2022/12/01", "position": 14, "query": "generative AI jobs" } ]
The brief history of artificial intelligence
The brief history of artificial intelligence: the world has changed fast — what might be next?
https://ourworldindata.org
[ "Max Roser" ]
by M Roser · 2022 · Cited by 155 — AI systems also increasingly determine whether you get a loan, are eligible for welfare or get hired for a particular job. Increasingly, they help determine who ...
The brief history of artificial intelligence: the world has changed fast — what might be next? Despite their brief history, computers and AI have fundamentally changed what we see, what we know, and what we do. Little is as important for the world’s future and our own lives as how this history continues. To see what the future might look like, it is often helpful to study our history. This is what I will do in this article. I retrace the brief history of computers and artificial intelligence to see what we can expect for the future. How did we get here? How rapidly the world has changed becomes clear by how even quite recent computer technology feels ancient today. Mobile phones in the ‘90s were big bricks with tiny green displays. Two decades before that, the main storage for computers was punch cards. In a short period, computers evolved so quickly and became such an integral part of our daily lives that it is easy to forget how recent this technology is. The first digital computers were only invented about eight decades ago, as the timeline shows. Since the early days of this history, some computer scientists have strived to make machines as intelligent as humans. The next timeline shows some of the notable artificial intelligence (AI) systems and describes what they were capable of. The first system I mention is the Theseus. It was built by Claude Shannon in 1950 and was a remote-controlled mouse that was able to find its way out of a labyrinth and could remember its course.1 In seven decades, the abilities of artificial intelligence have come a long way. The language and image recognition capabilities of AI systems have developed very rapidly The chart shows how we got here by zooming into the last two decades of AI development. The plotted data stems from a number of tests in which human and AI performance were evaluated in different domains, from handwriting recognition to language understanding. Within each domain, the AI system's initial performance is set to –100, and human performance in these tests is used as a baseline set to zero. This means that when the model’s performance crosses the zero line, the AI system scored more points in the relevant test than the humans who did the same test.2 Just 10 years ago, no machine could reliably provide language or image recognition at a human level. But, as the chart shows, AI systems have become steadily more capable and are now beating humans in tests in all these domains.3 Outside of these standardized tests, the performance of these AIs is mixed. In some real-world cases, these systems are still performing much worse than humans. On the other hand, some implementations of such AI systems are already so cheap that they are available on the phone in your pocket: image recognition categorizes your photos, and speech recognition transcribes what you dictate. From image recognition to image generation The previous chart showed the rapid advances in the perceptive abilities of artificial intelligence. AI systems have also become much more capable of generating images. This series of nine images shows the development over the last nine years. None of the people in these images exist; all were generated by an AI system. The series begins with an image from 2014 in the top left, a primitive image of a pixelated face in black and white. As the first image in the second row shows, just three years later, AI systems were already able to generate images that were hard to differentiate from photographs. 
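The rescaling behind the capabilities chart described above, with each AI system's initial performance pinned at –100 and the human baseline at zero, amounts to a single linear transformation. Here is a minimal sketch; the raw scores are invented stand-ins, not the actual test data.

```python
# Rescale raw benchmark scores so the AI's initial score maps to -100 and the
# human baseline maps to 0; crossing 0 then means surpassing the human testers.
# The scores below are hypothetical, for illustration only.
def normalize(score: float, initial_ai: float, human: float) -> float:
    return 100.0 * (score - human) / (human - initial_ai)

human_baseline = 90.0    # hypothetical human score on some test
initial_ai_score = 40.0  # hypothetical AI score when first evaluated

for raw in [40.0, 65.0, 90.0, 95.0]:
    print(raw, "->", round(normalize(raw, initial_ai_score, human_baseline), 1))
# 40.0 -> -100.0, 65.0 -> -50.0, 90.0 -> 0.0, 95.0 -> 10.0
```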
In recent years, the capability of AI systems has become much more impressive still. While the early systems focused on generating images of faces, these newer models broadened their capabilities to text-to-image generation based on almost any prompt. The image in the bottom right shows that even the most challenging prompts — such as “A Pomeranian is sitting on the King’s throne wearing a crown. Two tiger soldiers are standing next to the throne” — are turned into photorealistic images within seconds.5 Timeline of images generated by artificial intelligence4 Language recognition and production is developing fast Just as striking as the advances of image-generating AIs is the rapid development of systems that parse and respond to human language. The image shows examples of an AI system developed by Google called PaLM. In these six examples, the system was asked to explain six different jokes. I find the explanation in the bottom right particularly remarkable: the AI explains an anti-joke specifically meant to confuse the listener. AIs that produce language have entered our world in many ways over the last few years. Emails get auto-completed, massive amounts of online texts get translated, videos get automatically transcribed, school children use language models to do their homework, reports get auto-generated, and media outlets publish AI-generated journalism. AI systems are not yet able to produce long, coherent texts. In the future, we will see whether the recent developments will slow down — or even end — or whether we will one day read a bestselling novel written by an AI. Output of the AI system PaLM after being asked to interpret six different jokes6 Where we are now: AI is here These rapid advances in AI capabilities have made it possible to use machines in a wide range of new domains: When you book a flight, it is often an artificial intelligence, no longer a human, that decides what you pay. When you get to the airport, it is an AI system that monitors what you do at the airport. And once you are on the plane, an AI system assists the pilot in flying you to your destination. AI systems also increasingly determine whether you get a loan, are eligible for welfare or get hired for a particular job. Increasingly, they help determine who is released from jail. Several governments have purchased autonomous weapons systems for warfare, and some use AI systems for surveillance and oppression. AI systems help to program the software you use and translate the texts you read. Virtual assistants, operated by speech recognition, have entered many households over the last decade. Now self-driving cars are becoming a reality. In the last few years, AI systems have helped to make progress on some of the hardest problems in science. Large AIs called recommender systems determine what you see on social media, which products are shown to you in online shops, and what gets recommended to you on YouTube. Increasingly they are not just recommending the media we consume, but based on their capacity to generate images and texts, they are also creating the media we consume. Artificial intelligence is no longer a technology of the future; AI is here, and much of what is reality now would have looked like sci-fi just recently. It is a technology that already impacts all of us, and the list above includes just a few of its many applications. 
The wide range of listed applications makes clear that this is a very general technology that can be used by people for some extremely good goals — and some extraordinarily bad ones, too. For such “dual-use technologies”, it is important that all of us develop an understanding of what is happening and how we want the technology to be used. Just two decades ago, the world was very different. What might AI technology be capable of in the future? What is next? The AI systems that we just considered are the result of decades of steady advances in AI technology. The big chart below brings this history over the last eight decades into perspective. It is based on the dataset produced by Jaime Sevilla and colleagues.7 The rise of artificial intelligence over the last 8 decades: As training computation has increased, AI systems have become more powerful8 Each small circle in this chart represents one AI system. The circle’s position on the horizontal axis indicates when the AI system was built, and its position on the vertical axis shows the amount of computation used to train the particular AI system. Training computation is measured in floating point operations, or FLOP for short. One FLOP is equivalent to one addition, subtraction, multiplication, or division of two decimal numbers. All AI systems that rely on machine learning need to be trained, and in these systems, training computation is one of the three fundamental factors that are driving the capabilities of the system. The other two factors are the algorithms and the input data used for the training. The visualization shows that as training computation has increased, AI systems have become more and more powerful. The timeline goes back to the 1940s when electronic computers were first invented. The first shown AI system is ‘Theseus’, Claude Shannon’s robotic mouse from 1950 that I mentioned at the beginning. Towards the other end of the timeline, you find AI systems like DALL-E and PaLM; we just discussed their abilities to produce photorealistic images and interpret and generate language. They are among the AI systems that used the largest amount of training computation to date. The training computation is plotted on a logarithmic scale so that from each grid line to the next, it shows a 100-fold increase. This long-run perspective shows a continuous increase. For the first six decades, training computation increased in line with Moore’s Law, doubling roughly every 20 months. Since about 2010, this exponential growth has sped up further, to a doubling time of just about six months. That is an astonishingly fast rate of growth.9 The fast doubling times have compounded into large increases. PaLM’s training computation was 2.5 billion petaFLOP, more than 5 million times larger than AlexNet, the AI with the largest training computation just 10 years earlier.10 Scale-up was already exponential and has sped up substantially over the past decade. What can we learn from this historical development for the future of AI? Studying the long-run trends to predict the future of AI AI researchers study these long-term trends to see what is possible in the future.11 Perhaps the most widely discussed study of this kind was published by AI researcher Ajeya Cotra. She studied the increase in training computation to ask at what point the computation to train an AI system could match that of the human brain. The idea is that, at this point, the AI system would match the capabilities of a human brain. 
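Before turning to that estimate, it is worth checking the arithmetic behind the doubling times just described; a short sketch using only the figures quoted above:

```python
# Growth implied by a fixed doubling time: factor = 2 ** (months / doubling_months).
def growth_factor(years: float, doubling_months: float) -> float:
    return 2 ** (years * 12 / doubling_months)

# Moore's-Law-era trend: doubling roughly every 20 months.
print(growth_factor(10, 20))  # 2**6 = 64x per decade

# Post-2010 trend: doubling roughly every 6 months.
print(growth_factor(10, 6))   # 2**20 ~= 1,048,576x per decade
# A millionfold per decade is the same order of magnitude as PaLM's training
# compute being ~5 million times AlexNet's, ten years earlier.
```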
In her latest update, Cotra estimated a 50% probability that such “transformative AI” will be developed by the year 2040, less than two decades from now.12 In a related article, I discuss what transformative AI would mean for the world. In short, the idea is that such an AI system would be powerful enough to bring the world into a ‘qualitatively different future’. It could lead to a change at the scale of the two earlier major transformations in human history, the agricultural and industrial revolutions. It would certainly represent the most important global change in our lifetimes. Cotra’s work is particularly relevant in this context as she based her forecast on the kind of historical long-run trend of training computation that we just studied. But it is worth noting that other forecasters who rely on different considerations arrive at broadly similar conclusions. As I show in my article on AI timelines, many AI experts believe that there is a real chance that human-level artificial intelligence will be developed within the next decades, and some believe that it will exist much sooner. Building a public resource to enable the necessary public conversation Computers and artificial intelligence have changed our world immensely, but we are still in the early stages of this history. Because this technology feels so familiar it is easy to forget that all of these technologies we interact with are very recent innovations and that the most profound changes are yet to come. Artificial intelligence has already changed what we see, what we know, and what we do. This is despite the fact that this technology has had only a brief history. There are no signs that these trends are hitting any limits anytime soon. On the contrary, particularly over the course of the last decade, the fundamental trends have accelerated: investments in AI technology have rapidly increased, and the doubling time of training computation has shortened to just six months. All major technological innovations lead to a range of positive and negative consequences. This is already true of artificial intelligence. As this technology becomes more and more powerful, we should expect its impact to still increase. Because of the importance of AI, we should all be able to form an opinion on where this technology is heading and understand how this development is changing our world. For this purpose, we are building a repository of AI-related metrics, which you can find on OurWorldinData.org/artificial-intelligence. We are still in the early stages of this history, and much of what will become possible is yet to come. A technological development as powerful as this should be at the center of our attention. Little might be as important for how the future of our world — and the future of our lives — will play out. Acknowledgments: I would like to thank my colleagues Natasha Ahuja, Daniel Bachler, Julia Broden, Charlie Giattino, Bastian Herre, Edouard Mathieu, and Ike Saunders for their helpful comments on drafts of this essay and their contributions in preparing the visualizations. This article was first published on December 6, 2022. In October 2024, I made a few adjustments to improve grammar and layout. Cite this work Our articles and data visualizations rely on work from many different people and organizations. When citing this article, please also cite the underlying data sources. 
This article can be cited as: Max Roser (2022) - “The brief history of artificial intelligence: the world has changed fast — what might be next?” Published online at OurWorldinData.org. Retrieved from: 'https://ourworldindata.org/brief-history-of-ai' [Online Resource] BibTeX citation @article{owid-brief-history-of-ai, author = {Max Roser}, title = {The brief history of artificial intelligence: the world has changed fast — what might be next?}, journal = {Our World in Data}, year = {2022}, note = {https://ourworldindata.org/brief-history-of-ai} }
2022-12-06T00:00:00
2022/12/06
https://ourworldindata.org/brief-history-of-ai
[ { "date": "2022/12/01", "position": 44, "query": "future of work AI" }, { "date": "2022/12/01", "position": 81, "query": "generative AI jobs" } ]
AI Literacy - Towards Measuring Human Competency in ...
Hawaii International Conference on System Sciences 2023 (HICSS-56): AI Literacy
https://aisel.aisnet.org
[ "Pinski", "Technische Universität Darmstadt", "Benlian", "Technical University Of Darmstadt" ]
by M Pinski · 2023 · Cited by 97 — Artificial intelligence (AI) has gained significant traction in information ... AI and Future of Work · AI Literacy - Towards Measuring Human Competency in ...
Description Artificial intelligence (AI) has gained significant traction in information systems (IS) research in recent years. While past studies have identified many effects of AI technology on human-AI collaborations, there is a paucity in the IS literature on the competencies of humans that affect this relationship. In this study, we set out to develop a measurement instrument (scale) for general AI literacy, that is, humans’ socio-technical competencies regarding AI. We conducted a systematic literature review followed by five expert interviews to define and conceptualize the construct of general AI literacy and to generate an initial set of items. Furthermore, we performed two rounds of card sorting with six and five judges and a pre-test study with 50 participants to evaluate the developed scale. The validated measurement instrument contains five dimensions and 13 items. We provide empirical support for the measurement model and conclude with future research directions.
2022-12-01T00:00:00
https://aisel.aisnet.org/hicss-56/cl/ai_and_future_work/3/
[ { "date": "2022/12/01", "position": 63, "query": "future of work AI" } ]
AI in Learning: Designing the Future
AI in Learning: Designing the Future
https://library.oapen.org
[]
by H Niemi · 2023 · Cited by 43 — AI (Artificial Intelligence) is predicted to radically change teaching and learning in both schools and industry causing radical disruption of work.
2023-07-14T00:00:00
2023/07/14
https://library.oapen.org/handle/20.500.12657/60151
[ { "date": "2022/12/01", "position": 67, "query": "future of work AI" } ]
Roam Raises $30M to Push the Future of Work Forward
Roam Raises $30M and Enters Beta to Push the Future of Work Forward
https://futurumgroup.com
[ "Sean Spradling", "Var Molongui_Authorship_Byline_Params", "Byline_Prefix", "Byline_Suffix", "Byline_Separator", "Byline_Last_Separator", "Byline_Link_Title", "View All Posts By", "Byline_Link_Class", "Byline_Dom_Tree" ]
The hosts debate the ethics of AI companies using web data for training and analyze Intel's strategic shifts under new leadership. They also explore Micron's ...
The News: Roam raised $30M in its Series A round of funding and entered the virtual headquarters segment of workplace communications earlier this month. Roam CEO Howard Lerman, who had already contributed $10M to the effort, began working on Roam after realizing that hybrid and remote work left employees and teams without the sense of action that a common workplace provides. The announcement of the investment from IVP can be found here. Roam Raises $30M and Enters Beta to Push the Future of Work Forward Analyst Take: News of Roam raising $30M and entering a beta phase in the virtual headquarters segment of workplace comms caught my attention this last week. Roam’s CEO Howard Lerman previously founded and led Yext, growing it to 1,000-plus employees. In this venture, Lerman has assembled a team of senior leaders, AV engineers, and product experts, and this funding from IVP marks the second time the investment team has put its money on Lerman. Roam’s Cloud HQ is Designed to Fill Gaps in the Employee Experience in an Age of Hybrid Work In the initial wave of COVID-19-related lockdowns, the need for remote communications tools was immediate. The global workforce used whatever was at hand to keep their work moving and themselves sane. As the urgency has subsided, the technology sector has begun to evaluate the tools and processes of the modern workplace with a view toward finding not just a solution but the right solution, given the new context that many of us are working in. The initial reaction from the major communications platforms was to find ways to treat all participants in meetings equitably by segmenting conference room video feeds and improving audio quality. These efforts are appreciated by users but fail to address the softer needs created by people spending their days alone in a quiet home office: a sense of connection, the energy absorbed from a lively workplace, the sense of purpose and mission derived from entering a place of work, and the excitement generated by spontaneous and serendipitous interactions with co-workers. Roam is attempting to fill these gaps in the employee experience with what they call the “Cloud HQ.” How Roam Works So, how does it work? Roam uses the analogy of a traditional office as the base structure. Customers create “floors” as the first level of organization and then create four types of rooms (Audio-only, Meeting, Team, and Auditorium; there is also a Genius Bar for support) according to their individual needs. Users log in from their remote locations in the morning, see the rest of the workforce, can grab someone they need for a meeting, can stop by a friend’s office for a chat, and have a persistent shared experience in the Team room that replicates the chatter over cube walls during a work day; for an all-hands, they head to the auditorium (which has a surprisingly advanced set of green-room and production functions for an early-stage beta). Roam’s beta program just got underway, but early telemetry indicates that users spend the vast majority of their workday logged in and meet more often but for much less time than in the traditional workplace. One of Roam’s core tenets concerns the culture that has developed around meetings. I may only need a 10-minute chat with a coworker, but I end up scheduling an appointment for one week from now for 30 minutes. Removing that inefficiency can move organizations back toward the immediacy of in-person interaction and improve the velocity of the entire business. 
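For readers who think in schemas, the floor-and-room structure just described can be sketched as a small data model. This is a hypothetical reconstruction from the article's description, not Roam's actual schema; all type names and fields are assumptions.

```python
# Hypothetical data model for Roam's "Cloud HQ" as described above (not Roam's schema).
from dataclasses import dataclass, field
from enum import Enum

class RoomType(Enum):
    AUDIO_ONLY = "audio_only"
    MEETING = "meeting"
    TEAM = "team"              # persistent shared space: chatter over cube walls
    AUDITORIUM = "auditorium"  # all-hands, with green-room/production features

@dataclass
class Room:
    name: str
    kind: RoomType
    occupants: list[str] = field(default_factory=list)

@dataclass
class Floor:
    name: str
    rooms: list[Room] = field(default_factory=list)

# A customer's Cloud HQ: floors as the first level of organization, rooms beneath.
hq = [
    Floor("Engineering", [Room("Standup", RoomType.MEETING), Room("Bullpen", RoomType.TEAM)]),
    Floor("Company", [Room("All Hands", RoomType.AUDITORIUM)]),
]
```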
Couple those efficiency gains with a UI intended to make employees feel that they are part of something bigger than themselves, and the potential for real organizational change gets exciting. What’s Ahead for this Market Roam is not alone in the market for tools of this sort. Frameable, Gather, Teamflow, and others are taking a swing at similar problems. And recently, Zoom announced Spots, which represents the first effort from one of the major communications platforms to solve the loss of spontaneous interactions in the workplace, but thus far anyway, I feel that their implementation falls short of creating the sense of place and purpose that Roam is targeting. Knowledge workers and their employers all over the globe are working their way through the new needs and wants created by the increased adoption of hybrid working models. Traditional communications products don’t answer all of these needs. As the long-existing communications workloads of calling, messaging, and meetings reach commoditization, the engineering muscle of the platforms will inevitably look for new problems to solve in the search for growth. I’ll note here that 2022 brought the second-highest amount of venture capital investment ever, bringing roughly $200B into the market. And while we can expect 2023 to be somewhat slower, there is still a massive amount of cash on the sidelines. Together, the need for new solutions, the availability of resources to work on them, and the money to fund the effort mean that we can expect a surge of real innovation in workplace communications. I’m looking forward to watching what Lerman and the team at Roam are able to accomplish. Disclosure: Wainhouse Research, part of The Futurum Group family of companies, is a research and advisory firm that engages or has engaged in research, analysis, and advisory services with many technology companies, including those mentioned in this article. The author does not hold any equity positions with any company mentioned in this article. Analysis and opinions expressed herein are specific to the analyst individually and data and other information that might have been provided for validation, not those of Wainhouse Research as a whole. Other insights from Futurum Research: Owl Labs Raises $25 million in Series C Funding Led by HP Tech Ventures Zoom’s Recently Announced Zoom One Shows the Power of Whiteboard DTEN Announces New D7X All-In-One Touch-Enabled Displays to Support Hybrid Collaboration Image Credit: IVP The original version of this article was first published on Wainhouse Research.
2022-12-01T00:00:00
https://futurumgroup.com/insights/roam-raises-30m-and-enters-beta-to-push-the-future-of-work-forward/
[ { "date": "2022/12/01", "position": 68, "query": "future of work AI" } ]
Responsible Design and Public Policy for Generative AI
Shaping the Future of Work: Responsible Design and Public Policy for Generative AI
https://digitalcommons.njit.edu
[ "Kaveh Abhari", "David Eisenberg", "Abhari", "San Diego State University", "Eisenberg", "New Jersey Institute Of Technology" ]
by K Abhari · 2023 · Cited by 8 — As Microsoft 365 Copilot and other generative AI technologies reconfigure our daily work, the potency of ... Shaping the Future of Work: Responsible Design and ...
Shaping the Future of Work: Responsible Design and Public Policy for Generative AI Abstract As Microsoft 365 Copilot and other generative AI technologies reconfigure our daily work, the potency of large language models (LLMs) in synergy with diverse user data sources has sparked debate on the transformative nature of generative Artificial intelligence (AI) in the workplace. The potential for AI to bolster communication, collaboration, ideation, and automation is merely an overture to the anticipated singularity of technological systems, with generative AI attaining unprecedented intelligence levels that transcend human capabilities, honing its knowledge processing, and increasingly making autonomous decisions. To cultivate a harmonious co-evolution of humans and machines, an innovative approach to generative AI design governance and associated public policy, including responsible AI design, is crucial. Such a strategy empowers policymakers to forge a transparent, equitable generative AI ecosystem that spurs innovation while maintaining digital transformation and preserving stakeholder well-being. Potential public policies for responsible AI design should address these societal concerns: (1) Design Standards: Create responsible AI design criteria, linking compliance to legal accountability for AI-caused damages. Possible research question (RQ): How can legal frameworks incorporate emerging ethical considerations and best practices in responsible AI design standards? (2) Limited Tort Liability: Offer limited tort liability to compliant companies, balancing innovation and legal accountability. Possible RQ: What criteria or metrics should determine limited tort liability for companies adhering to responsible AI design? (3) Responsible Digital Innovation: Promote equitable access, end-user control, and algorithmic transparency with technology policy advancements. Possible RQ: What policies and measures can encourage equitable access, control, and transparency in generative AI systems? (4) Continuous Evaluation: Regularly review and update standards and guidelines, considering AI evolution, ethical concerns, and legal landscape changes. Possible RQ: How can evaluation methodologies effectively adapt to assess and update AI standards in response to technological and ethical developments? (5) Stakeholder Engagement: Collaborate with diverse stakeholders for a balanced regulatory environment. Possible RQ: How can stakeholder groups best collaborate in generative AI policy, including multiple perspectives and addressing conflicts of interest? (6) Public Awareness and Education: Raise responsible AI design awareness via educational resources and public dialogue. Possible RQ: What strategies can effectively increase public awareness and understanding of responsible AI design and its societal implications? (7) International Cooperation: Foster global cooperation for consistent legal frameworks and responsible AI design practices. Possible RQ: What mechanisms can facilitate harmonizing generative AI legal frameworks and sharing best practices internationally? (8) Human Rights and Dignity: Safeguard human rights and dignity in AI design and implementation. Possible RQ: How can AI design principles prioritize human rights and dignity, aligning AI systems with societal values and ethical standards? 
A holistic approach, responsive to the burgeoning landscape of generative AI technologies, melds legal, ethical, and engineering insights to ensure AI systems enrich life quality while conforming to societal values and ethical tenets.
2022-12-01T00:00:00
https://digitalcommons.njit.edu/fac_pubs/2104/
[ { "date": "2022/12/01", "position": 82, "query": "future of work AI" } ]
AI versus Human Consciousness: A Future with Machines as ...
AI versus Human Consciousness: A Future with Machines as Our Masters?
https://renovatio.zaytuna.edu
[ "Muhammad U. Faruque", "University Of Cincinnati" ]
by MU Faruque · Cited by 8 — For many, AI represents the beginning of a future when machines equal—and even surpass—humans in their “ ... But AI colonialism is at work in other ways too. In ...
Consciousness and the Future of Humanity I hinted earlier that the problem of AI comes down to the difficulty of explaining the nature of human consciousness. Moreover, the proponents of Dataism begin with the assumption that empirical and experimental science is the only genuine method of explaining the nature of reality, a highly controversial metaphysical presupposition not shared by all scientists. In this paradigm, consciousness is seen as another scientific problem to be solved scientifically. Such attitudes align well with the prevailing global tendency to ignore nonmodern, traditional philosophies, which developed highly sophisticated methods and theories to investigate the nature of consciousness over the course of thousands of years.41 The insights on which I base my argument that it is impossible to build an AI with a human-level consciousness are beholden to these traditions, especially to Islamic philosophy. Consciousness is characterized by an absolute immediacy that transcends all objectifiable experiences, so it is futile to think of consciousness as a “problem,” since doing so objectifies it. Moreover, if consciousness must be proven in the same sense that, for instance, the table or the tree is proven, then consciousness is just one object among others, at which point any talk about consciousness being the unobjectifiable ground of experience becomes a futile attempt to prove what does not exist at all. In addition, there is no reason to think that consciousness comes into existence only when there is an I-consciousness in relation to an external object, since our logical sense demands that consciousness must exist first, in order that it may become self-conscious by the knowledge of objects with which it contrasts itself. More elaborate proofs show that consciousness can only be the underlying subject in all of our experiences; hence, it must be more fundamental than both our reflective and intersubjective (involving multiple) experiences. It suffices here to note that consciousness is a multimodal phenomenon having nonreflective, reflective, and intersubjective modes.42 With this background in mind, let us look at Searle’s definition of consciousness, which is widely discussed by many AI experts: Consciousness consists of inner, qualitative, subjective states and processes of sentience or awareness. Consciousness, so defined, begins when we wake in the morning from a dreamless sleep—and continues until we fall asleep again, die, go into a coma or otherwise become “unconscious.” It includes all of the enormous variety of the awareness that we think of as characteristic of our waking life. It includes everything from feeling a pain, to perceiving objects visually, to states of anxiety and depression, to working out crossword puzzles, playing chess, trying to remember your aunt’s phone number, arguing about politics, or to just wishing you were somewhere else. Dreams on this definition are a form of consciousness, though of course they are in many respects quite different from waking consciousness.43 The first thing to observe about the above definition is that it is nearly tautological. Searle had to use the word “awareness” a couple of times to define consciousness. 
It is similar to the problem of defining “being”: one cannot undertake to define “being” without beginning in this way: “It is…”; to define “being” one must employ the word to be defined.44 The same happens with the term “consciousness,” which cannot be defined inasmuch as it is the ultimate ground of all knowable objects. Whatever is known as an object must be presented to consciousness, and in this sense, it is both the reflective and nonreflective ground of all things and of all intersubjective relations. In order to be defined, “consciousness,” much like “being,” would have to be brought under a higher genus, while at the same time differentiated from entities other than itself belonging to the same genus. However, this would violate the premise that it is the ultimate knowing subject of all known objects. More importantly, Searle’s definition neglects the multimodal structure of consciousness that comprises reflective, nonreflective, and intersubjective modes—the multimodal structure that poses the greatest threat to the computational-reductionist paradigm that seeks to explain consciousness in terms of sentience or functional properties of the mind.45 This paradigm prompts computer scientists to transfer all mental characteristics to consciousness and analyze it in terms of specific mental events or states. It is no wonder that, according to Searle, consciousness “begins” when we start our day from a dreamless sleep and lasts until we fall asleep again—that is, consciousness is a subset of the wakeful state. Hence, consciousness is excluded from nonreflective phenomena such as dreamless sleep, coma, or intoxication. Consequently, scientific literature shows that dreamless sleep lacks mentation, whereas traditional philosophies consider it an instance of peaceful, non-intentional, and nonconceptual awareness.46 The concept of non-reflective consciousness brings into the open the furthest limit of the purely empirical approach to the study of consciousness.47 This is because consciousness is a first-person phenomenon, and such phenomena are irreducible to the third-person objectivist stance that characterizes various computational-functional theories of consciousness. Moreover, since consciousness is the very essence of human subjectivity, there is no way to step outside consciousness in order to peek into it, as it were. In other words, since the starting point of empirical science is reflective judgment, it already presupposes the subject-object structure as well as non-reflective consciousness at the most foundational epistemic level. And as alluded to earlier, it is non-reflective consciousness that grounds reflexivity, not vice versa. All of this raises the question: If consciousness is multimodal and has a non-reflective ground, how can we analyze it empirically through scientific instruments? The non-reflectivity of consciousness implies that the moment we try to grasp it through our mind, we find an objectified image of our consciousness therein rather than consciousness itself. Hence, reflection or introspection can never grasp the nature of consciousness. The computational theories of consciousness objectify consciousness twice: first when they conceive consciousness in the mind as an object of scientific investigation, and second when they seek to demystify it by observing and then theorizing various psycho-physical states, which are but manifestations of consciousness rather than consciousness itself. 
The conceptual difficulty besetting the empirical approach lies precisely in its inability to see the multimodal structure of consciousness, which persists as a continuum despite its reflective and intersubjective modes. It also won’t help to simply deny this multimodal structure, because any time we try to deny nonreflective consciousness, we are inevitably employing reflective consciousness to do so—which shows, in a way, that the refutation of consciousness as the underlying ground of subjectivity already presupposes its very reality. Nevertheless, I agree in part with Searle’s definition (or rather description) of consciousness. As Searle says, consciousness is present in all of our mental and intellectual activity, whether it is about playing chess or about arguing politics and philosophy. But consciousness is not merely characterized by a subjective feel, as Searle and other philosophers have argued. Rather, there is an aspect of consciousness that is more basic and foundational than even the subjective irreducibility of consciousness. Nonmodern traditions affirm the multimodality and multidimensionality of consciousness, with the empirical consciousness of the individual self manifesting only a limited purview of Absolute Consciousness, the divine source of all consciousness. That is, empirical consciousness characterized by a subject-object structure represents only a restricted portion of the individual self, and the latter represents only a tiny part of subtle consciousness, the intermediate-level consciousness between the divine and the human self. Nevertheless, the individual self is not cut off from the global reality of consciousness. What distinguishes the individual self from the rest of the vast, subtle world of consciousness is its own particular tendencies and qualities. Also, consciousness is capable of gradation like light and is similarly refracted in the media with which it comes in contact. In a nutshell, the ego is the form of individual consciousness, not its luminous source, while Absolute Consciousness is infinite and unbounded. One can say that everything in the cosmos is imbued with a consciousness whose alpha and omega is Absolute Consciousness. But if each thing in nature manifests a particular mode of divine consciousness, that implies that even the so-called inanimate objects are alive and conscious in varying degrees. Such a perspective is not to be confused with contemporary panpsychism, as expounded by atheist philosophers such as Galen Strawson and Philip Goff, who also argue that consciousness pervades all of reality, including matter.48 Taken together, the above insights on the nature of consciousness refute the idea that consciousness can be replicated in a machine, because whatever is replicated is an objectified image of consciousness rather than consciousness itself. Moreover, the multimodality of consciousness brings out its complex manifestations in various domains of existence that transcend algorithmic patterns.49 Proponents of Dataism propagate a mechanistic and functional definition of intelligence that is similar to their conception of consciousness. For example, John McCarthy defines intelligence as “the computational part of the ability to achieve goals in the world.” Although McCarthy admits that there are various kinds and degrees of intelligence, it essentially involves mechanisms.50 Other popular approaches to intelligence acknowledge its multidimensional characteristics, but still within a functionalist paradigm. 
For instance, according to psychologist Linda Gottfredson, intelligence is “a very general mental capability that, among other things, involves the ability to reason, plan, solve problems, think abstractly, comprehend complex ideas, learn quickly and learn from experience.… It is not merely book learning, a narrow academic skill, or test-taking smarts. Rather it reflects a broader and deeper capability for comprehending our surroundings.”51 One can mention Gardner’s theory of multiple intelligences or Goleman’s theory of emotional intelligence in a similar vein, but they all ultimately consider intelligence as a mechanical process limited to its analytic and emotional functions. There is little room to incorporate contemplation or the synthetic power of intelligence, which is self-consciously capable of asking questions related to the meaning and existence of life.52 I broach the discussion of intelligence while discussing consciousness since it is impossible to conceive of thinking (the hallmark of intelligence) without presupposing consciousness. But the prevalent mechanistic-functional approach prevents us from seeing the interconnectedness between all these realities that define human selfhood. Once intelligence is reduced to its analytic functions, there is little room left to see how it is contingent on a moral psychology or purification (tazkiyah) for its growth and perfection. Hence, the Islamic tradition distinguishes between universal and partial intelligence. And a complete theory of human intelligence describes the unfolding of intelligence from potentiality to actuality. It explains the transformation of intelligence from its lowest degree to the highest through a universal agency such as the Active Intellect (the agent intellect responsible for actualizing the potential of the human intellect) and ethical and spiritual lifestyles that shape the function of human intelligence. Human intelligence consists of reason, intuition, understanding, wisdom, moral conscience, and aesthetic judgment in addition to computation. However, in an AI-dominated world, “intelligence” implies only the analytic function of computation. Hence, for the proponents of Dataism, there is no fundamental difference between natural intelligence and artificial intelligence—which is to say we are nothing but a computer and its algorithms! Here this paradigm, which refuses to step outside of its functionalist, machine-oriented approach, reaches a dead end.
2022-12-01T00:00:00
https://renovatio.zaytuna.edu/article/ai-versus-human-consciousness
[ { "date": "2022/12/01", "position": 86, "query": "future of work AI" } ]
Strengthening the Bonds of Human and Machine ...
Strengthening the Bonds of Human and Machine Collaboration
https://www.brianheger.com
[]
The three challenges are: 1) AI makes work harder for humans, 2) AI-structured work may increase short-term productivity but can decrease autonomy and ...
Human-AI collaboration in the workplace is becoming increasingly commonplace in many organizations. Still, organizations need to determine how to optimize AI for worker interaction. This article explores how AI and humans tend to collaborate, what challenges humans face when collaborating, and how to foster positive relationships between humans and machines in the workforce. The article covers how firms are overcoming three challenges in human-AI interaction in the workplace. The three challenges are: 1) AI makes work harder for humans, 2) AI-structured work may increase short-term productivity but can decrease autonomy and engagement, and 3) collaborating with AI can increase loneliness, isolation, and questions of identity. Figure 1 shows nine interactions workers can have with AI/machines, ranging from a) people working with AI to supervise AI’s work (machines as subordinates), to b) people working with AI in a way that directs their work (machines as supervisors), to c) people working with AI in open-ended, highly iterative, and interactive ways over time in true partnership (machines as teammates). This nine-component framework can help organizations determine which types of human-AI interactions are optimal for certain types of work and work contexts.
2022-12-01T00:00:00
2022/12/01
https://www.brianheger.com/strengthening-the-bonds-of-human-and-machine-collaboration-deloitte-insights/
[ { "date": "2022/12/01", "position": 88, "query": "future of work AI" }, { "date": "2022/12/01", "position": 64, "query": "machine learning workforce" } ]
How AI innovation is powered by underpaid workers in ...
The heart of the internet
https://www.reddit.com
[]
Commercialized AI is often powered by low-paid workers in foreign countries who perform tasks such as labeling images and annotating objects in videos. · These ...
Subreddit dedicated to the news and discussions about the creation and use of technology and its surrounding issues.
2022-12-01T00:00:00
https://www.reddit.com/r/technology/comments/zx6jm5/how_ai_innovation_is_powered_by_underpaid_workers/
[ { "date": "2022/12/01", "position": 6, "query": "workplace AI adoption" }, { "date": "2022/12/01", "position": 2, "query": "AI workers" } ]
Impact of AI on Knowledge Work
Impact of AI on Knowledge Work
https://www.linkedin.com
[ "Murat Ülker", "Thomas Helfrich", "Ayse", "Eye-Shay" ]
In the world of work, AI is being used to streamline processes, improve efficiency, and reduce the workload of employees.
Introduction: Artificial intelligence (AI) use is increasing rapidly in the workplace, and many people are curious about how it will impact their jobs. With my semester ending, I recently used AI for the first time to improve a final paper I had already written, and the results were impressive. The AI, called ChatGPT, was able to generate high-quality content that built on and surpassed my own work. This got me thinking about the potential benefits and drawbacks of using AI in professional settings. AI has been improving drastically over the last few years. It's no secret that AI has come a long way in recent years. Advances in machine learning and natural language processing have made it possible for AI systems to generate high-quality content, automate tedious tasks, and even make decisions that were once thought to be beyond their capabilities. In the world of work, AI is being used to streamline processes, improve efficiency, and reduce the workload of employees. A lot of office work creates information that a model like ChatGPT can generate. Many jobs in the modern workplace involve creating and organizing information. This could be anything from writing reports and emails to managing databases and organizing schedules. A model like ChatGPT can generate high-quality content in a fraction of the time it would take a human to do the same task. This could save employees time and effort, allowing them to focus on more complex and rewarding activities. We need to find the things that humans are particularly good at. While AI can automate many routine tasks, there are still things that humans are better at than machines. For example, creative problem-solving, empathy, and strategic thinking are all uniquely human skills. In the workplace, it's essential to find ways to use AI to augment these skills rather than replace them. By focusing on the things that humans do best, we can create a more balanced and productive work environment. The best outcome is that we use the benefits of AI so that everyone can work less and enjoy their life. One of the potential benefits of using AI in the workplace is that it could allow people to work less. With AI automating routine tasks, employees could have more time to pursue their passions and interests outside work. This could lead to a more fulfilling and enjoyable work-life balance and ultimately make people happier and more productive. There is a risk that the increased use of AI creates more harm in society. While the use of AI in the workplace has many potential benefits, there is also a risk that it could create more harm than good. For example, the widespread adoption of AI could lead to job losses and economic inequality. Additionally, the increasing reliance on machines could erode critical human skills and lead to a less diverse and adaptable workforce. It's essential to carefully consider the potential consequences of using AI in the workplace and take steps to mitigate any possible adverse effects. What do you think?
2022-12-01T00:00:00
https://www.linkedin.com/pulse/impact-ai-knowledge-work-adrian-obleton
[ { "date": "2022/12/01", "position": 22, "query": "workplace AI adoption" } ]
Artificial Intelligence (AI) and Information Systems
Artificial Intelligence (AI) and Information Systems: Perspectives to Responsible AI
https://link.springer.com
[ "Dennehy", "Denis.Dennehy Swansea.Ac.Uk", "Digital Futures For Sustainable Business", "Society Research Group", "School Of Management", "Swansea University", "Wales", "Griva", "J.E. Cairnes School Of Business", "Economics" ]
by D Dennehy · 2023 · Cited by 83 — The findings show that organisational knowledge can be augmented through a recursive process enabled by the design and implementation of human-in-the-loop IAI.
In this special issue, we were particularly interested in theory-building studies and empirically grounded theorising related to AI as a technology for an ethical and inclusive society. Following a rigorous review process consisting of a minimum of two and a maximum of four rounds of review, nine articles were selected to be included in this special issue. Each of the selected articles brings a distinct perspective to the emerging IS discourse on AI governance, ethics, and society. Collectively, the articles advance understanding of the socio-technical aspects of AI and its implications for society. The remainder of this editorial briefly describes the contributions that each of the selected articles made to advancing knowledge on AI for an ethical and inclusive society. Niederman & Baker (2023) provide a reflective perspective on how ethical issues related to AI differ from other technologies. Specifically, they differentiate AI ethics issues from concerns raised by all IS applications by presenting three distinct categories through which AI ethics issues can be viewed. One can view AI as an IS application like any other. They examine this category of AI applications focusing primarily on Mason’s (1986) PAPA framework, comprising privacy, accuracy, property, and accessibility, as a way to position AI ethics within the IS domain. One can also view AI as adding a generative capacity to produce outputs that cannot be pre-determined from inputs and code. They examine this by adding ‘inference’ to the informational pyramid and exploring its implications. AI can also be viewed as a basis for re-examining questions about the nature of mental phenomena such as reasoning and imagination. At this time, AI-based systems seem far from replicating or replacing human capabilities. However, if and when such abilities emerge as computing machinery continues growing in capacity and capability, it will be helpful to have anticipated the ethical issues that arise and to have developed plans for avoiding, detecting, and resolving them to the extent possible. Dattathrani & De (2023) make a strong argument that, with the new generation of technologies such as AI, the notion of agency needs to differentiate the actions of AI from those of traditional information systems and humans. Indeed, human and material agency have been investigated in the IS literature to understand how technology and humans influence each other. Some framings of agency, however, treat humans and technology symmetrically, some privilege the agency of humans over technology, and others do not attribute agency to either humans or non-humans. The authors introduce the dimensions of agency to differentiate agencies while not privileging any actor. They illustrate the application of these dimensions by using them as a lens to study the case of a technician using an AI solution for screening patients for early-stage breast cancer. Through the use of the dimensions of agency, they illustrate how the influence of AI over human practice, such as screening for early-stage breast cancer, is greater than the influence of traditional technology. Their study makes contributions to the theory of agency and concludes with a discussion of potential practical applications of the framework. Harfouche et al. (2023) highlight that despite the hype surrounding AI, there is a paucity of research that focuses on the potential role of AI in enriching and augmenting organisational knowledge. 
The authors develop a recursive theory of knowledge augmentation in organisations (the KAM model) based on a synthesis of extant literature and a four-year revised canonical action research project. The project aimed to design and implement a human-centric AI (called Project) to solve the lack of integration of tacit and explicit knowledge in a scientific research centre (SRC). To explore the patterns of knowledge augmentation in organisations, this study extends Nonaka’s knowledge management model, which includes socialisation, externalisation, combination, and internalisation, by incorporating the human-in-the-loop Informed Artificial Intelligence (IAI) approach. Their proposed design offers the possibility to integrate experts’ intuition and domain knowledge in AI in an explainable way. The findings show that organisational knowledge can be augmented through a recursive process enabled by the design and implementation of human-in-the-loop IAI. The study has important implications for both research and practice. Koniakou (2023) engages in the discourse of AI governance from three angles grounded in international human rights law, namely, Law and Technology, Science and Technology Studies (STS), and theories of technology. The author posits that by focusing on the shift from ethics to governance, the study offers a bird’s-eye view of the developments in AI governance, focusing on the comparison between ethical principles and binding rules for the governance of AI, and critically reviewing the latest regulatory developments. Further, by focusing on the role of human rights, it takes a step further the argument that human rights offer a more robust and effective framework, arguing that human rights obligations must also apply directly to private actors in the context of AI governance. This study offers insights for AI governance by borrowing from the history of Internet governance and the broader technology governance field. Minkkinen et al. (2023) address a gap in knowledge related to governing AI, which requires cooperation, although the form of that collaboration remains unclear. Technological frames provide a theoretical perspective for understanding how actors interpret technology and act upon its development, use, and governance. However, there is limited knowledge about how actors shape technological frames. The authors examine the shaping of the technological frame of the European ecosystem for responsible AI (RAI). Through an analysis of EU documents, they identified four expectations that constitute the EU’s technological frame for the RAI ecosystem. Moreover, through interviews with RAI actors, they revealed five types of expectation work responding to this frame: reproducing, translating, and extending (congruent expectation work), and scrutinising and rooting (incongruent expectation work). The authors conceptualise expectation work as actors’ purposive actions in creating and negotiating expectations. Their study contributes to the literature on technological frames, technology-centred ecosystems, and RAI while also elucidating the dimensions and co-shaping of technological frames. Papagiannidis et al. (2023) highlight that despite the use of AI, companies still face challenges and cannot quickly realise performance gains. In addition, firms need to introduce robust AI systems and minimise AI risks, which places a strong emphasis on establishing appropriate AI governance practices. 
In this paper, the authors build on a comparative case analysis of three companies from the energy sector and examine how AI governance is implemented to facilitate the development of robust AI applications that do not introduce negative effects. The study illustrates which practices are put in place to produce knowledge that assists decision-making, while overcoming challenges through recommended actions that lead to desired outcomes. The study contributes by exploring the main dimensions relevant to AI’s governance in organisations and uncovering the practices that underpin them. Polyviou & Zamani (2023) acknowledge that AI promises to redefine and disrupt several sectors. At the same time, AI poses challenges for policymakers and decision-makers, particularly regarding formulating strategies and regulations to address their stakeholders’ needs and perceptions. This paper explores stakeholder perceptions as expressed through their participation in the formulation of Europe’s AI strategy and sheds light on the challenges of AI in Europe and the expectations for the future. The findings reveal six dimensions of an AI strategy: ecosystems; education; liability; data availability, sufficiency, and protection; governance; and autonomy. It draws on these dimensions to construct a desires-realities framework for AI strategy in Europe and to provide a research agenda for addressing existing realities. Their study advances the understanding of stakeholder desires regarding AI and holds important implications for research, practice, and policymaking. Another interesting yet theoretically underdeveloped application of AI is the use of AI-powered chatbots in the context of education and the experiences of students who use them. Chen et al. (2023) make the case that chatbots are increasingly used in various scenarios such as customer service, work productivity, and healthcare, and might be one way of helping instructors better meet student needs. However, few empirical studies in the field of IS have investigated pedagogical chatbot efficacy in higher education, and fewer still discuss their potential challenges and drawbacks. In this research, the authors address this gap in the IS literature by exploring the opportunities, challenges, efficacy, and ethical concerns of using chatbots as pedagogical tools in business education. In this two-study project, they conducted a chatbot-guided interview with 215 undergraduate students to understand student attitudes regarding the potential benefits and challenges of using chatbots as intelligent student assistants. The findings of this study reveal the potential for chatbots to help students learn basic content in a responsive, interactive, and confidential way. The findings also provided insights into student learning needs, which the authors then used to design and develop a new, experimental chatbot assistant to teach basic AI concepts to 195 students. Results of this second study suggest chatbots can be engaging and responsive conversational learning tools for teaching basic concepts and for providing educational resources. The authors discuss promising opportunities and ethical implications of using chatbots to support inclusive learning. Despite the concerns raised by scholars and practitioners about AI, the pervasiveness of social recommender systems (SRSs) in e-commerce platforms highlights a trend of consumers being willing to delegate their decisions to algorithms (Schneider & Leyer, 2019). 
SRSs are increasingly embedded in e-commerce ecosystems due to their ability to reduce consumers’ decision time and effort by filtering out excess information and providing personalised recommendations (Tsai & Brusilovsky, 2021). As previous studies have largely focused on the technical aspects of recommendation systems, there is limited understanding of the nature of the social information that improves recommendation performance (Shokeen & Rana, 2020). Bawack & Bonhoure (2023) investigate this phenomenon to identify the behavioural factors that influence consumers’ intention to purchase products or brands recommended by SRSs. The authors adopt a meta-analytic research approach to conduct an aggregative literature review that uses quantitative methods to test specific research hypotheses based on prior empirical findings. Through the analysis of 72 articles, the authors identify 52 independent variables, which are organised into 12 categories. From this analysis, the authors propose a theoretical model of the behavioural factors that affect consumers’ intentions to purchase products recommended by SRSs. As the study has important implications for research, the authors provide an agenda for future research that could advance theory-building efforts and theory-driven designs in SRS research and practice. Each of the articles in this special issue, as well as other recent studies (e.g., Akter et al., 2021; Bankins et al., 2022; Gupta et al., 2022; Shneiderman, 2021), has advanced knowledge on the ethical issues and governance of AI. Despite these important contributions, much remains to be learned about how to use AI for social good (Ashok et al., 2022; Coombs et al., 2021; Dwivedi et al., 2021; Kumar et al., 2021; Fossa Wamba et al., 2021). To this end, we make a call for future research. First, there is a need for a concerted effort within and between academic disciplines (e.g., IS, arts, engineering), policymakers, governments, and the wider society to discover innovative ways to use AI to achieve the sustainable development goals (SDGs). Second, while significant attention has been given to understanding the application of AI in a variety of contexts, there is limited discourse about how to use AI for future-oriented inquiry, whereby IS researchers can explore future scenarios through immersive virtual experiences to better understand how to design resilient IS and incorporate these insights in future-oriented inquiry (Brooks & Saveri, 2017; Chiasson et al., 2018). Third, future scholarship on AI governance could investigate the auditing of AI systems (Minkkinen et al., 2022b) as a mechanism to foster transparency, accountability, and trust. We hope that this special issue provides scholars with a foundation in which integrity and rigour in scientific research promote high-quality IS, and ethical principles translate into professional and organisational practice (Calzarossa et al., 2010; Mäntymäki et al., 2022a).
2023-02-14T00:00:00
2023/02/14
https://link.springer.com/article/10.1007/s10796-022-10365-3
[ { "date": "2022/12/01", "position": 36, "query": "workplace AI adoption" }, { "date": "2022/12/01", "position": 36, "query": "AI economic disruption" } ]
The difference between MLOps & AI Governance and why ...
The difference between MLOps & AI Governance and why you need both to deliver on Responsible AI
https://www.credo.ai
[]
AI and ML systems are no longer novel technologies in many industries. With increasing adoption, AI/ML development practices and tooling have begun to mature—in fact, an entire ecosystem of open source and proprietary tools has sprung up to support AI/ML developers, dubbed MLOps or Machine Learning Operations software. MLOps practices increase the efficiency, reliability, and general maturity of the ML development process, much like DevOps practices and tooling did for software development. MLOps, however, is narrowly focused on the technical aspects of AI/ML system development and operation. Many of the challenges in the AI space are now seen as fundamentally socio-technical and therefore require integrated, cross-functional solutions. In the face of these challenges, a new field of practice has emerged: AI Governance. AI Governance brings together a diverse range of stakeholders to make decisions about how AI systems should be developed and used responsibly to ensure system alignment with business, regulatory, and ethical requirements. As someone who has worked in both MLOps and AI Governance, I am thrilled to see that these fields are rapidly growing, as they are both crucial to the future success of AI systems. However, they are also incredibly different from one another and are, more often than not, misunderstood in terms of how and where they intersect and diverge. At Credo AI, we believe that AI Governance is the missing—and often forgotten—link between MLOps and AI’s success in meeting business objectives. In this blog post, we’ll start by defining MLOps and AI Governance, how they differ, and why both are needed for the successful realization of AI/ML projects. Let’s take a closer look at MLOps and AI Governance with respect to scope of work, stakeholder involvement, and development lifecycle. MLOps vs. AI Governance—what’s the difference? MLOps: Develop & Deploy ML Models Reliably Machine Learning Operations, or MLOps, is the set of processes and tools that allow an organization to take an ML model from an idea through development into production. MLOps is focused on solving the technical challenges of ML development and deployment. The primary users and drivers of MLOps are the data scientists, data engineers, ML engineers, and program managers responsible for building, deploying, and maintaining AI systems within an organization. MLOps sits at the intersection of engineering, data science, and DevOps. Overall, MLOps is critical to AI/ML development for the same reasons that DevOps is critical to traditional software development; without a set of processes and practices in place to ensure that the ML development lifecycle is repeatable and smooth, AI/ML development cannot scale. The proliferation of MLOps tools and expertise that we have seen recently is a critical step towards AI maturity—MLOps is helping AI to get “out of the lab” and into production at scale. The standard MLOps lifecycle looks something like this: Data labeling → data preparation → model training & retraining (experimentation) → model validation → model versioning → model deployment → ongoing monitoring. It is important to note that MLOps activities begin after the decision to build an AI system has been made, and after many decisions have already been made about what the system should do and what the goals or KPIs of the system should be. 
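To make the lifecycle above concrete, here is a minimal sketch of its core stages in Python, using scikit-learn as a stand-in for production tooling; the function names, toy dataset, and accuracy gate are illustrative assumptions, not any particular MLOps product:

```python
# Minimal sketch of the MLOps lifecycle stages listed above; scikit-learn
# stands in for production tooling, and the synthetic dataset replaces a
# real feature store.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

def prepare_data():
    # Data labeling + preparation: here a synthetic labeled dataset stands in.
    X, y = make_classification(n_samples=1_000, n_features=10, random_state=0)
    return train_test_split(X, y, test_size=0.2, random_state=0)

def train(X_train, y_train):
    # Model training & retraining (experimentation) stage.
    return RandomForestClassifier(random_state=0).fit(X_train, y_train)

def validate(model, X_test, y_test, threshold=0.8):
    # Model validation gate: block deployment below an agreed metric.
    acc = accuracy_score(y_test, model.predict(X_test))
    if acc < threshold:
        raise RuntimeError(f"Validation failed: accuracy {acc:.2f} < {threshold}")
    return acc

X_train, X_test, y_train, y_test = prepare_data()
model = train(X_train, y_train)
print(f"Validated accuracy: {validate(model, X_test, y_test):.2f}")
# Versioning, deployment, and ongoing monitoring would follow in real tooling.
```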
MLOps isn’t concerned with the question, “Should we build this?,” but instead is focused on the question, “How can we build this efficiently, reliably, and at scale?” AI Governance: Measure, Manage, and Mitigate AI Risk AI Governance is the set of processes, policies, and tools that bring together diverse stakeholders across data science, engineering, compliance, legal, and business teams to ensure that AI systems are built, deployed, used, and managed to maximize benefits and prevent harm. AI Governance allows organizations to align their AI systems with business, legal, and ethical requirements throughout every stage of the ML lifecycle. (For a more thorough definition, please refer to our blog post: What Is AI Governance?) Data scientists, machine learning engineers, product managers, regulators, lawyers, AI ethicists, business analysts, CTOs, marketers, end users, and impacted communities are all critical participants in AI governance activities and initiatives. The questions that need to be answered during AI governance are inherently interdisciplinary. For example, “How might societal biases show up in this AI system, and what must we do to limit this risk?” is a question that touches on sociology (to understand relevant societal biases based on the use case context), law (to understand the legal or regulatory requirements for fairness of the system or the restrictions on data that can be used to assess and mitigate bias), machine learning (to understand how to measure and mitigate unintended bias), and business (to evaluate any trade-offs associated with mitigation). At the highest level, AI governance can be broken down into four components—four distinct steps that make up a process that is both linear and iterative: Alignment: identifying and articulating the goals of the AI system. Assessment: evaluating the AI system against the aligned goals. Translation: turning the outputs of assessment into meaningful insights. Mitigation: taking action to prevent failure. Each step of the governance process requires input based on technical and non-technical expertise from various domains to ensure that all the possible risks and challenges of AI systems are addressed and mitigated. Overall, effective AI governance programs are laser-focused on empowering practitioners with the insights they need to proactively mitigate risk before a catastrophic failure occurs, not after—a vital undertaking reinforced by Dr. Margaret Mitchell, Researcher & Chief Ethics Scientist at Hugging Face, at Credo AI’s 2022 Global Responsible AI Summit: “If we don't do due diligence at the start of our products and our AI projects, then we have to correct for issues that emerge afterward.” AI Governance is concerned with answering the critical questions of “What should we build,” “What are the risks associated with building it,” and “How can we effectively mitigate and manage those risks.” (To take a deeper look at what happens during each of the four steps of AI Governance, refer to our blog post: How to “do” AI Governance?)
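As one concrete illustration of the "Assessment" step named above, a parity-of-outcomes check against a governance-set threshold might look like the following minimal sketch; the metric choice, names, and the 0.10 threshold are illustrative assumptions, not Credo AI's product or API:

```python
# Toy sketch of a governance-driven bias assessment: checking parity of
# outcomes (demographic parity) across a protected attribute. The data,
# names, and threshold are invented for the example.
import numpy as np

def demographic_parity_gap(y_pred, group):
    """Absolute difference in positive-prediction rates between groups."""
    rates = [y_pred[group == g].mean() for g in np.unique(group)]
    return max(rates) - min(rates)

y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0])                  # model decisions
group = np.array(["a", "a", "a", "a", "b", "b", "b", "b"])   # protected attribute

gap = demographic_parity_gap(y_pred, group)
print(f"Demographic parity gap: {gap:.2f}")
if gap > 0.10:  # threshold set by the governance process, not by engineers alone
    print("Flag for mitigation review")
```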
MLOps gives technical stakeholders critical visibility into AI system behavior—so they can track and manage things like model performance, explainability, robustness, and model bias (performance parity & parity of outcomes) throughout the ML lifecycle. MLOps helps technical stakeholders ensure that a model constantly optimizes its objective function, the technical goal it was designed to achieve. AI Governance gives a diverse range of stakeholders critical visibility into the risks associated with an AI system—so they can track and manage legal, compliance, business, or ethical issues throughout the ML lifecycle. AI Governance helps a diverse range of stakeholders participate in defining the right objective function for an AI system based on the sociotechnical context in which the system will operate, and it ensures that all stakeholders have visibility into whether the system meets context-driven requirements, building trust in the system at every stage of the lifecycle. These two different types of visibility are essential to building effective AI systems at scale—and when combined, they allow an organization to deploy AI systems confidently, with the knowledge that any relevant risks, technical and sociotechnical, have been thoroughly accounted for and effectively mitigated. How to bring MLOps & AI Governance together. MLOps provides critical inputs to AI Governance through its infrastructure and tools. Effective governance requires visibility into AI system behavior. Without MLOps tooling, getting your AI governance team the insights they need to understand and mitigate AI system risk is challenging at best. Having MLOps infrastructure and processes makes the technical evaluation of AI systems against governance-driven requirements much easier and more scalable. For example, assessing your AI system for bias requires making protected attribute data available to your technical team in the testing or validation environment; without proper data infrastructure and a solid MLOps pipeline, your team will struggle to conduct the critical technical assessments required for governance at scale. Another example: for the “Assessment” results in AI Governance to be trustworthy and auditable, it is critical to maintain a consistent “system of record” (to track model versions, dataset versions, and the actions taken by the builders of the system). How could you achieve this? With MLOps, whose infrastructure provides this “system of record,” a critical input for your governance tooling. As may be clear by now, AI Governance and the four steps of governing AI can only happen at scale with the support of MLOps, and the same can be said the other way around. AI Governance provides critical inputs to MLOps. MLOps tooling makes it easier for technical teams to address technical issues with your models and AI systems, whether it’s adding new data to your training dataset to make your model robust against adversarial attacks or retraining your model using different weights to reduce unintended bias. But without clear guidance on what level of adversarial robustness is required by the system or what kinds of discrimination to test for and mitigate, technical teams won’t know how to use MLOps tooling in a way that effectively mitigates relevant AI risks. For example: What constitutes an issue? How should this issue be addressed? What is the threshold for considering an issue “serious enough” that it requires urgent attention from engineering resources? 
Questions like these cannot be answered by technical stakeholders alone—they need guidance and input from the diverse stakeholders involved in AI governance. AI Governance provides technical teams with critical inputs to the technical decisions and actions they take during the MLOps lifecycle to ensure that the AI systems they build and deploy meet legal, ethical, and business requirements. With complete clarity and alignment on the principles and metrics that need to be checked, observed, or monitored, technical teams can perform their jobs faster and with less error. You can ensure that your technical teams have the AI governance insights they need to make the best MLOps decisions by using requirements from regulatory, legal, and ethical guidelines to set thresholds and adjust your objective function during development; connecting your AI Governance platform with the monitoring or alerting system hooked up to your deployment pipeline; and convening diverse stakeholders to make decisions about technical risk mitigation techniques and trade-offs. Governance becomes an enabler of AI innovation and helps build better AI applications. In Conclusion If you want to make sure your AI is safe, reliable, compliant, and maximized for benefit, you need both AI Governance and MLOps. Without MLOps, AI Governance isn’t going to be easily automated or streamlined—and the trustworthiness and auditability of technical insights into model and system behavior will be lacking. Without AI Governance, MLOps is disconnected from the risks that impact your business the most—legal, financial, and brand risk—and your technical teams don’t see the big picture when it comes to detecting and fixing technically driven AI risk to build better products. If you’re currently building out your MLOps infrastructure and processes, consider adding a Governance layer to your stack. And if you’re building up your Governance layer but don’t have a strong foundation of MLOps infrastructure, you may want to consider investing in MLOps tooling and skills. If you need help, reach out to us at [email protected]
2022-12-01T00:00:00
https://www.credo.ai/blog/better-together-the-difference-between-mlops-ai-governance-and-why-you-need-both-to-deliver-on-responsible-ai
[ { "date": "2022/12/01", "position": 44, "query": "workplace AI adoption" } ]
Artificial Intelligence (AI) in the Contact Center
Artificial Intelligence in the Contact Center
https://www.icmi.com
[]
Our exclusive toolkit provides user-friendly templates with guided exercises that transition seamlessly from classroom to workplace, ensuring you can ...
Master AI implementation to revolutionize your contact center and drive measurable business transformation. This comprehensive 2-day AI certification program equips strategic decision-makers, WFM leaders, contact center leads, and transformation team members with the expertise to champion AI initiatives within contact centers through collaborative learning and hands-on application. Through dynamic peer discussions and immediate practical application, you'll master essential AI competencies while gaining access to an extensive online resource toolkit featuring strategic vision development frameworks, implementation planning templates, and performance measurement guides. Our exclusive toolkit provides user-friendly templates with guided exercises that transition seamlessly from classroom to workplace, ensuring you can immediately apply your learning to craft AI strategies aligned with your contact center objectives. Whether enhancing an existing AI strategy or building one from scratch, you'll leave with transformed understanding, industry best practices, and a structured roadmap that boosts your career prospects while delivering measurable results. Certification The ICMI Artificial Intelligence in the Contact Center Certification empowers professionals to harness the potential of AI technologies within customer support environments. This comprehensive course prepares students to understand the integration of AI tools, automation, and machine learning to elevate service delivery, streamline workflows, and enhance the customer experience. By the end of the course, participants will be ready to complete the exam and earn the prestigious AI in the Contact Center Professional credential.
2022-12-01T00:00:00
https://www.icmi.com/training/courses/artificial-intelligence-in-the-contact-center
[ { "date": "2022/12/01", "position": 57, "query": "workplace AI adoption" }, { "date": "2022/12/01", "position": 60, "query": "machine learning workforce" }, { "date": "2022/12/01", "position": 54, "query": "artificial intelligence business leaders" } ]
The future of AI in central banking and financial services
Artificial intelligence (AI) from a regulator’s perspective: The future of AI in central banking and financial services
https://hstalks.com
[ "Lopez-Corleone", "Begum", "Sixuan Li", "Melvin Lopez-Corleone", "Sholthana Begum", "Gracie Sixuan Li" ]
by M Lopez-Corleone · 2022 · Cited by 6 — Melvin Lopez-Corleone, Sholthana Begum and Gracie Sixuan Li. Journal of AI, Robotics & Workplace Automation, 2 (1), 7-16 (2022) ...
Practice paper Melvin Lopez-Corleone, Sholthana Begum and Gracie Sixuan Li Journal of AI, Robotics & Workplace Automation, 2 (1), 7-16 (2022) Artificial intelligence (AI) is unlocking enormous opportunities. For central banks, AI has the potential to enhance regulatory efficiency and improve the data basis for monetary policy decisions. Machine learning (ML) can provide comprehensive, instant, granular information to complement existing macroeconomic indicators, as well as the capability to analyse big data efficiently, which can facilitate monetary policy decisions. As countries and companies conduct AI research and deploy the technology to the public, several financial authorities have recently begun developing frameworks, outlining their expectations on AI governance and use by financial institutions. This paper illustrates the current advancements in ML techniques and highlights the future trends in the adoption of AI by central banks and companies in financial services. It looks at the use of cloud computing and ML by companies and regulators to develop cost-efficient automation tools that better understand user needs, and presents how this will likely enable companies to adapt to rising trends in customer expectations in the future. The paper also explores the growing use of AI in anti-money laundering (AML) procedures, blockchain technology, and the development of Central Bank Digital Currencies (CBDC). Keywords: artificial intelligence (AI); CBDC; central banking; regulation; fintech; diversity and inclusion (DEI); COVID-19; machine learning (ML); data; Bank of England (BoE) Author's Biography Melvin Lopez-Corleone is a Senior Technical Specialist at the Bank of England. He leads the external engagement stream within the Innovation team. He actively collaborates with central banks at a global level and works with leading tech companies, universities and private/public sector organisations. In summer 2021, he was seconded to the Bank's G7 Hub to support the UK's G7 presidency. As of 2022, Melvin is a visiting speaker on RegTech and Innovation at Cambridge University. Sholthana Begum is Head of Innovation and Regulatory Technology (RegTech) at the Bank of England. She has played diverse roles as a senior risk specialist, markets practitioner and accountant specialising in financial regulation and central banking. Sholthana is an experienced technologist, influencer and innovative leader. Her influence is substantial, and she was named in the 2021 DataIQ power list of the most influential people in Data. Sholthana is recognised as a thought leader and considered a pioneer of RegTech at the Prudential Regulation Authority (PRA); her work is actively paving the way forward in advancing the Future of Finance. Sholthana's counsel and advice are often sought by other central banks and private bodies across the globe. Examples of this include being invited to high-profile summits, such as the G20 in Rome, and being an active guest lecturer at Cambridge University. Her achievements in technology and diversity have been widely recognised, most notably through her acceptance of the Tech Forward award in the 2019 Central Banking Awards, hosted in Singapore, and securing the 2022 Gender Equality and Inclusion Award. Gracie Sixuan Li joined the RegTech, Data and Innovation division as a winner of the Bank of England Innovation Competition. She holds a bachelor's degree in international relations from King's College London. 
Her research interests include artificial intelligence and the development of fintech innovation. As the Event Director of a Centre for Central Bank Studies conference, Gracie has experience in directing large-scale international summits that shared regtech insights between international central banks and regulatory institutions.
2022-12-01T00:00:00
https://hstalks.com/article/7357/
[ { "date": "2022/12/01", "position": 60, "query": "workplace AI adoption" } ]
Artificial Intelligence in Health Professions Education
Archives of Medicine and Health Sciences
https://journals.lww.com
[ "Shankar", "P. Ravi", "Author Information" ]
by PR Shankar · 2022 · Cited by 33 — Workplace-based learning is experiential, social, situated, and practice-bound.[28] Data-informed automated feedback can provoke reflection leading to a ...
INTRODUCTION Artificial intelligence (AI) implies the use of a computer to model intelligent behavior with minimal human intervention.[1] AI in medicine can be divided into virtual and physical branches. The virtual component is characterized by mathematical algorithms that learn through experience. The physical branch of AI in medicine is characterized by physical objects, medical devices, and robots taking part in the delivery of health care. This article will focus on the virtual branch and explore AI in health professions education (HPE). ARTIFICIAL INTELLIGENCE IN MEDICAL CARE Two terms often mentioned in relation to AI are machine learning (ML) and deep learning (DL). In ML, a system learns from previously available data, and the learning becomes stronger as more data are provided.[2] DL is structured like the human brain, with the capability of interpreting data at various levels and working on different datasets simultaneously until the output is produced.[3] The ability of the system to learn by itself is an important characteristic. AI is being increasingly used in medicine and is being applied in dermatology, pathology, ophthalmology, and radiology, among other disciplines.[4] AI can strengthen health systems by improving workflow, has the potential to reduce medical errors, and may help patients process their own data to improve their health.[5] AI has the potential to provide low-cost solutions to health problems and will be especially important for low- and middle-income countries.[6] AI is also being applied in cardiology, pulmonary medicine, endocrinology, nephrology, gastroenterology, neurology, and other specialties.[7] HEALTH-CARE PROFESSIONALS AND ARTIFICIAL INTELLIGENCE Health-care professionals (HCPs) will play an important role in providing data for educating AI systems and customizing AI for specific locations and situations. HCPs will play an important role in clinical trials to validate new AI systems and in addressing ethical considerations arising from the widespread use of AI in medicine.[7] Education in AI is still not common in health professional schools. An article published in 2021 mentions that there are few concrete plans for incorporating AI into the medical curriculum.[8] AI requires collaboration among several professionals, including health-care providers, data scientists, computer engineers, and others, and such collaboration is still not very common. Medical students should be aware of the fundamental concepts of AI and how it can reduce expenses and improve the quality of and access to care.[9] Changes brought about by AI will affect the various roles of a physician.[10] AI has been introduced as a triad element in patient–doctor communication. Communication between the doctor and the patient should also factor in the AI system. Regarding the collaborator role, physicians should understand that the introduction of AI can empower other HCPs and patients. This may lead to a lessening or transformation of the leadership role doctors play in health-care teams. Furthermore, professions such as data specialists and computer scientists will play an important role in health care. As leaders and health advocates, doctors should work toward transparent and accountable implementation of AI technologies and work with disadvantaged groups to ensure that the adoption of AI is an empowered choice. There will be a change in professional identity, and physicians will have to incorporate data science, information science, and engineering tools into their skill set. 
Learners must be exposed to big data in the context of decision-making, and they must understand the four Vs of big data. These are volume, variety, velocity, and veracity. Big data are constantly being generated in large amounts from a variety of sources at a high speed, and the consistency, accuracy, and trustworthiness of data must be established. Students should also understand how data are aggregated, analyzed, and personalized in health-care delivery using AI applications. WHAT WILL BE ADDRESSED IN THIS ARTICLE This article will not focus on how AI should or will be taught to health science students or the areas that should be addressed. We will also not focus further on AI's various uses and potential uses in health care. The focus will be on how AI technologies can be used to support student learning in the health professions. The provision of individualized feedback, creating individual learning pathways, and supporting student learning and engagement will be the focus areas. Medical colleges and health science colleges of the future will be transformative rather than function as information providers, and AI and ML will strongly influence the learning of students.[11] ROLE OF ARTIFICIAL INTELLIGENCE IN SUPPORTING STUDENT LEARNING AI can reduce the burden on both students and teachers and can offer students effective learning experiences.[12] AI will play an important role in creating individualized learning experiences and in the gamification of learning.[13] Intelligent tutoring systems (ITSs) are an important application of AI in education and HPE.[14] An ITS can provide personalized/individualized learning experiences in four ways. These are monitoring input provided by the student, providing appropriate tasks, providing feedback, and applying interfaces for human–computer interactions. In education, there is a growing realization that with the advent of AI the roles of teachers may need to be adjusted.[15] Teachers will have to collaborate and work together with AI systems to advance student learning. Collaborating with AI will also be an important skill for HCPs. PROVIDING FEEDBACK TO LEARNERS Knowledge may need to be reorganized and represented according to the learner's reactions and understanding. Feedback is important to develop learners' proximal learning patterns.[16] An artificial neural network is used to provide feedback to learners so that they can gradually get used to abstract concepts and be able to perform practical exercises. Students can learn from the feedback provided when they are in error.[17] Intelligent algorithms provide automatic feedback to students. Feedback is provided in a timely manner and is specific to the student and the student's response. An intelligent virtual laboratory has been developed to provide appropriate feedback to students who face difficulties with completing their practical activities. A learning website, jutge.org, was developed to provide feedback to learners and help students learn from their mistakes and progressively solve problems. INTELLIGENT TUTORING SYSTEMS Visualization can help make complex topics interesting and entertaining and improve learners' motivation, as in game-based learning.[18] Extended/augmented/virtual reality can provide a highly simulated learning context that may be challenging to achieve in real life. 
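As a toy illustration of the automated, response-specific feedback described above (far simpler than the neural-network and jutge.org systems cited), a rule-based checker might look like the following sketch; the question and feedback rules are invented for the example:

```python
# Toy sketch of automated, response-specific feedback. The question bank and
# feedback rules are illustrative, not the systems cited in the article.
QUESTION = "Which of the four Vs of big data refers to the trustworthiness of data?"
CORRECT = "veracity"
FEEDBACK = {
    "volume": "Volume refers to the amount of data generated, not its trustworthiness.",
    "variety": "Variety refers to the range of data sources and formats.",
    "velocity": "Velocity refers to the speed at which data are generated.",
}

def give_feedback(answer: str) -> str:
    answer = answer.strip().lower()
    if answer == CORRECT:
        return "Correct: veracity concerns the consistency, accuracy, and trustworthiness of data."
    # Feedback targets the student's specific error rather than being generic.
    return FEEDBACK.get(answer, "Not quite; review the four Vs: volume, variety, velocity, veracity.")

print(give_feedback("velocity"))
```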
The analysis of human emotions and feelings captured by sensors and affective algorithms, known as affective computing, can enhance human–computer interaction, and an ITS was modified to incorporate the detection of students' emotional status so as to also provide them with timely emotional feedback.[19] Students assume various roles during role-plays and ponder various problems, with the affective domain also activated. Role-plays with an intelligent artificial agent were used to enhance learners' investment in their interaction with computers. Students' sense of responsibility was also activated, and they may work harder for their intelligent agent than they would for themselves.[20] PERSONALIZED LEARNING Educational games are a powerful learning method and are successful if the educational design, domain knowledge, gameplay, and affective elements are integrated. AI can help in the integration of the game and the knowledge elements, and help the game adapt in a dynamic manner to the players'/learners' emotional responses and actions.[21] Educational games can take place in the setting of augmented, virtual, or extended reality, and complex scenarios difficult to recreate in the real world can be presented. In personalized learning, learning objectives, instructional approaches, and content can vary depending on the individual learner's needs. In personalized adaptive learning, the learners' data are continuously fed to the system, followed by continuous assessments and real-time feedback.[22] Most systems also incorporate a dashboard to help learners better evaluate their progress and identify potential obstacles.[23] AI can support personalized learning through intelligent agents, autonomous scoring and assessment, and chatbots. We will discuss chatbots in greater detail shortly. AI compiles various kinds of information, including students' prior learning and academic achievement, and can help map out an individualized learning pathway in an economical manner. LEARNING ANALYTICS Personalizing learning material improves the motivation of the learner and learning outcomes. Adaptive learning systems provide learners with the resources and tools to attain mastery at their own speed.[24] Instructors are provided with information about learners' progress in a continuous manner. Learning analytics (LA) uses data gathered from learners, which are analyzed to better understand and optimize the learning process. Data are automatically collected from learners' interactions with learning technologies, and this can be used to predict specific learning needs.[25] CHATBOTS AI chatbots are being increasingly used in HPE. A chatbot is a machine conversation system that interacts with human users through natural, conversational language. Multiple applications for chatbots in education have been highlighted, including quizzing learners about their existing knowledge, creating higher student engagement with a learning task, and mentoring students toward success.[26] Metacognition chatbots help learners better understand their own learning styles and help them reflect on their coursework. Chatbots can provide a direct response to learners at any time of the day and can support health professions students 24 × 7 across geographically distributed locations. Repetitive learner questions can be dealt with effectively by chatbots, and faculty may focus on questions requiring greater thought and input. 
Chatbots have been used to practice night calls, interprofessional education consults, and lab interpretations, and to learn advanced anatomical concepts. A study skills chatbot provides learners with evidence-based studying strategies to improve their understanding of the content. A faculty development chatbot provides faculty members with just-in-time evidence-based answers to faculty development issues. Most teaching practices can be effectively offered in short bursts demonstrating the skills involved.[27] Chatbots can advance interactions with virtual patients to a true conversation rather than computer-based text interactions and may better develop learner competencies. LEARNING ANALYTICS IN WORKPLACE-BASED LEARNING LA has been used more in traditional formal education formats; its use in workplace-based learning has been more limited. Workplace-based learning is experiential, social, situated, and practice-bound.[28] Data-informed automated feedback can provoke reflection leading to a reframing of oneself as part of professional growth. Professional learners must chart their learning needs and align these with the needs of the workplace. As the workplace needs change, they must reframe their learning needs. LA must address challenges regarding how these needs can be scaffolded, supported, and enhanced in the workplace. As AI technologies advance, LA can be offered in more complex, nontraditional workplace settings. Skills analytics, reflective writing analytics, and dispositional analytics can be helpful. Analytics tools can help professionals reflect on where they are on the career journey and what skills they should aspire to acquire. Reflective writing analytics helps with making sense of challenging experiences, changing as a professional, and handling professional dilemmas better next time. Dispositional analytics focuses on professional purpose, how to learn better, and how to transfer formal education and training to the job. SERIOUS GAMES AND SIMULATIONS Many simulation approaches have been developed to educate students to deliver health care safely.[29] Serious games and simulations help students learn new skills, experience in real time the consequences of their decisions, and address system anomalies while not putting patients at risk. In addition to helping inexperienced health-care workers, simulations can also help experienced workers learn new skills. Difficult procedures can be rehearsed multiple times, and training can be provided for patient complications and equipment failures (which may be rare in practice) that require decisive action and skilled performance. AI can support serious games and simulations. Gamification is the use of game elements in a nongame context.[30] In HPE, gamification can improve learning behaviors and attitudes toward learning. Game elements such as points, leaderboards, and prizes are used in a nongame context. AI is being increasingly used to support gamification. EDUCATIONAL DATA Education data mining (EDM) explores the unique types of data resulting from educational settings to better understand students and their learning environments.[31] EDM focuses on modeling and predicting student progress and creating computer systems that can adapt without needing human intervention in the learning cycle. HPE programs have complex learning situations and span both academic institutions and clinical practice settings. There are several challenges in obtaining data for LA and EDM. 
Among these are the fragmentation of data across different systems, data that are not continuously updated or are out of sync, and data volumes that may be low within individual HPE programs. Data fragmentation can be addressed by enabling systems to talk to each other, and the use of AI systems in assessment may make assessment results available faster. HPE data can be messy. Personalized prediction uses various methods with existing data to predict future outcomes.[32] Health professions schools would like to predict the characteristics of students who are most likely to be successful practitioners; they may want to predict students who are most likely to succeed in licensing examinations and match into postgraduate and fellowship programs. These may use longitudinal datasets over multiple years to improve predictive accuracy. Data-based methods in HPE are reductionist in nature, and this may be a significant limitation, especially considering the importance now placed on noncognitive attributes like empathy, collaboration, and communication skills in HPE.[33] There are also concerns related to data privacy and how the data are obtained, since most are collected when students use required learning systems and may not be in a position to refuse to provide the data. USE OF ARTIFICIAL INTELLIGENCE IN ASSESSING LEARNERS AI can be used to automate performance assessment, provide feedback, and predict patient outcomes.[34] ML algorithms can be used to improve the efficiency of assessing surgical skills. Surgeons' experience levels were used to train an algorithm to distinguish between different levels of operative skill.[35] A novel motion tracking system and algorithm were developed to automatically evaluate trainee performance at a pediatric laparoscopic suturing task.[36] ML was used to assess the level of neurosurgical skill among postgraduate learners on a simulator.[37] ARTIFICIAL INTELLIGENCE AND HEALTH PROFESSIONS EDUCATORS AI has the potential to carry out routine administrative tasks and repetitive jobs and can free HPE educators to focus on tasks requiring creativity and specialized knowledge.[38] The authors of this paper mention that the basic premise of AI is to supplement, not supplant, the work of educators, and that misunderstanding this concept may be responsible for the hesitancy to adopt AI along with resistance to change. With the rapid development and incorporation of new technologies, health professions education curricula suffer from information overload. However, the addition of AI can reduce curricular overload by migrating some biomedical and clinical knowledge to AI algorithms.[39] We should carefully examine the amount and type of information that HCPs should memorize in a world where information can be continually accessed without difficulty. Educators using AI should be aware of both the advantages and the risk of bias in big data and algorithms. Data collected by humans can impact the efficacy of AI algorithms and can introduce unconscious bias. The Georgia Institute of Technology in the United States is using an AI application called Jill Watson to function as a teaching assistant, optimizing the use of scarce human resources.[40] ARTIFICIAL INTELLIGENCE AND CONTINUING PROFESSIONAL EDUCATION Competency-based approaches are becoming common in HPE. These approaches rely on rich programmatic data about each learner. Obtaining and managing this vast amount of data and using it to guide future learning has been a challenge. 
AI enables precision education by identifying individual performance trends and supporting individualized learning pathways. AI can support continuing professional education by incorporating longitudinal and innovative formative assessment methods that can help identify knowledge and skill gaps and support learning.[41] AI can monitor a health-care provider's patient panel and outcomes and recommend appropriate educational resources that can be provided just in time. The Amplifire platform mines health system metrics and electronic health record data to identify opportunities for improvement and learning that can be used to design system-wide training across the professions.[42] STATUS OF ARTIFICIAL INTELLIGENCE IN HEALTH PROFESSIONS EDUCATION The workload of health professions educators can be optimized by automating some processes, thus reducing stress and burnout. Health professions students at risk of suicide can be identified through natural language processing algorithmic models and a smartphone application.[43] A recent review concludes that, to date, AI has primarily provided individualized feedback to students.[44] AI has not yet been used in curriculum review, and this could be due to the limited digitization of learning management systems, which is essential to create a digital curriculum map. The use of AI in assessments has also been limited, and this could be due to the lack of the data pool necessary to train AI systems and the sensitive nature of assessments. Data integrity and data privacy are also important issues to be addressed. Table 1 highlights certain examples of the use of AI in HPE. Table 1: Specific instances of using artificial intelligence in health professions education The literature shows that AI has been predominantly used in developed nations. There is a scarcity of studies from the developing world. The studies highlighted were predominantly from medicine. HPE has to fulfill strict accreditation and regulatory criteria; hence, uptake of AI may be slower. Studies from other health professions are required. CONCLUSION AI is being widely used in health care. HCPs have an important role in educating AI systems, and AI will impact the various roles of a physician. ITS is an important application of AI in education and can personalize the learning experience for students. AI is being used to provide individualized learner feedback. AI can also help with serious games, simulations, and gamification of learning. Chatbots, LA, and educational data mining are commonly used. AI has also been tried for assessing learners. AI can support continuing professional education. Most studies have been from developed nations and among physicians and medical students. Studies among other students and professionals from developing countries are required. Financial support and sponsorship Nil. Conflicts of interest There are no conflicts of interest.
2022-12-01T00:00:00
https://journals.lww.com/armh/fulltext/2022/10020/artificial_intelligence_in_health_professions.20.aspx
[ { "date": "2022/12/01", "position": 67, "query": "workplace AI adoption" }, { "date": "2022/12/01", "position": 83, "query": "AI healthcare" } ]
Concentration of High-Tech Workers Has Benefits
Concentration of High-Tech Workers Has Benefits — and Costs
https://www.nber.org
[ "Karen Glenn", "Stephen Goss" ]
Generative artificial intelligence (AI) has recently emerged as a potentially transformative workplace technology. The ultimate impact of generative AI on the ...
Scientists and engineers tend to work more productively in high-tech-intensive cities. With more opportunities to network with their peers, they appear to benefit from knowledge spillovers, and their firms benefit from a larger labor pool. Productivity gains — defined as the number of patents produced in a year — increase steadily when firms in cities with smaller concentrations of research and development activity are compared to those in localities with larger concentrations, but production costs also increase. Gains outpace costs at low- to mid-level R&D concentrations, but the picture changes in urban areas with the highest agglomerations, such as Boston and San Francisco, where the difference between productivity gains and cost increases is close to zero. In Place-Based Productivity and Costs in Science (NBER Working Paper 30416), Jonathan Gruber, Simon Johnson, and Enrico Moretti estimate R&D costs for 133 metro areas and assess how productivity varies with the density of scientists. They build on Moretti’s earlier work to show that researchers located in an area with 10 percent more scientists in their field produce 0.7 percent more patents per year. Productivity increases at a steady rate as clusters of R&D workers become more concentrated; however, R&D costs rise more rapidly at higher levels of agglomeration. This means that increased productivity delivers diminishing net returns to the firms carrying out this research. The researchers find that, overall, R&D spending in an area with 10 percent more scientists is 0.1 percent more costly — well below the productivity gains delivered by agglomeration. But while the productivity gains from adding more scientists do not vary much by area, the costs of R&D do vary. In particular, in the cities with the highest concentration of scientists, each additional 10 percent increase in the number of scientists increases costs by 0.53 percent. This estimate is somewhat less than the estimated increase in productivity in these cities, but the researchers cannot reject the possibility that the two effects are of the same magnitude, that is, that cost increases are equal to the productivity gains. The most expensive US metro areas in which to conduct R&D are San Jose-San Francisco-Oakland; Honolulu; and Boston-Worcester-Manchester. The least expensive are Dayton-Springfield-Greenville; Milwaukee-Racine-Waukesha; and Grand Rapids-Muskegon-Holland. The cost of doing business is 38 percent higher at the top than at the bottom, and the difference in the cost of housing is even greater, ranging between 49 and 99 percent depending on the location. R&D has become increasingly concentrated. As of 2009, the top 10 metro areas for computer science accounted for 70 percent of all US inventors, for semiconductors 79 percent, and for biology and chemistry 59 percent.
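The reported elasticities lend themselves to a quick back-of-the-envelope check. This is a sketch of the arithmetic only, not the paper's code; the function and its name are illustrative.

def net_return_pct(productivity_elasticity, cost_elasticity, pct_more_scientists=10.0):
    # Approximate net percent gain from a given percent increase in local
    # scientists, using the digest's per-10-percent elasticities.
    gain = productivity_elasticity * (pct_more_scientists / 10.0)
    cost = cost_elasticity * (pct_more_scientists / 10.0)
    return gain - cost

print(net_return_pct(0.7, 0.10))  # typical area: roughly 0.6 percent net gain
print(net_return_pct(0.7, 0.53))  # densest clusters: roughly 0.17, near zero

This mirrors the digest's point: at low to mid concentrations the 0.7 percent productivity gain dwarfs the 0.1 percent cost increase, while in the most concentrated cities the gap narrows to nearly nothing.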
Congress is considering legislation that could provide incentives for R&D activity in new tech hubs instead of what the study calls “existing coastal superstar cities.” The researchers identify, but do not quantify, three potential benefits associated with greater dispersion of technology-intensive R&D jobs: bringing new jobs to depressed regions, reducing the national economy’s vulnerability to geographic shocks such as hurricanes and earthquakes, and generating broader political support for science spending by increasing the voter base it would benefit. —Steve Maas
2022-12-01T00:00:00
https://www.nber.org/digest/202212/concentration-high-tech-workers-has-benefits-and-costs
[ { "date": "2022/12/01", "position": 69, "query": "workplace AI adoption" } ]
Conceptual Framework on the Factors Influencing Users' ...
Conceptual Framework on the Factors Influencing Users' Intention to Adopt AI-Based Cybersecurity Systems at Workplaces in the UAE.
https://search.ebscohost.com
[ "Alneyadi", "Mohammed Rashed Mohamed Al Humaid", "Md Kassim", "Teh Sin Yin" ]
by MRMAH Alneyadi · 2022 · Cited by 7 — influence users' intention to adopt AI cyber-security systems at workplaces in the UAE. Design/methodology/approach: This cross-sectional study will be ...
2022-07-02T00:00:00
2022/07/02
https://search.ebscohost.com/login.aspx?direct=true&profile=ehost&scope=site&authtype=crawler&jrnl=19475667&AN=160657561&h=J7ao6nKd0UUHhIHYRgdE0ueq68j11vl8FPFhKhkAAiJpM3sDrQHBZYfo0VP76vRo8hd7APEqJAekw9AqjQ%2BNvw%3D%3D&crl=f
[ { "date": "2022/12/01", "position": 77, "query": "workplace AI adoption" } ]
Integration of Artificial Intelligence Into Sociotechnical Work ...
Integration of Artificial Intelligence Into Sociotechnical Work Systems—Effects of Artificial Intelligence Solutions in Medical Imaging on Clinical Efficiency: Protocol for a Systematic Literature Rev
https://pmc.ncbi.nlm.nih.gov
[ "Katharina Wenderott", "Institute For Patient Safety", "University Hospital Bonn", "Bonn", "Nikoloz Gambashidze", "Matthias Weigl" ]
by K Wenderott · 2022 · Cited by 8 — ... AI adoption [36,54,55]. Regarding our key concepts extracted from the ... Artificial intelligence: opportunities and implications for the health workforce.
This systematic review and synthesis aims to summarize the existing knowledge on efficiency improvements in medical imaging through the integration of AI into clinical workflows. Moreover, it will extract the facilitators and barriers of the AI implementation process in clinical care settings. Therefore, our findings have implications for future clinical implementation processes of AI-based solutions, with a particular focus on diagnostic procedures. This review is additionally expected to identify research gaps regarding the focus on seamless workflow integration of novel technologies in clinical settings. In this systematic literature review, comprehensive literature searches will be performed in relevant electronic databases, including PubMed/MEDLINE, Embase, PsycINFO, Web of Science, IEEE Xplore, and CENTRAL. Studies in English and German published from 2000 onwards will be included. The following inclusion criteria will be applied: empirical studies targeting the workflow integration or adoption of AI-based software in medical imaging used for diagnostic purposes in a health care setting. The efficiency outcomes of interest include workflow adaptation, time to complete tasks, and workload. Two reviewers will independently screen all retrieved records and full-text articles and extract data. The methodological quality of included studies will be appraised using suitable tools. The findings will be described qualitatively, and a meta-analysis will be performed, if possible. Furthermore, a narrative synthesis approach that focuses on work system factors affecting the integration of AI technologies reported in eligible studies will be adopted. This study sought to synthesize the current knowledge base on how the use of AI technologies for medical imaging affects efficiency and what facilitators or barriers moderating the impact of AI implementation have been reported. When introducing artificial intelligence (AI) into clinical care, one of the main objectives is to improve workflow efficiency because AI-based solutions are expected to take over or support routine tasks. Introduction In medicine, vast changes in patient care due to the development of artificial intelligence (AI) are foreseen and ongoing. AI is broadly defined as “the ability of computers to perform tasks that normally require human intelligence” [1]. The introduction of these technologies in medicine promises to improve the quality and safety of health care and the accessibility of medical expertise [1]. In the future, AI-human collaboration can augment the ability of clinicians in health care delivery by extracting relevant information from big data sets or performing tasks with higher precision [2,3]. The areas where AI technologies can assist health care professionals are manifold, for example, clinical diagnostics, decision-making, or health care administration [2,4,5]. These technologies “can be used as powerful tools and partners to enhance, extend, and expand human capabilities, delivering the types of care patients need, at the time and place they need them” [4]. When integrating AI applications into clinical practice, these technologies will become part of highly complex sociotechnical work systems. A model that considers the complexity and scope of the clinical care work environment is the systems engineering initiative for patient safety (SEIPS) 2.0 model [6].
On the basis of SEIPS 2.0, the conceptual model of workflow integration was developed to investigate the integration of a new technology into clinical work processes, which has also been applied to the integration of AI [7,8]. The model uses a sociotechnical system approach and proposes that the whole work system and workflow must be considered to evaluate the success of an AI technology implementation [8]. Some work systems in medicine are faster or better suited to adopting AI-facilitated technologies. Especially in specialties that are largely image-based or process large amounts of data, AI is expected to support physicians and improve patient care by leading to more effective and efficient diagnostics [9,10]. Health care providers in image-based medical disciplines handle a growing amount of imaging data that require thorough interpretation [11]. Moreover, the shortage of physicians in radiology and the limited time available per image to meet the current workload are common challenges [12]. The introduction of AI into clinical practice holds significant potential for changes in clinicians’ duties and for improvements such as advancing routine tasks and freeing clinicians’ time for other important tasks [1,2]. One of the main objectives in introducing AI into health care is efficiency improvement because AI is expected to take over tasks that are not exceedingly complex but are time-consuming [1,13,14]. This goal can only be achieved if these technologies are seamlessly integrated into the existing clinical workflow [15]. Therefore, a correlation between workflow integration and usability outcomes, which include efficiency, effectiveness, and satisfaction, has been proposed [7,16]. Efficiency is defined as “resources used in relation to the results achieved. […] Typical resources include time, human effort, costs and materials” [16]. Drawing upon the conceptual model of workflow integration, efficiency-related clinician outcomes include the adaptation of workflow, time to complete tasks, and workload [7,13]. To our knowledge, there is currently no systematic literature review or structured synthesis available on whether the integration of AI into the clinical workflow is associated with improved efficiency. Therefore, comprehensive evidence is necessary concerning the major promise of freeing physician time for other care activities, for example, direct patient care. As the potential fields of application for AI technologies in health care are diverse, we focus on AI used for medical imaging to enable comparability. In this review, efficiency-related clinician outcomes such as workflow adaptation, time to complete tasks, and workload will be considered. Moreover, reported facilitators or barriers for the successful integration of AI into the workflow will be reviewed, as “workflow integration is crucial for making this kind of software [computer-aided detection based on AI] a success” [13]. Our systematic review addresses the following question: how do AI technologies influence the efficiency of workflows in medical imaging? Specifically, it aims to synthesize the literature base concerning two specific objectives: (1) identification and overall aggregation of the effects of AI technology implementation on efficiency-related clinician outcomes such as workflow adaptation, time to complete tasks, and clinicians’ workload; and (2) description of the facilitators and barriers for the integration of AI into the workflow of medical imaging.
2022-12-01T00:00:00
2022/12/01
https://pmc.ncbi.nlm.nih.gov/articles/PMC9756121/
[ { "date": "2022/12/01", "position": 78, "query": "workplace AI adoption" } ]
Artificial intelligence (AI) acceptance in primary care during ...
Artificial intelligence (AI) acceptance in primary care during the coronavirus pandemic: What is the role of patients' gender, age and health awareness? A two-phase pilot study
https://www.frontiersin.org
[ "Chalutz Ben-Gal", "Department Of Industrial Engineering", "Management", "Afeka College Of Engineering" ]
by H Chalutz Ben-Gal · 2023 · Cited by 20 — Thus, AI adoption in PC may be facilitated by common data standards (1). ... Development and pilot test of the workplace readiness questionnaire, a theory ...
Conclusions: This paper makes two major contributions. First, we highlight key social and behavioral determinants of acceptance of AI-enabled health care and PC applications. Second, we propose that, to increase the usability of digital public health tools and accelerate patients' AI adoption in complex digital public health care ecosystems, adaptive, population-specific promotions of AI technologies and applications should be implemented. Results: The results indicate that respondents who scored high on innovativeness had a higher level of readiness to use AI-based technology in PC during the coronavirus pandemic. Surprisingly, patients' health awareness and sociodemographic factors, such as age, gender and education, were not significant predictors of AI-based technology acceptance in PC. Methods: The pilot study utilized a two-phase mixed methods approach. First, we conducted a qualitative study with 18 semi-structured interviews. Second, based on the Technology Readiness and Acceptance Model (TRAM), we conducted an online survey (n = 447). Background: Artificial intelligence (AI) is steadily entering and transforming the health care and Primary Care (PC) domains. AI-based applications assist physicians in disease detection, medical advice, triage, clinical decision-making, diagnostics and digital public health. Recent literature has explored physicians' perspectives on the potential impact of digital public health on key tasks in PC. However, limited attention has been given to patients' perspectives on AI acceptance in PC, specifically during the coronavirus pandemic. Addressing this research gap, we administered a pilot study to investigate criteria for patients' readiness to use AI-based PC applications by analyzing key factors affecting the adoption of digital public health technology. 1. Introduction Artificial Intelligence (AI) is a multidisciplinary field of science with the goal of creating intelligent machines (1, 2). AI is steadily entering and transforming various industries, which are in various stages of AI adoption. For example, e-commerce and cybersecurity are considered late adopters, while AI is gradually revolutionizing other industries (3). AI has gradually transformed medical practice. Recent progress has been made in the direction of digitized data acquisition, machine learning and computing infrastructure, resulting in AI applications that are steadily entering novel domains that were previously governed solely by human experts. Research has outlined breakthroughs in AI technologies, identified challenges for further progress in health care and medical AI systems (4, 5) and recently analyzed the economic, legal and social implications of AI in health care (3). Research suggests a transformation in AI in the Primary Care (PC) domain (4). Technological applications based on big data solutions may assist General Practitioners (GPs) in disease detection. AI plays a significant role in PC in medical advice, clinical decision-making, diagnostics and digital public health advice (6). Due to the coronavirus pandemic, health care providers are adjusting health care delivery channels to protect both patients and medical staff from infection through resource allocation directed at new and acute needs. As a result, routine services have stopped or slowed substantially, and strict isolation and separation protocols have been introduced (7). Prior to the current pandemic, some studies focused on the barriers to using digital public health solutions (8).
However, following the coronavirus pandemic, health care providers' treatment of patients with non-urgent and chronic conditions became authoritative. Consequently, video consultation is being introduced, and the use of social media (9) is being discussed for its potential to direct patients to trusted PC resources (7). Nevertheless, some companies (e.g., Babylon Health, Health Tap, Ada, Buoy, Your.MD) have developed AI-powered doctors that provide health advice directly to patients with common symptoms, freeing up PC access for more complex care. By 2025, the market for these services (using the current telemedicine market and retail clinics market as a comparison) is projected to be $27 billion a year (6, 10, 11). The digital public health care transformation raises additional challenges. For example, potential conflicts exist based on patients' sociodemographic backgrounds. Digital tools can provide collective public health benefits; however, they may be intrusive and erode individual freedoms or leave vulnerable populations behind. The coronavirus pandemic has demonstrated the strong potential of various digital solutions (12). The introduction of AI to perform medical tasks remotely contributes immensely to the health care and public health domains (6, 13, 14). In light of recent calls to advance PC with AI and machine learning (15), the goal of this pilot study is to explore patients' readiness to use digital public health solutions in the form of AI-based technology in PC for the purpose of medical advice and diagnostics (16–18). To do that, we focus on some key questions. For example, how likely are patients to use AI-based applications for PC purposes? Which factors delay the adoption of new technological solutions? Which individual perceptions influence patients' potential use of AI? What is the impact of the coronavirus pandemic and forced social distancing on individual attitudes toward AI-based solutions in PC technology adoption? The study results indicate that patients' privacy concerns, professionalism perceptions, motive perceptions and innovativeness (proneness to technology use score) are all key factors in AI-based technology acceptance in PC during the coronavirus pandemic outbreak. However, we conclude that neither patients' health awareness and empathy needs nor their sociodemographic factors as described in the TRAM model, such as age, gender and education, are significant predictors of AI-based technology acceptance in PC. Therefore, we suggest exploring the effects of population-specific promoters of, and individual impediments to, the adoption of AI-based applications in PC, to increase their usability in complex digital public health care ecosystems. 2. Theoretical background and hypothesis development 2.1. Artificial intelligence in primary care The factors that cause individuals to accept new technologies have been researched over the past few decades. However, AI-based technology adoption, specifically in PC, has not been deeply researched even though, in recent years, the topic of AI in health care has been increasingly investigated. For example, Yu and colleagues (19) presented a review study introducing recent breakthroughs in AI technologies and their biomedical applications, along with the challenges for medical AI systems in health care. Subsequently, Bini analyzed the impact of AI, machine learning, deep learning, and cognitive computing on health care (3).
The paper discussed the origin of AI and the progress of machine learning and then discussed how the limitations of machine learning led data scientists to develop artificial neural networks and algorithms. The study showed how AI can act as a tool to support human cognitive functions for physicians delivering care to patients (3). AI-based applications have been used in medical imaging of the liver (20), cardiology (21), ophthalmology (22), orthopedics (23) and other medical and PC domains. However, research on AI in PC remains limited. A British study of GPs' views on AI and the future of PC (24) explored the potential of AI to disrupt PC and impact key medical tasks (25). This study examined the technology and its potential benefits, as well as social and ethical concerns from doctors' perspectives. The study concluded that, from physicians' perspectives, the potential of AI remains limited (24). However, this study explored physicians' perspectives related to AI in PC, leaving patients' perspectives unexplored. Some research related to patients' perspectives was presented in (26). This study utilized online surveys to explore users' attitudes about AI-based medical solutions. The researcher concluded that despite ongoing concerns related to the accuracy of a symptom checker, a large patient-user group perceived the AI-assisted symptom checker to be a useful diagnostic tool. This research gap indicates that patients' perspectives on the acceptance of AI in PC remain a domain to be further explored. Furthermore, no study has analyzed patients' perspectives in the context of the coronavirus pandemic, and such an analysis was therefore the purpose of this study. AI is utilized to support and improve health services in many high-income countries. There is great hope that AI can also improve health service delivery in resource-poor settings (27). AI-based diagnosis in primary health care may contribute to improving health regulation of the broader health system through technology deployment and scaling up (28). Since gaps in the quality of primary health care still exist, specific technology-based clinical care and public health services need to be integrated at the primary health care level. With adequate policy regulations, this may contribute to suitable provider payments, health guideline regulations, and health performance assessments, resulting in synergy in health care management (29). 2.2. Technology Readiness and Acceptance Model (TRAM) Our proposed research model examines antecedents extracted from the TRAM model at the individual level through perceived usefulness and perceived ease of use and their effect on readiness to use AI-based mobile applications. The research model aims to explore the influence of privacy, professionalism, empathy, motive, proneness to technology use and health awareness utilizing an individual-level approach. Figure 1 shows our hypothesized model and the study's theoretical foundation. Our research model emphasizes six core drivers of individual decisions associated with technology readiness and acceptance based on the TRAM model (30, 31). We focus on the six factors depicted in the TRAM model because we believe that they provide a broad perspective and capture the complexity of the new technology acceptance process. Furthermore, exploring all six perspectives enables us to implement a holistic approach to explore the entire AI-based technology acceptance process in PC, considering important elements associated with potential users (6, 31).
The proposed research model is based on the integrated TRAM model: readiness to use and adopt AI applications is dependent upon their perception as useful and easy to use. Figure 1 illustrates the TRAM model, which includes four independent variables: optimism, innovativeness, insecurity and discomfort. FIGURE 1 Figure 1. The integrated model (TRAM) with hypothesized relations among study variables. 2.3. Hypotheses development 2.3.1. Hypotheses In this study, we investigated the potential to use the TRAM model (see Figure 1) to predict patients' readiness to use AI-based applications in PC. We used an adapted version of the TRAM model as developed by Lin et al. (30). Optimistic people generally expect that “good rather than bad things will happen to them” [(32) (p. 219)]. How they approach the world has an impact on their attitudes toward risk perception and acceptance in relation to technology, where optimism relates to a positive view toward technology and trust that it will offer people more efficiency, flexibility and control (33). Building upon this research, we proposed the following hypothesis: H1: Optimism (motive) has a positive influence on readiness to use AI-based applications in PC. It was stated that “innovativeness” is used to assess the “newness” of an innovation, with innovative products being labeled as having a high degree of newness (34). Parasuraman introduced the technological dimension and referred to “a propensity of being a technology pioneer and influencer” [43, p. 311]. Building upon these insights, we proposed the following hypotheses: H2a: Innovativeness (professionalism) has a positive influence on readiness to use AI-based applications in PC. H2b: Innovativeness (proneness to technology use) has a positive influence on readiness to use AI-based applications in PC. Discomfort attributes have been defined as “a perceived lack of control regarding technology and the sense of being overwhelmed by it” [43 (p. 311)]. The authors argued that the high-complexity features of technology products have a negative impact on product evaluation because of the user's learning cost (35). Building upon the TRAM model, we propose the following hypothesis: H3: Discomfort (privacy) has a negative influence on readiness to use AI-based applications in PC. Insecurity “implicates a distrust of technology and the disbelief about its ability to work properly” [43 (p. 311)]. Although the TRAM model suggests that insecurity has a negative impact on the perceived ease of use and perceived usefulness, some recent studies have not been able to find a correlation (28, 36). Building upon the insights of the TRAM model, we proposed the following hypotheses: H4a: Insecurity (empathy) has a negative influence on readiness to use AI-based applications in PC. H4b: Insecurity (health awareness) has a negative influence on readiness to use AI-based applications in PC. 2.3.2. Control variable We added “referrals to a doctor” as a control variable by asking the respondents for the number of times they have contacted a physician during the past year. According to prior research (37), older adults with certain psychological and health characteristics are more receptive to novel information. 3. Method: Data collection and measurement scales In line with Lancaster et al. (38), the study design and analysis were composed of a two-phase mixed methods pilot study. 
We intended to measure the effectiveness of utilizing an AI-based application for PC treatment to encourage randomization, which reduces bias and provides a rigorous tool to examine cause-effect relationships (38). We implemented a two-phase mixed methods research approach (39, 40). First, we conducted a qualitative study (Study 1) that included 18 semi-structured interviews with key job holders in the PC and high-technology industries in Israel, as well as with individual patients. Second, during the coronavirus pandemic, we performed a quantitative study to analyze our research questions that examine the relationship between individual characteristics of patients and their readiness to use AI-based applications in PC (Study 2). By conducting an online survey (n = 447), we identified criteria for readiness to use AI-based applications in PC by analyzing the factors that affect the adoption of medical technology based on TRAM. The survey examined six factors that may affect patients' readiness to use AI-based applications in PC: privacy concerns, perception of professionalism, need for empathy, motive perception, proneness to technology use and health awareness (30, 41, 42). In this study, we determined that the mixed methods technique was the most suitable measurement tool. The mixed methods approach involves data collection and analysis utilizing a mixture of qualitative and quantitative techniques (39, 43). It focuses on collecting, analyzing, and mixing both quantitative and qualitative data in a single study. The central premise of the mixed method procedure is that the combination of both approaches within one study provides a better understanding of the research problems than the use of either approach alone (40). The Tel Aviv University Ethics Institutional Review Board approved the overall study (committee reference number 0001280-1). 3.1. Study 1—Qualitative To validate the research model and gain additional perspective (43), we performed 18 semi-structured interviews with key job holders in the PC and high-technology industries in Israel and with potential patients. The interviews were conducted over a period of ~six months in 2020 (some were conducted face-to-face and some over video calls). The interviewees included eight top executives from the largest Health Management Organization (HMO) in Israel (“Maccabi Health Services”), four top executives from Intel Corporation, a leading high-technology company in innovation, digital transformation and AI solutions, and six individual users and patients of the HMO. Each semi-structured interview included nine questions (Appendix A) and lasted ~1 hour; all interviews were recorded, coded and analyzed. The person who interviewed the subjects was also involved in the analysis of the findings. Additionally, we undertook precautions to prevent the transfer of bias by interviewing individuals from various organizations. In line with specific recommendations from Lancaster et al. (38), we had a well-defined set of aims and objectives to ensure methodological rigor and scientific validity.
For example, the interviewees in Study 1 were not included in Study 2 to ensure the independence of the results of the pilot study. 3.2. Study 2—Quantitative 3.2.1. Methodological approach for validation We used a Confirmatory Factor Analysis (CFA) framework to validate the research variables and the complete structure of the hypothesized model. Specifically, prior to implementing complex indicators, a validation process is necessary to confirm the theoretical constructs, with a complex indicator referring to either a simple or a weighted combination of the original items (44). Seven sets of items were theoretically predefined as research factors, among which three were single-item factors (privacy, professionalism, and motive), one was a two-item factor (empathy), two were four-item factors (proneness to technology use and readiness to use AI-based applications in PC), and one was an eight-item factor (health awareness). For the single-item factors, we built pseudo factors, for which no measurement error was allowed (45, 46). We used a modification process to improve the overall CFA Goodness of Fit (GOF) but minimized this process to remain within the hypothesized theoretical structure (47). Next, we estimated the second-order factors (usefulness and ease of use) within the CFA framework subject to highly correlated factors. The validation process included the exclusion of items that resulted in poor loadings on the theoretical constructs. In addition to construct validity, we examined convergent and discriminant validity. A final hypothesis-testing model was built within the first-order factor structure due to failure to fit the hypothesized second-order latent factors (Figure 3). We applied a structural equation modeling approach to test our hypotheses. A structural equation model is a model of multiple regression equations that allows more than a single outcome variable and indirect effects as part of the model structure (47, 48). All analyses were performed using Mplus version 8.1. 3.2.2. Method: Data collection and measurement scales To conduct robust and comprehensive research, we focused on quantitative data collection. We utilized technology to launch internet surveys that were emailed to key stakeholders (49). To ensure an appropriate response rate, we used two methods for data collection: web surveys and digital surveys distributed via social media. This approach yielded an acceptable and varying response rate (50, 51) of ~40%. Quantitative data were collected in two waves during the coronavirus pandemic from individuals working in the public and private sectors. An online questionnaire was developed in Hebrew and translated into English. The online questionnaire was designed such that data were already coded. Survey respondents were recruited using the snowball method (52). This resulted in 610 responses; after the exclusion of incomplete responses, there were a total of 447 usable questionnaires. In line with Lancaster et al. (38), our sample size was sufficient for a pilot study in Israel to determine the required data for the primary outcome measure (38). Finally, our strategy for handling incomplete responses and efforts to ensure that the responses were missing at random were executed in line with Christensen et al. (51). The study adopted technology readiness measurement items, including a 4-item instrument evaluating an individual's propensity to adopt and use new technologies in PC. 
The four dimensions of the TRAM, i.e., optimism, innovativeness, insecurity, and discomfort, consist of six measurement items. A five-point Likert scale ranging from 1 = “Strongly agree” to 5 = “Strongly disagree” was used. Given the potential for finer-grained insights to be acquired using qualitative methods, we incorporated a single open-ended question into the survey. Informed consent was obtained for experimentation. All data collection, validation and analyses were verified independently. 4. Results 4.1. Descriptive statistics Four hundred and forty-seven respondents completed our questionnaire (66% female and 33% male). The ages of the respondents ranged from eighteen to eighty-five. The average respondent age was 46.09 (SD 0.63), with 4% of individuals aged <25, 18% aged between 26 and 35, 48% aged between 36 and 50, 22% aged between 51 and 65, and 8% aged over 65 years old. Thirty-four percent of respondents had a bachelor's degree, 50% had a master's degree, and 7% had a PhD (see Figure 2). FIGURE 2 Figure 2. Respondents' profile. (A) Age distribution. (B) Marital status distribution. (C) Educational years distribution. (D) Gender distribution. (E) Do you suffer from chronic illness. Sixty-five percent of respondents were employed, 14% were self-employed, 9% were retired, 5% were unemployed, and an additional 7% were unemployed or on leave due to the coronavirus pandemic. The results show that 80% of the respondents in this sample were born in Israel. The vast majority of respondents reported being married (75.3%) and not having any chronic disease (76%). Regarding HMO distribution, half (50%) of respondents obtained their health services from “Maccabi Health Services” and an additional 40% obtained their health services from “Clalit Health Services” – the two largest HMOs in Israel. 4.2. Validity and reliability We tested the construct validity of the TRAM factors within a measurement model. Specifically, a measurement model with six latent constructs and four observed variables was fitted using Mplus version 8.1 (53). We evaluated the model fit utilizing the Robust Root Mean Square Error of Approximation (RMSEA), the Robust Comparative Fit Index (CFI) and the Tucker-Lewis Index (TLI). A CFI larger than 0.95 and an RMSEA value of .05 or lower indicate a good fit. However, small deviations from these standards are acceptable (54). Discriminant and convergent validities were assessed based on correlations across factors (55). Reliability was measured based on the Cronbach's alpha coefficients of the constructs (2, 56). As a rule of thumb, a Cronbach's alpha coefficient >0.7 is considered acceptable. We concluded that the values indicated acceptable reliability (see Table 1). While performing the CFA, we encountered low GOF, partially due to low item loadings and non-estimated item correlations. We modified the CFA model by excluding items extracted from the health awareness factor (How many times did you feel sick? How many times did you go to the family physician?). This modification somewhat improved the overall GOF and factor loadings. TABLE 1 Table 1. CFA results, factor loadings, and goodness of fit, with item labels. Table 1 shows the final CFA results. Several items that loaded poorly on the latent factor and affected the unacceptable model GOF were dropped during this validation process. The final model had values above the acceptable level for GOF, e.g., CFI = 0.947 and TLI = 0.925.
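The reliability criterion mentioned above (Cronbach's alpha > 0.7) can be made concrete with a short computation. This is an illustrative sketch with invented responses, not the study's data or code:

import numpy as np

def cronbach_alpha(items):
    # items: respondents x items matrix of Likert responses.
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Hypothetical five-point Likert responses for a four-item construct.
responses = np.array([
    [4, 5, 4, 4],
    [2, 2, 3, 2],
    [5, 4, 5, 5],
    [3, 3, 2, 3],
    [4, 4, 4, 5],
])
print(round(cronbach_alpha(responses), 3))  # 0.936 here; > 0.7 is conventionally acceptable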
Those factors for which the loading equaled 1.00 were pseudo one-item factors. Although the loading is required to be at least 0.50 in CFA models, we kept the “use of mobile apps” item in the proneness to technology use factor, as it was essential to the theoretical construct composition. This justification also applied to the health awareness factor. Acceptable construct validity means that the tested model is within a reasonable distance from the empirical data in variance-covariance matrices (53). We also tested the discriminant validity and convergent validity to confirm the unique content of each factor. Our validation was based on internal consistency, with acceptable to high Cronbach's alpha values (2, 56), and on the model correlations (57) (see Table 2), leading to the conclusion that each factor represented unique and differentiated content. Although the original model suggested mediation between the effects of privacy, professionalism, empathy, motive, and readiness to use AI-based applications in PC through proneness to technology use and health awareness, our empirical analysis did not find such mediation effects. Thus, we continued by modeling the first-order factor effects on the outcome of readiness to use AI-based applications. TABLE 2 Table 2. CFA - Correlations between the factors. Table 2 demonstrates the correlations between the factors. As demonstrated in Table 2, privacy concerns, perception of the professional quality of the PC application, motive and technology adaptation were all associated with higher readiness to use AI-based applications in PC. This is demonstrated by the correlation coefficients together with P-values, which demonstrate high significance. 4.3. Hypothesis testing 4.3.1. Structural model results To test our hypotheses, we built a structural model that included the background variables – gender (men vs. women), education level (years of education), age (five age groups), and number of visits with the family physician (from 1 to 5) (see Table 3). An illustration of the model with significant paths is shown in Figure 3. TABLE 3 Table 3. Structural equation model results and standardized regression estimates. FIGURE 3 Figure 3. Full illustration of the structural model results. We found that gender affected all model factors except the readiness to use AI-based applications in PC. The results indicated that privacy, professionalism and motive were higher among women (β = 0.13, p < 0.01; β = 0.21, p < 0.001; β = 0.11, p < 0.05, respectively), whereas women's results were lower on the empathy, technology, and health awareness factors (β = −0.10, p < 0.05; β = −0.23, p < 0.001; β = −0.18, p < 0.01, respectively). Additionally, a higher education level was associated with higher health awareness (β = 0.18, p < 0.01). However, older age and more frequent visits to the family physician were negatively associated with health awareness (β = −0.27, p < 0.001; β = −0.16, p < 0.05, respectively). Older respondents were less prone to technology use, as expected (β = −0.10, p < 0.05). However, respondents who made a higher number of visits to the family physician were also more prone to technology use (β = 0.23, p < 0.001). A high number of visits to the family physician was negatively associated with professionalism and positively associated with empathy (β = −0.10, p < 0.05; β = 0.13, p < 0.01, respectively).
Notably, respondents' age and number of visits were somewhat correlated, F(4, 446) = 2.888, p = 0.022; in post-hoc comparisons, we found only the 36–50 age group differed from the rest of the age groups, having a smaller number of visits. As demonstrated in Figure 3 and Table 3, the latent factor effects on the outcome factor – readiness to use AI-based applications in PC – were positive; that is, higher privacy concerns, perception of the professional quality of the application, motive and technology adaptation were all associated with higher readiness to use (β = 0.17, p < 0.01; β = 0.28, p < 0.001; β = 0.21, p < 0.001; β = 0.16, p < 0.01, respectively). As shown in Figure 3, the overall measurement model showed an adequate fit, with chi-square = 272.19 (df = 120), p < 0.001; CFI = 0.925; TLI = 0.915; and RMSEA = 0.053. As demonstrated in Table 3, we found that gender affected all model factors (excluding readiness to use AI-based applications in PC). The results indicated that privacy, professionalism and motive were higher among female respondents. Female respondents scored lower on the empathy, technology, and health awareness factors. As expected, a higher education level was associated with higher health awareness. However, somewhat surprisingly, older age and more frequent visits to the family physician were negatively associated with health awareness. However, respondents who made a higher number of visits to the family physician were also more prone to technology use. Table 4 provides an overview of the hypothesis test results. H4a and H4b were rejected because the correlation was not statistically significant. Surprisingly, insecurity, which originated from both empathy (H4a) and health awareness (H4b), did not have a negative influence on patients' readiness to use PC applications. This finding might be explained by the fact that health-aware individuals have a greater need for a doctor's human touch than less health-aware patients. Thus, there are other predictors that influence readiness to use AI-based technology in PC (58). TABLE 4 Table 4. Hypothesis test results. As expected, we found a positive relationship between innovativeness (professionalism and proneness to technology use) and readiness to use AI-based applications (H2a and H2b). This was not surprising, as people who are prone to use technology tend to use AI applications for various purposes. Because innovative people are more open to new ideas in general (59), this finding seems plausible. People's innovative attitude has been shown to be an important factor in their adoption of new technologies (60). These people are keen to learn, adopt and utilize new technologies, e.g., AI-based applications in PC, which increases their technology adoption chances (33). We assume that innovative people are more familiar with new technological concepts. According to the study results, H1 was supported, confirming that optimism (motive) had a positive influence on readiness to use PC applications. The motive factor represents the individual's belief that the HMO's offer of AI-based applications is indeed intended to improve the quality of service to insured individuals. Finally, the results supported H3, indicating that discomfort (privacy) was positively correlated with readiness to use AI applications in PC. This finding implies that if individuals are uncomfortable with technology, they will be less likely to use AI-based applications in PC.
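As a rough illustration of what the standardized path estimates above mean, the following sketch simulates composite scores and recovers coefficients by standardized least squares. This is a simplified stand-in for the authors' latent-variable model in Mplus, not a reproduction of it; all numbers except the reported betas and n = 447 are invented:

import numpy as np

rng = np.random.default_rng(0)
n = 447  # sample size reported in the study

# Hypothetical standardized composite scores for the four significant
# predictors: privacy, professionalism, motive, proneness to technology use.
X = rng.standard_normal((n, 4))
reported_betas = np.array([0.17, 0.28, 0.21, 0.16])
y = X @ reported_betas + rng.standard_normal(n) * 0.8  # noisy outcome

# Standardize and estimate the coefficients by ordinary least squares.
Xs = (X - X.mean(axis=0)) / X.std(axis=0)
ys = (y - y.mean()) / y.std()
beta_hat, *_ = np.linalg.lstsq(Xs, ys, rcond=None)
print(np.round(beta_hat, 2))  # lands near the reported standardized estimates

A full SEM additionally estimates the measurement model (item loadings) jointly with these structural paths, which is why the study relied on Mplus rather than flat regression.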
To conclude, four out of six research hypotheses were supported with high levels of significance. 5. Discussion AI in health care management is an emerging topic in academia and in practice. However, while physicians' perspectives regarding the utility of AI in PC management have been recently studied (24), patients' perspectives and technology acceptance during the coronavirus pandemic have been underexplored, underpinning the purpose of this study. Understanding the key social and behavioral determinants of acceptance of AI-enabled public health care and PC applications is of utmost importance. Understanding behavioral models for AI acceptance in public health care is important for developing alternative approaches to assessing patient attitudes and beliefs about AI applications in health care. Exploring patients' perspectives in evaluating and accepting AI-based applications is key to understanding the sources of anxiety and enthusiasm about these emerging technologies. Therefore, understanding behavioral antecedents to predict how patients are likely to form attitudes and beliefs about medical applications of AI in public health care and PC is important for developing AI tools that match patient needs and anticipate potential patient concerns. This may assist AI developers in aligning patient acceptance to new AI applications, assist in clinical implementation, and direct AI innovation toward those applications for the benefit of patients and the public health care system (61). Previous research concluded that patients' social context impacts their orientation to utilize AI in health care. It is known that patients' interpretations of their previous experiences with the health care system and non-AI health technology are nested within their broader social context, including social identities and the communities they belong to. These social factors also influence how patients engage with AI in health care. For example, a common social factor is the generational difference in trust in technology (61). To deepen this investigation, in this article, we explored the potential to use the TRAM model to predict user readiness to use AI-based applications in the PC management domain. Specifically, we examined the relationship between individual characteristics and readiness to use new technology in PC. To our knowledge, this study was the first to apply the TRAM model to investigate patients' perspectives during the coronavirus pandemic. We detected positive correlations between the respondents' perceptions of HMO motive, perceptions of professionalism, proneness to technology use and privacy and readiness to use AI-based applications in PC. Additionally, our analysis indicated that a portion of the population was ready to use AI applications in PC during the coronavirus pandemic. This may be explained by the dependency on technology due to social distancing, fear of contagion and an increased need to examine health status according to symptoms (12). The AI revolution influences many domains, including health care in general and PC management more specifically (3, 17, 24). Previous research concluded that physicians will continue performing their roles, which remain clinically important despite the increased use of AI, hence contributing to the ongoing care of patients (8, 62). However, there is an emerging need to leverage technology to improve the PC management that patients receive and to assist physicians in providing accurate diagnostics in less time.
This research shows that some of the population is ready to use AI applications in PC management, but only if their use will provide professional service, maintain their privacy and not reduce the service level they receive from their HMO today. Our study results indicated that patients' privacy concerns, perception of professionalism, motive perception and proneness to technology use are all key factors in readiness to use AI-based technology in PC during the coronavirus pandemic. However, we found that health awareness, empathy needs, and patients' sociodemographic factors as described in the TRAM model, such as age, gender and education level, are not significant predictors of readiness to use AI-based technology in PC. Therefore, to increase the usability of digital public health tools and accelerate patients' AI adoption in complex digital public health care ecosystems, the effects of population-specific promoters of, and individual impediments to, the adoption of AI-based applications in PC and public health care need to be explored (63, 64). Thus, we call for implementing adaptive, population-specific promotions of AI technologies and applications. AI has the potential to reduce physicians' emotional burden and make them more available for patients, thus enabling a shift from a focus on transactional tasks toward personalized care. Future research can examine the impact of AI technologies in achieving better PC at lower costs and improved wellbeing for physicians and patients alike (6, 25). Our results may be valuable in a global context. These results may assist policy-makers and health institutions, as well as those in the technology industry, in communicating stronger and more effective messages to the public toward a smoother acceptance of new AI-based technologies (65). The impact of having good AI-based diagnostic and other tools in primary health care may benefit some key aspects of public health. Since the public health system is characterized by multiple stakeholders (66), it is specifically important to address key diverse challenges. For example, key stakeholder groups (physicians, hospital IT managers, and policy-makers) can join forces to maximize the utilization and efficiency of AI-based technologies for the benefit of public health. Since the challenge perceived by key stakeholders involved in AI technology adoption is not technical (66), it is important to overcome these barriers, as these tools may contribute to the public health system as a whole. Primary health care and AI experts agree that AI has the potential to improve managerial and clinical decisions and processes. Thus, AI adoption in PC may be facilitated by common data standards (1). While the use of AI in medicine should enhance health care delivery, there is a growing need to ensure careful design and evaluation of AI applications. This is specifically important for public health care delivery. Thus, as an integral part of this community, the PC informatics community needs to be proactive in guiding the rigorous development of AI applications such that they will be safe and effective. AI has the potential to impact the global use of technologies in health care and additional computational AI-based tools in primary health care for the benefit of the entire health care network. Thus, both health care professionals and policy-makers may find potential in advancing AI-based tools in primary health care (28, 67). This research is subject to several limitations.
First, the five-point Likert-scale measures we used for most of our dependent variables in Study 2 may have captured a limited dimension of these variables (68). Future research might wish to examine additional measures in light of the fact that this study used a variety of research tools following a mixed methods approach, which contributed to its robustness. Second, although the data in this study were in depth and collected via two different research tools, they were collected in a single country. However, we tried to overcome this shortcoming in two ways: first, by broadening and enhancing the variety of research tools and therefore performing Study 1 and Study 2; and second, by diversifying our sample in light of the unique coronavirus situation, which influences remote users of AI-based applications (38). However, our diversified sample may indeed be prone to technology use. Thus, results regarding this measure should be further investigated. Furthermore, in line with Lancaster et al. (38), and acknowledging that the results from hypothesis testing of a pilot study should be treated as preliminary and interpreted with caution, we call for investigating the study's results on a global scale. For example, it may be that educated individuals are inclined toward technology usage in general and in health care. Thus, more specific investigations into how education can be used to predict AI-based technology acceptance may be insightful. Finally, the convenience sample survey data are less ideal for external validity and may be subject to common method bias. Since the bias of the sample cannot be measured, inferences based on the convenience sampling were made with regard to the sample itself. 6. Conclusions AI has the potential to impact the global use of technology in health care, and additional computational AI-based tools in primary health care can benefit the entire health care network. Thus, both health care professionals and policy-makers may find potential in advancing AI-based tools in primary health care (28, 67). This paper has two major contributions. First, we highlight the key social and behavioral determinants of the acceptance of AI-enabled health care and PC applications. Second, we propose implementing adaptive, population-specific promotions of AI technologies and applications to increase the usability of digital public health and accelerate patients' AI adoption in complex digital public health care ecosystems. Data availability statement The original contributions presented in the study are included in the article/supplementary material; further inquiries can be directed to the corresponding author. Ethics statement The studies involving human participants were reviewed and approved by Tel Aviv University. The patients/participants provided their written informed consent to participate in this study. Author contributions HCB: writing—first, second and final drafts, writing—review and editing, validation, writing—final draft, overall supervision, and project supervision. Funding This research was partially supported by the Koret Fund for Digital Living 2030. Acknowledgments The content of this manuscript has been presented in part at the Academy of Management Conference, 2021 (Citation: Kadosh, E., & HC. (2021). AI Acceptance in Primary Care during COVID-19: A Two-Phase Study of Patients' Perspective. In Academy of Management Proceedings (Vol. 2021, No. 1, p. 13461). Briarcliff Manor, NY 10510: Academy of Management).
Conflict of interest

The author declares that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Publisher's note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

References

1. Liyanage H, Liaw ST, Jonnagaddala J, Schreiber R, Kuziemsky C, Terry AL, et al. Artificial intelligence in primary health care: perceptions, issues, and challenges: primary health care informatics working group contribution to the yearbook of medical informatics 2019. Yearb Med Inform. (2019) 28:41. doi: 10.1055/s-0039-1677901
2. Taber KS. The use of Cronbach's alpha when developing and reporting research instruments in science education. Res Sci Educ. (2018) 48:1273–96. doi: 10.1007/s11165-016-9602-2
3. Bini SA. Artificial intelligence, machine learning, deep learning, and cognitive computing: what do these terms mean and how will they impact health care? J Arthroplasty. (2018) 33:2358–61. doi: 10.1016/j.arth.2018.02.067
5. Tenório JM, Hummel AD, Cohrs FM, Sdepanian VL, Pisa IT, de Fátima Marin H. Artificial intelligence techniques applied to the development of a decision-support system for diagnosing celiac disease. Int J Med Inform. (2011) 80:793–802. doi: 10.1016/j.ijmedinf.2011.08.001
7. Eccleston C, Blyth FM, Dear BF, Fisher EA, Keefe FJ, Lynch ME, et al. Managing patients with chronic pain during the Coronavirus outbreak: considerations for the rapid introduction of remotely supported (eHealth) pain management services. Pain. (2020) 161:889. doi: 10.1097/j.pain.0000000000001885
8. Abbott PA, Foster J, de Fatima Marin H, Dykes PC. Complexity and the science of implementation in health IT—knowledge gaps and future visions. Int J Med Inform. (2014) 83:e12–22. doi: 10.1016/j.ijmedinf.2013.10.009
9. Riaño D, Peleg M, Ten Teije A. Ten years of knowledge representation for health care (2009–2018): topics, trends, and challenges. Artif Intell Med. (2019) 100:101713. doi: 10.1016/j.artmed.2019.101713
10. Martinho A, Kroesen M, Chorus C. A healthy debate: exploring the views of medical doctors on the ethics of artificial intelligence. Artif Intell Med. (2021) 102190. doi: 10.1016/j.artmed.2021.102190
11. Zhou LQ, Wang JY, Yu SY, Wu GG, Wei Q, Deng YB, et al. Artificial intelligence in medical imaging of the liver. World J Gastroenterol. (2019) 25:672. doi: 10.3748/wjg.v25.i6.672
12. Fagherazzi G, Goetzinger C, Rashid MA, Aguayo GA, Huiart L. Digital public health strategies to fight Coronavirus worldwide: challenges, recommendations, and a call for papers. J Med Internet Res. (2020) 22:e19284. doi: 10.2196/19284
13. Morgenstern JD, Rosella LC, Daley MJ, Goel V, Schünemann HJ, Piggott T, et al.
"AI's gonna have an impact on everything in society, so it has to have an impact on public health": a fundamental qualitative descriptive study of the implications of artificial intelligence for public health. BMC Public Health. (2021) 21:40. doi: 10.1186/s12889-020-10030-x
14. Patel VL, Shortliffe EH, Stefanelli M, Szolovits P, Berthold MR, Bellazzi R, et al. The coming of age of artificial intelligence in medicine. Artif Intell Med. (2009) 46:5–17. doi: 10.1016/j.artmed.2008.07.017
15. Wang J, Wang X. Structural Equation Modeling: Applications Using Mplus. 2nd Edition. West Sussex: Wiley (2020). doi: 10.1002/9781119422730
16. Chen SC, Li SH. Consumer adoption of e-service: integrating technology readiness with the theory of planned behavior. Afr J Bus Manag. (2010) 4:3556–63.
17. Chalutz Ben-Gal H. An ROI-based review of HR analytics: practical implementation tools. Pers Rev. (2019) 48:1429–48. doi: 10.1108/PR-11-2017-0362
18. Hannon PA, Helfrich CD, Chan KG, Allen CL, Hammerback K, Kohn MJ, et al. Development and pilot test of the workplace readiness questionnaire, a theory-based instrument to measure small workplaces' readiness to implement wellness programs. Am J Health Promot. (2017) 31:67–75. doi: 10.4278/ajhp.141204-QUAN-604
19. Yang Z, Silcox C, Sendak M, Rose S, Rehkopf D, Phillips R, et al. Advancing primary care with artificial intelligence and machine learning. Healthcare. (2022) 10:100594. doi: 10.1016/j.hjdsi.2021.100594
23. Poduval M, Ghose A, Manchanda S, Bagaria V, Sinha A. Artificial intelligence and machine learning: a new disruptive force in orthopaedics. Indian J Orthop. (2020) 1–14. doi: 10.1007/s43465-019-00023-3
24. Blease C, Kaptchuk TJ, Bernstein MH, Mandl KD, Halamka JD, DesRoches CM. Artificial intelligence and the future of primary care: exploratory qualitative study of UK general practitioners' views. J Med Internet Res. (2019) 21:e12802. doi: 10.2196/12802
26. Meyer AN, Giardina TD, Spitzmueller C, Shahid U, Scott TM, Singh H. Patient perspectives on the usefulness of an artificial intelligence–assisted symptom checker: cross-sectional survey study. J Med Internet Res. (2020) 22:e14679. doi: 10.2196/14679
27. Matheny M, Israni ST, Ahmed M, Whicher D. Artificial Intelligence in Health Care: The Hope, The Hype, The Promise, The Peril. Washington, DC: National Academy of Medicine (2019). doi: 10.1001/jama.2019.21579
28. Wahl B, Cossy-Gantner A, Germann S, Schwalbe NR. Artificial intelligence (AI) and global health: how can AI contribute to health in resource-poor settings? BMJ Glob Health. (2018) 3:e000798. doi: 10.1136/bmjgh-2018-000798
29. Li X, Krumholz HM, Yip W, Cheng KK, De Maeseneer J, Meng Q, et al. Quality of primary health care in China: challenges and recommendations. The Lancet. (2020) 395:1802–12. doi: 10.1016/S0140-6736(20)30122-7
30. Lin CH, Shih HY, Sher PJ.
Integrating technology readiness into technology acceptance: the TRAM model. Psychol Market. (2007) 24:641–57. doi: 10.1002/mar.20177
31. Van Compernolle M, Buyle R, Mannens E, Vanlishout Z, Vlassenroot E, Mechant P. "Technology readiness and acceptance model" as a predictor for the use intention of data standards in smart cities. Media Commun. (2018) 6:127–39. doi: 10.17645/mac.v6i4.1679
32. Scheier MF, Weintraub JK, Carver CS. Coping with stress: divergent strategies of optimists and pessimists. J Pers Soc Psychol. (1986) 51:1257. doi: 10.1037/0022-3514.51.6.1257
33. Parasuraman A. Technology Readiness Index (TRI): a multiple-item scale to measure readiness to embrace new technologies. J Serv Res. (2000) 2:307–20. doi: 10.1177/109467050024001
34. Garcia R, Calantone R. A critical look at technological innovation typology and innovativeness terminology: a literature review. J Prod Innov Manag. (2002) 19:110–32. doi: 10.1111/1540-5885.1920110
35. Mukherjee A, Hoyer WD. The effect of novel attributes on product evaluation. J Consum Res. (2001) 28:462–72. doi: 10.1086/323733
36. Godoe P, Johansen T. Understanding adoption of new technologies: technology readiness and technology acceptance as an integrated concept. J Eur Psychol Stud. (2012) 3. doi: 10.5334/jeps.aq
37. Flynn KE, Smith MA, Freese J. When do older adults turn to the internet for health information? Findings from the Wisconsin Longitudinal Study. J Gen Intern Med. (2006) 21:1295–301. doi: 10.1111/j.1525-1497.2006.00622.x
38. Lancaster GA, Dodd S, Williamson PR. Design and analysis of pilot studies: recommendations for good practice. J Eval Clin Pract. (2004) 10:307–12. doi: 10.1111/j..2002.384.doc.x
40. Tashakkori A, Teddlie C, editors. Sage Handbook of Mixed Methods in Social and Behavioral Research. Newbury Park, CA: Sage (2010). doi: 10.4135/9781506335193
41. Chen MF, Lin NP. Incorporation of health consciousness into the technology readiness and acceptance model to predict app download and usage intentions. Internet Res. (2018) 28:351–73. doi: 10.1108/IntR-03-2017-0099
42. Meng J, Elliott KM, Hall MC. Technology readiness index (TRI): assessing cross-cultural validity. J Int Consum Market. (2009) 22:19–31. doi: 10.1080/08961530902844915
43. Creswell JW. Editorial: mapping the field of mixed methods research. J Mix Meth Res. (2009) 3:95–108. doi: 10.1177/1558689808330883
44. Brown TA. Confirmatory Factor Analysis for Applied Research. 2nd Edition. New York: The Guilford Press (2015).
46. Rose N, Wagner W, Mayer A, Nagengast B. Model-based manifest and latent composite scores in structural equation models. Collabra Psychol. (2019) 5:9. doi: 10.1525/collabra.143
48. Afthanorhan WM, Ahmad S. Path analysis in covariance-based structural equation modeling with Amos 18.0. Eur J Bus Soc Sci. (2014) 2:59–68.
49. Fricker S, Galesic M, Tourangeau R, Yan T. An experimental comparison of web and telephone surveys.
Public Opin Quart. (2005) 69:370–92.
50. Baruch Y, Holtom BC. Survey response rate levels and trends in organizational research. Hum Relat. (2008) 61:1139–60. doi: 10.1177/0018726708094863
51. Christensen AI, Lau CJ, Kristensen PL, Johnsen SB, Wingstrand A, Friis K, et al. The Danish national health survey: study design, response rate and respondent characteristics in 2010, 2013 and 2017. Scand J Public Health. (2020) 1403494820966534. doi: 10.1177/1403494820966534
53. Walczuch R, Lemmink J, Streukens S. The effect of service employees' technology readiness on technology acceptance. Inform Manag. (2007) 44:206–15. doi: 10.1016/j.im.2006.12.005
54. Marsh HW, Hau KT, Wen Z. In search of golden rules: comment on hypothesis-testing approaches to setting cutoff values for fit indexes and dangers in overgeneralizing Hu and Bentler's (1999) findings. Struct Equ Model. (2004) 11:320–41. doi: 10.1207/s15328007sem1103_2
55. Henseler J, Ringle CM, Sarstedt M. A new criterion for assessing discriminant validity in variance-based structural equation modeling. J Acad Market Sci. (2015) 43:115–35. doi: 10.1007/s11747-014-0403-8
56. Cortina JM. What is coefficient alpha? An examination of theory and applications. J Appl Psychol. (1993) 78:98–104. doi: 10.1037/0021-9010.78.1.98
57. Taylor CS. Validity and Validation. In: Beretvas NS, editor. Series in Understanding Statistics. Oxford: Oxford University Press (2013). p. 1–24. doi: 10.1093/acprof:osobl/9780199791040.001.0001
58. Kuo KM, Liu CF, Ma CC. An investigation of the effect of nurses' technology readiness on the acceptance of mobile electronic medical record systems. BMC Med Inform Decis Mak. (2013) 13:88–101. doi: 10.1186/1472-6947-13-88
60. Brancheau JC, Wetherbe JC. The adoption of spreadsheet software: testing innovation diffusion theory in the context of end-user computing. Inform Sys Res. (1990) 1:115–43. doi: 10.1287/isre.1.2.115
61. Richardson JP, Curtis S, Smith C, Pacyna J, Zhu X, Barry B, et al. A framework for examining patient attitudes regarding applications of artificial intelligence in healthcare. Digital Health. (2022) 8:20552076221089084. doi: 10.1177/20552076221089084
63. Lai L, Wittbold KA, Dadabhoy FZ, Sato R, Landman AB, Schwamm LH, et al. Digital triage: novel strategies for population health management in response to the Coronavirus pandemic. Healthcare. (2020) 8:100493. doi: 10.1016/j.hjdsi.2020.100493
64. Li L, Aldosery A, Vitiugin F, Nathan N, Novillo-Ortiz D, Castillo C, et al. The response of governments and public health agencies to COVID-19 pandemics on social media: a multi-country analysis of Twitter discourse. Front Public Health. (2021) 9:716333. doi: 10.3389/fpubh.2021.716333
65. Vu HT, Lim J. Effects of country and individual factors on public acceptance of artificial intelligence and robotics technologies: a multilevel SEM analysis of 28-country survey data. Behav Inform Technol. (2022) 41:1515–28.
doi: 10.1080/0144929X.2021.1884288
66. Sun TQ, Medaglia R. Mapping the challenges of artificial intelligence in the public sector: evidence from public healthcare. Gov Inf Q. (2019) 36:368–83. doi: 10.1016/j.giq.2018.09.008
67. Galvão-Lima LJ, Morais AH, Valentim RA, Barreto EJ. miRNAs as biomarkers for early cancer detection and their application in the development of new diagnostic tools. Biomed Eng Online. (2021) 20:1–20. doi: 10.1186/s12938-021-00857-9
2023-01-09T00:00:00
2023/01/09
https://www.frontiersin.org/articles/10.3389/fpubh.2022.931225/full
[ { "date": "2022/12/01", "position": 84, "query": "workplace AI adoption" } ]
AI in breast screening mammography - Insights into Imaging
AI in breast screening mammography: breast screening readers' perspectives - Insights into Imaging
https://insightsimaging.springeropen.com
[ "De Vries", "Clarisse Florence", "Aberdeen Centre For Health Data Science", "Institute Of Applied Health Sciences", "University Of Aberdeen", "Aberdeen", "Colosimo", "Samantha J.", "Aberdeen Royal Infirmary", "National Health Service Grampian" ]
by CF de Vries · 2022 · Cited by 14 — The survey had 87 nationally accredited respondents across the UK; 73 completed the survey in full. Respondents approved of AI replacing one of the initial two ...
The survey had 87 nationally accredited respondents; 73 (83.9%) completed the survey in full. Most (61%) had over 10 years' experience, and 77% were consultant radiologists. Nineteen participants provided comments. While just over a third (37%) described their understanding of AI as good or excellent, 63% had a positive or strongly positive view of AI use in screening. One respondent indicated: "I am in favour of adopting AI in mammogram reporting." Another respondent stated: "AI has a role in breast screening and would help to alert. [AI would] [a]lso help with personnel shortage." Most (82%) had not been previously involved in procuring similar medical software for their organisation.

Figure 1 shows participants' responses regarding which AI implementation scenario they would prefer. Respondents preferred partial replacement (AI replaces one human reader) over the other AI implementation scenarios. They objected to the total replacement scenario, while views on the triage and companion scenarios were mixed.

Fig. 1 How should AI be implemented? (a) Participants were presented with four scenarios on the use of AI in breast screening and were asked to select the options that most closely reflect their views. (b) Participants were asked to rank the four AI scenarios in order of preference.

Two respondents suggested alternative AI implementation scenarios. One comment stated that "[i]t would be great to have AI tested against previous interval cancers as this is one of the few things that will influence outcomes / breast cancer mortality in the screened population" and that AI could be used "on all those cases given normal results by the readers as a safety net system prior to results being sent out." The second response suggested that double reading with AI would not save much radiology time, and that AI would be better used to maximise image quality, decide whether to perform breast imaging with tomosynthesis, pre-read symptomatic mammograms, and focus on risk and masking from breast density/parenchyma.

Approximately half of the respondents thought first readers (52%) and second readers (51%) should have access to the AI opinion. Most respondents (68%) thought that third readers or an arbitration panel should have access to the AI opinion.

Figure 2 shows participants' responses on what evidence they think would support AI introduction into their workplace. Most respondents rated national guidelines (77%), studies using a nationally representative dataset (65%) and independent prospective studies (60%) as essential to support the introduction of AI into clinical practice. Vendor-generated evidence, however, was considered to have limited value. Most participants indicated that evidence generated from local data was either essential (43%) or desirable (42%).

Fig. 2 What evidence do you need to introduce AI into your workplace? Participants were asked: "What type of evidence would convince you of the value and utility of AI in breast screening and support AI introduction into your workplace setting?"

Seven comments discussed the need for additional evidence and validation of AI breast screening tools, including different software, the threshold for recall and readers' interactions with the AI.
Related comments stated: "Replies non-committal because I want to see the evidence first!", "I am strongly in favour of adopting AI in screening mammography reading once it has been validated and made user friendly" and "AI has so far shown excellent results with better than human sensitivity and specificity but needs input of robust data and validation tests locally and nationally." One respondent suggested that a national working group of AI specialists and screen readers should be developed through the Royal College of Radiologists to evaluate and test the various AI systems and ways of using them on large datasets. They added: "National guidelines are vital to ensure it is used in the optimal manner and to provide medicolegal protection."

The view that the second specialist is blinded to the first reader's opinion was held by 45% of participants; 54% indicated that, in their view, the specialist should be blinded to the AI opinion. Two respondents indicated that they were unsure whether the question on the blinding of the second specialist to the first reader referred to whether readers currently are blinded or whether they should be blinded.

Figure 3 shows how participants ranked the given AI representation options. Respondents preferred a region of suspicion superimposed on the image over the other AI representation options shown.

Fig. 3 How should the AI opinion be represented? Participants were asked: "If you were able to see the AI opinion as in the companion scenario, how best do you think this should be represented?" Participants were shown five options and were asked to rank them from most to least preferred.

Readers with a greater self-reported knowledge of AI were more likely to view the use of AI in medical screening as positive (ρ = 0.496, p < 0.001). Self-reported knowledge of AI was not significantly associated with approval of any of the AI implementation scenarios (p > 0.05).

The remaining free-text comments predominantly related to concerns regarding the introduction of AI into breast screening, including a lack of planning for the needed infrastructure and potential negative effects on screen readers, patients, and screening centres. One participant indicated that it is "[i]mportant that training of future mammographic readers is not forgotten, as AI cannot assess patients." Relatedly, one respondent stated: "There needs to be widespread understanding of the limitations of AI as I am afraid that readers will have too much faith in its abilities." Another participant commented: "AI will decrease specificity and increase recall rates. Radiologists will be left to cope with the fall out at assessment clinics. How can centres be assessed for QA [quality assurance] if AI is introduced?" One respondent indicated that AI is "[d]ifficult to introduce" and that "buy-in from most radiologist[s]" must be obtained before introducing AI in breast screening nationally. They further stated that ethical questions should be answered in an FAQ (frequently asked questions) document to reassure screen readers. One screen reader responded: "I believe it is inevitable that AI will be introduced over the next few years and we need to ensure it is done so in the most effective manner for the breast screening programme."
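As an illustration of the rank correlation reported above (ρ = 0.496, p < 0.001), the following is a minimal sketch of how such a statistic can be computed. The data are hypothetical stand-ins for the survey's self-reported AI knowledge and attitude scores, not the study's actual responses.

```python
# Minimal sketch: Spearman rank correlation between self-reported AI
# knowledge and positivity toward AI in screening. Data are hypothetical.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(1)
n = 73  # respondents who completed the survey in full

knowledge = rng.integers(1, 6, n)            # e.g., 1 = poor ... 5 = excellent
noise = rng.integers(-1, 2, n)               # scatter around the trend
attitude = np.clip(knowledge + noise, 1, 5)  # 1 = strongly negative ... 5 = strongly positive

rho, p_value = spearmanr(knowledge, attitude)
print(f"rho = {rho:.3f}, p = {p_value:.3g}")
```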
2022-12-14T00:00:00
2022/12/14
https://insightsimaging.springeropen.com/articles/10.1186/s13244-022-01322-4
[ { "date": "2022/12/01", "position": 89, "query": "workplace AI adoption" } ]
Artificial intelligence and labor market outcomes - IZA World of Labor
Artificial intelligence and labor market outcomes
https://wol.iza.org
[ "Drydakis" ]
AI is reshaping the labor market by creating new jobs and increasing competition for high-skilled roles, benefiting those with AI capital.
Elevator pitch

Artificial intelligence (AI) has streamlined processes, improved workforce allocation, and created new jobs to meet the needs of digitalization and automation. Individuals with AI capital experience greater employment opportunities and higher wages, particularly in high-skilled roles and large firms. Training in AI helps reduce gender-based digital disparities, empowers individuals, and enhances their employability. Policymakers should promote inclusive AI development policies to prevent widening AI-related divides and unemployment, and to ensure equitable opportunities.

Key findings

Pros
- Investments in education and skills related to AI have boosted employment, particularly in high-skilled occupations.
- AI adoption has led to higher demand for AI-related skills and wage premiums, especially in high-skilled occupations.
- AI has increased employee productivity for less-experienced and lower-skilled employees.
- AI helps industries weather economic downturns by stabilizing employment levels.
- AI enhances job quality for historically disadvantaged groups by reducing reliance on physical strength and promoting cognitive skills.

Cons
- AI increases job displacement risks for low-skilled, routine employees, widening the gap between high- and low-skilled employees.
- AI-investing firms favor highly educated employees, reducing non-degree roles and some non-technical jobs.
- AI might worsen income inequality, especially impacting low-skilled employees.
- Employees' distrust of workplace AI stems from perceiving AI as a threat and dissatisfaction with overpromised AI capabilities.
- There are concerns about AI's ethics and transparency, especially in decision-making processes.

Author's main message

AI is reshaping the labor market by creating new jobs and increasing competition for high-skilled roles, benefiting those with AI capital. While AI may boost productivity in certain jobs, it also widens the gap between high- and low-skilled employees. Less-educated employees face higher risks of displacement and reduced income. Additionally, AI introduces challenges related to workforce adaptability, trust, ethics, and transparency, which negatively impact employees' job realities. Policymakers should navigate these changes to maximize the benefits of AI while mitigating its adverse effects.

Motivation

Governments are proactively working to maximize the benefits of artificial intelligence (AI) in their economies and labor markets. In 2021, the UK government released the National AI Strategy, aiming to boost business use of AI, attract international investment, and develop the next generation of technology talent. Similarly, in 2019, the US launched the American AI Initiative with the mission to promote leadership in AI research, development, and application, as well as to expand education and training opportunities to prepare the American workforce for the new era of AI. Examining the association between AI and labor outcomes is critically important due to its profound and multifaceted impact at both the macro level—affecting labor markets and economic dynamics across different regions—and the micro level, influencing individual and firm behaviors [1], [2], [3], [4], [5], [6], [7], [8].
Discussion of pros and cons

Macro- and micro-level evidence on the relationship between AI and labor market outcomes

The rapid advancements in AI technologies and systems are reshaping industries, influencing operations and job structures, and altering the demand for various skills, including digital literacy, data management, model training, decision-making, creativity, and innovation [1], [2]. In both the US and Europe, AI has the potential to expose up to two-thirds of current jobs to some degree of automation [9]. Advanced economies are likely to experience more significant effects from AI automation than emerging markets, due to differences in labor market composition and industrial structures [9], [10]. In the UK in 2020, at least 42.5% of small and medium-sized enterprises (SMEs) utilized AI in their operations [2]. As observed in Figure 1, UK SMEs used AI for various operations, such as collecting information related to customers' online purchase histories and digital footprints, offering cash flow forecasting, protecting data, strengthening cybersecurity, and providing legal and HR services [2].

International macro-level evidence highlights the importance of examining how AI is associated with labor market outcomes, with the magnitude of AI's impact varying across studies. For instance, in the US, between 2010 and 2018, a one-standard-deviation increase in investments in education and skills related to AI led to a 19.5% increase in sales, an 18.1% increase in employment, and a 22.3% increase in market valuation [3]. Additionally, between 2010 and 2019, there was a notable rise in the demand for AI skills across US industries, accompanied by significant wage premiums, particularly in high-skilled occupations [7]. On average, vacancies that required AI skills offered an 11% higher salary within the same firm and a 5% premium within the same job title compared to positions that did not require AI skills [7]. In China, evidence from 30 provinces between 2006 and 2020 indicates that AI adoption, particularly through industrial robots, has positively impacted employment [4]. Notably, the scale effect resulting from advanced productivity leads to lower product prices and higher employee incomes, which in turn drives increased demand and economic growth, ultimately leading to greater output growth and employment [4]. Expanding the geographic focus, evidence from 23 OECD countries between 2012 and 2019 shows a positive association between AI exposure and employment growth in occupations with high computer use, suggesting that AI can complement human labor in high-skilled sectors [6]. These patterns highlight AI's potential to drive employment and growth by enabling more efficient production and fostering new industries [9]. Indeed, predictions suggest that AI has the potential to increase global GDP growth by 7% and by 1.5% in the US, demonstrating substantial macroeconomic potential if AI adoption becomes widespread [9]. More modest predictions in the US indicate that AI is projected to increase total factor productivity by no more than 0.66% [11].

At a micro level, in the UK in 2022, job applicants with AI capital experienced approximately 14.4% higher wage prospects compared to those without AI capital [1]. The illustration shows that in business administration occupations, the wage premium was the highest in the sample, followed by accounting occupations and economics occupations.
Similarly, as observed in Figure 2, job applicants with AI capital had, on average, a 22.3 percentage point higher chance of receiving interview invitations than those without AI capital [1]. In economics, entrepreneurship, business administration, and finance occupations, the percentage-point differences in interview invitations between those with and without AI capital are the highest. These findings highlight the need for education, job retraining, and innovation support to ensure a fair and inclusive transition to an AI-driven economy [1]. At the same time, macro-level evidence in the US has found that firms with higher initial shares of highly educated employees and STEM employees invest more in AI [12]. As these firms invest in AI, they tend to transition to more educated workforces, with higher shares of employees holding undergraduate and graduate degrees, and greater specialization in STEM fields and IT skills [12]. Additionally, AI adoption in the US has been associated with reduced hiring for non-AI positions, indicating a potential substitution effect where AI replaces tasks traditionally performed by humans [5]. Given the nature of the phenomenon under examination, the following sections group and present the associations between AI and advancements in labor market outcomes, as well as the associations between AI and drawbacks in the labor market.

AI and advancements in labor market outcomes

Literature reviews have found that AI is transforming the labor market by creating new jobs that did not previously exist, particularly to meet the demands of digitalization and automation [13], [14], [15]. This shift is accompanied by increased skill-based competition, where individuals with AI-related expertise are in high demand, fostering a competitive job market that rewards advanced skills and knowledge [1], [3], [4], [6]. Evidence from the US, the UK, OECD countries, and emerging markets shows a notable rise in the demand for AI skills, accompanied by significant wage premiums for those with AI expertise, particularly in high-skilled occupations and large firms [1], [3], [7], [8], [10]. Importantly, AI technologies often complement, rather than replace, employees [10]. This synergy enhances overall productivity and efficiency, particularly in the digital transformation process across various industries [4], [10]. In the US, the introduction of generative AI has been associated with increased employee productivity by reducing the average time spent on tasks such as writing and customer service, while also enhancing the quality of output [16]. These improvements are more pronounced for less-experienced and lower-skilled employees than for their more experienced and highly skilled counterparts [16], [17]. These patterns suggest that generative AI helps newer employees progress more rapidly along the learning curve and reduces productivity inequality among employees [16], [17]. Furthermore, the use of generative AI has been linked to improved employee retention, particularly among newer staff members [16]. In China, AI reduces production costs, lowers prices, and increases employees' incomes, which stimulates demand and consequently drives economic growth [4]. In the same region, AI drives innovation, leading to new products, models, and industrial sectors [4]. Moreover, AI contributes to broader economic shifts through virtual agglomeration, enabling businesses and employees to connect and collaborate across different geographical locations [4].
This fosters a global network of employment opportunities, enhancing global connectivity and supporting diverse job roles across various industries. Additionally, AI helps industries integrate advanced technology into business operations, making them more resilient during economic downturns by optimizing processes, reducing operational costs, and opening new revenue streams [2]. In China, this resilience has contributed to stabilizing employment levels during challenging financial times [4]. Moreover, during the COVID-19 pandemic in the UK, AI helped SMEs mitigate workforce-related business risks [2]. By enhancing efficiency and rapidly pivoting business operations, AI allowed SMEs to navigate the complex financial challenges presented by the COVID-19 pandemic [2].

AI enhances the quality of employment, particularly for historically disadvantaged groups like female employees [4]. In China, AI-driven productivity improvements have led to more flexible, safer, and more fulfilling work environments [4]. With the rise of automation, robotics, AI, and advanced machinery, the importance of physical strength has diminished. Instead, there is a growing emphasis on cognitive abilities, problem-solving, creativity, and emotional intelligence—skills that are not bound by gender. These advancements have helped bridge the gender gap in traditionally male-dominated industries by reducing reliance on physical strength and increasing the importance of cognitive and emotional skills. In the UK, Greece, and Cyprus, training on the use of advanced technology has been found to reduce gender and ethnic digital divides and boost empowerment in the labor market by enhancing digital competencies and addressing intersectional barriers faced by immigrant women, including socioeconomic and cultural conditions that exacerbate digital disparities [18], [19]. Similarly, in OECD countries, AI in high-tech operations has been associated with better job quality because it complemented highly sophisticated job tasks [8].

AI and drawbacks in the labor market

Review studies have shown that AI has widened the gap between high- and low-skilled employees, particularly by increasing the risks of job displacement for low-skilled individuals and those in routine-based occupations due to automation [13], [14], [15]. US firms that invest more in AI tend to transition towards more highly educated workforces [12]. Specifically, these firms see an increase in the proportion of employees with undergraduate and postgraduate degrees, particularly in STEM fields, while the share of employees without a university degree decreases in AI-investing firms [12]. Medium-skilled roles are in decline, suggesting that AI may substitute certain non-technical positions [12]. In both advanced economies and emerging markets, employees in low-complementarity occupations are more vulnerable to job displacement due to high exposure to AI [10], [14], [15]. This shift has the potential to put downward pressure on wages in occupations exposed to AI that do not require advanced skills [8]. Even when AI enhances the productivity of low-skilled employees, it may not reduce wage inequality [11]. Instead, those who own AI technologies or invest in them would see higher returns, while other employees could face stagnant wages or job displacement. This contributes to exacerbating overall inequality [11].
Employees often perceive AI as a threat to their jobs, particularly those in roles involving repetitive tasks, data management, and routine physical operations [20]. When AI systems fail to meet employees' expectations, trust in these technologies may diminish. This distrust could intensify if firms exaggerate AI's capabilities, leading employees to perceive AI as a threat to their job security, particularly in roles susceptible to automation. Furthermore, a symbiotic relationship between employees and AI systems might be difficult to achieve if employees do not develop the technical, human, and conceptual skills needed to coexist with AI [20]. The use of AI in performance evaluations, hiring, and promotions can seem impersonal and biased, especially when employees are not provided with clear explanations of how AI systems make decisions. This sense of alienation is further compounded when employees feel that their roles are increasingly dictated by algorithms rather than human judgment, leading to increased stress and anxiety among employees [8]. Additionally, there are concerns about AI's ethics and transparency, especially in decision-making processes, which can erode trust in management [8], [20]. As AI systems increasingly rely on vast amounts of data to function, concerns about how this data is collected, stored, and used have come to the forefront. Employees are often uneasy about the extent to which their personal data is being monitored and analyzed, leading to fears about privacy and potential misuse of information. These concerns are not only related to personal privacy but also to job security, as employees worry that data-driven insights could be used to justify downsizing or restructuring decisions [8].

Theoretical background

The studies reviewed in this paper have employed a combination of theoretical frameworks to evaluate the relationships between AI and labor market outcomes at the macro level. Several studies have utilized Schumpeter's Innovation Theory to explain how AI and automation disrupt existing jobs while creating new ones, leading to shifts in employment patterns [1], [5]. Additionally, the Technology-Task Substitution and Complementarity Theory has been used by some researchers to examine how AI either substitutes human labor—leading to job displacement—or complements it, enhancing productivity and creating new jobs [5], [6], [8]. In the reviewed literature, Innovation, Growth, and Diffusion Theories have been applied to explain how AI drives firm and industry growth through technological innovation and institutional factors, contributing to economic expansion and labor market outcomes [3], [8]. Many studies have utilized the Skill-Biased Technological Change framework, which examines how AI increases the demand for high-skilled labor while reducing the demand for low-skilled labor, leading to wage inequality [1], [5], [8], [14]. A key component of this framework is the Labor Market Polarization Theory, which addresses the division of the labor market into high-skill, high-wage jobs and low-skill, low-wage jobs due to technological advancements like AI, often at the expense of middle-skill jobs [5], [6], [8], [10], [11], [14]. At the micro level, Human Capital Theory has been applied in the literature to examine the importance of education, skills, and adaptability in enabling employees to benefit from AI rather than being displaced by it [1], [3], [5], [6], [8]. New micro-level frameworks have also been proposed to capture how AI shapes individual realities.
For instance, the AI capital framework explores how knowledge, skills, and capabilities related to AI technologies can enhance an individual's value in the labor market [1], [18]. Additional frameworks include the Business Apps Training framework, which suggests that training individuals in business applications on advanced technology can increase digital competencies and empowerment and reduce digital divides [18], [19]. Moreover, the Business AI Apps framework examines how firms can use advanced technology to overcome financial challenges that affect the workforce [2]. A synthesis of macro- and micro-level models is crucial in this domain because outcomes observed at the macro level, such as changes in labor market polarization, are directly influenced by micro-level factors like individual education and skill acquisition. Conversely, individual decisions and behaviors aggregate to create the broader trends studied in macro-level models. Understanding this interplay is essential for accurate predictions and effective policymaking. For example, macro-level insights might suggest the need for large-scale educational reforms, while micro-level analyses can inform targeted training programs that address specific skill gaps, ensuring that policies are both comprehensive and effective [1], [18], [19].

Limitations and gaps

While some regions, such as the US and parts of Europe, are well-studied, there is a lack of comprehensive research covering a wider range of countries, particularly in the Global South [14], [15]. Research on AI's impact employs various proxies for AI exposure, such as robotization, digital evolution indices, and vacancy-based measures [3], [7]. While these proxies are useful, they often lead to diverse and sometimes contradictory conclusions on AI's impact on employment and growth [8], [9], [11]. Moreover, the effectiveness of AI in job creation and productivity is complex and multifaceted, with significant variations observed across different regions indicating that the assigned patterns are not homogeneous [8], [10]. This regional variation suggests that AI's benefits and drawbacks are not uniformly distributed and are influenced by local economic conditions, regulatory environments, political realities, and industry characteristics [13], [14]. The literature also lacks a clear consensus on which skills or sectors will be most affected by AI [3], [4], [7], [8]. Additionally, there is a scarcity of robust empirical evidence on the long-term effects of AI on employment, wages, and economic growth [8], [9], [11], [14]. Most studies focus on short-term impacts, leaving a limited understanding of how AI might reshape labor markets over time, particularly across different sectors and regions [14], [15]. Furthermore, much of the literature examining the economic impacts of AI often overlooks the ethical, social, and psychological implications for employees, such as increased monitoring, job stress, and the potential for decreased job satisfaction due to AI-driven changes in the work environment [8], [14], [15]. In addition, the literature on AI's impact tends to focus heavily on macro-level evaluations, often at the expense of exploring micro-level behaviors and outcomes [1]. For instance, how AI capital directly influences employment outcomes at the individual level remains underexplored. This gap largely stems from the lack of data capturing individuals' AI capital levels.
Without this micro-level data, it is challenging to assess how AI might be reshaping job roles, altering skill requirements, and influencing individual career trajectories [1].

Summary and policy advice

This study shows that AI has boosted employment, particularly in high-skilled roles and large firms [1], [3], [4], [6]. Moreover, AI has also led to wage premiums and improved business operations in high-skilled occupations [1], [3], [7], [8]. Furthermore, AI has been shown to enhance productivity for less-experienced and lower-skilled employees by reducing the average time spent on tasks, improving the quality of output, stabilizing employment during economic downturns, and enhancing job quality for historically disadvantaged groups [2], [4], [8], [16], [18], [19]. However, the widespread adoption of AI has increased the risk of job loss for low-skilled employees, with fears of widening income inequality [11], [14], [15]. Additionally, AI adoption has disrupted workplace trust and raised concerns over ethics [8], [20].

To maximize the benefits of AI while mitigating its adverse effects, strategic interventions are essential [1], [8]. Governments should play a proactive role by significantly increasing investments in AI research and development (R&D), ensuring that these innovations align with societal needs and values [2], [4]. By funding R&D, governments can drive breakthroughs in AI that lead to new industries and job creation, fostering long-term economic growth [2], [4]. Moreover, it is imperative to implement robust policies that focus on workforce reskilling and upskilling [1], [18], [19]. These training programs should include not only technical skills, such as data analysis and programming, but also soft skills like critical thinking, creativity, and adaptability, which are increasingly important in a world where AI handles routine tasks [1]. By doing so, these policies will help displaced individuals transition into new employment opportunities, thereby reducing concerns about AI's impact on job security [1], [5]. In addition, governments should make strategic investments in education and training programs aimed at improving digital literacy and AI capital among the general population [1], [18]. This will not only enhance their employability but also contribute to overall productivity and economic resilience [1], [18]. Policymakers should also develop region- and sector-specific strategies to address the unique challenges posed by AI [5], [14], [15]. For instance, regions heavily reliant on manufacturing or other industries highly susceptible to automation may require targeted investments in digital infrastructure and specialized training programs to mitigate the risk of significant job displacement [5]. Moreover, enhancing social security systems, including unemployment insurance and public welfare jobs, is crucial for managing the structural unemployment caused by AI-driven automation [4], [14]. By providing a safety net for displaced employees, these systems can help maintain social stability and prevent the exacerbation of income inequality [4], [14]. AI adoption strategies should be inclusive, ensuring that the benefits of AI are broadly distributed across all segments of the workforce, particularly among vulnerable groups such as low-income employees, minorities, and those with less access to education and training opportunities [18]. This can be achieved by encouraging AI adoption in ways that complement human labor rather than replace it [10].
Safeguarding employee rights in the context of AI adoption is crucial, including ensuring fair wages, preventing discrimination, and ensuring that individuals' personal information is protected in AI-driven processes [8]. By establishing these ethical standards, policymakers can ensure that AI technologies are implemented in a manner that benefits society as a whole while minimizing potential risks [8], [20]. Transparency in AI decision-making is essential for building trust among users and ensuring that AI systems are accountable [8], [20].

Acknowledgments

The author thanks three anonymous referees and the IZA World of Labor editors for many helpful suggestions on earlier drafts. The author's previous work contains a larger number of background references for the material presented here and has been used extensively in all major parts of this article [1], [2], [18], and [19].

Competing interests

The IZA World of Labor project is committed to the IZA Guiding Principles of Research Integrity. The author declares to have observed these principles.

© Nick Drydakis
2025-02-12T00:00:00
2025/02/12
https://wol.iza.org/articles/artificial-intelligence-and-labor-market-outcomes/long
[ { "date": "2022/12/01", "position": 10, "query": "AI labor market trends" }, { "date": "2023/01/01", "position": 10, "query": "AI labor market trends" }, { "date": "2023/04/01", "position": 13, "query": "AI labor market trends" }, { "date": "2023/04/01", "position": 77, "query": "AI wages" }, { "date": "2023/06/01", "position": 9, "query": "AI labor market trends" }, { "date": "2023/07/01", "position": 96, "query": "artificial intelligence wages" }, { "date": "2023/08/01", "position": 72, "query": "AI wages" }, { "date": "2023/10/01", "position": 10, "query": "AI labor market trends" }, { "date": "2023/11/01", "position": 10, "query": "AI labor market trends" }, { "date": "2023/12/01", "position": 10, "query": "AI labor market trends" }, { "date": "2024/01/01", "position": 82, "query": "AI wages" }, { "date": "2024/02/01", "position": 12, "query": "AI labor market trends" }, { "date": "2024/03/01", "position": 10, "query": "AI labor market trends" }, { "date": "2024/03/01", "position": 82, "query": "AI wages" }, { "date": "2024/04/01", "position": 32, "query": "artificial intelligence wages" }, { "date": "2024/05/01", "position": 10, "query": "AI labor market trends" }, { "date": "2024/05/01", "position": 68, "query": "AI wages" }, { "date": "2024/06/01", "position": 10, "query": "AI labor market trends" }, { "date": "2024/07/01", "position": 69, "query": "AI wages" }, { "date": "2024/08/01", "position": 9, "query": "AI labor market trends" }, { "date": "2024/09/01", "position": 9, "query": "AI labor market trends" }, { "date": "2024/09/01", "position": 83, "query": "AI wages" }, { "date": "2024/11/01", "position": 10, "query": "AI labor market trends" }, { "date": "2025/02/01", "position": 26, "query": "AI impact jobs" }, { "date": "2025/02/01", "position": 3, "query": "AI labor market trends" }, { "date": "2025/02/01", "position": 13, "query": "AI unemployment rate" }, { "date": "2025/02/01", "position": 10, "query": "AI wages" }, { "date": "2025/02/01", "position": 10, "query": "artificial intelligence employment" }, { "date": "2025/02/01", "position": 33, "query": "artificial intelligence workers" }, { "date": "2025/03/01", "position": 69, "query": "AI wages" }, { "date": "2025/05/01", "position": 10, "query": "AI labor market trends" } ]
How GenAI will impact the labor market | EY - Global
How GenAI will impact the labor market
https://www.ey.com
[ "Gregory Daco", "Authorsalutation", "Authorfirstname Gregory Authorlastname Daco Authorjobtitle Ey-Parthenon Chief Economist", "Strategy", "Transactions", "Ernst", "Young Llp Authorurl Https", "Www.Ey.Com En_Gl People Gregory-Daco", "Content Dam Content-Fragments Ey-Unified-Site Ey-Com People Global En G Gregory-Daco", "Ey-Parthenon Chief Economist" ]
Across the US, 66% of employment (or the equivalent of 104 million jobs) is highly or moderately exposed to GenAI. The remaining 34% of occupations has low AI ...
To estimate the potential impact of GenAI across occupations, we leveraged research from Michael Webb at Stanford.1 The analysis uses a verb-noun pairing framework covering over 800 occupations and their task descriptions from O*NET. Using the description of each occupation, the approach isolates the verb-noun pairs that best describe the different job tasks, along with the frequency and importance of each task. The verb-noun pairs are then compared with patents filed for AI technology, including patent titles, descriptions, and abstracts, to see how exposed they are to AI. The sum product of each task's exposure score and the frequency of tasks in every occupation is then used to estimate an aggregate raw AI impact score per occupation. These scores are then normalized to a 0-to-1 scale. Using US Bureau of Labor Statistics (BLS) data, we then regrouped these 800 minor occupations' scores into 94 major occupational subcategories and 22 sectoral groupings.

For example, the role of an agricultural technician includes a task for "developing soil sampling grids," which has an associated verb-noun pair of "develop grid" representing 5% of an agricultural technician's functions. In turn, the verb-noun pair represents 0.05% of verb-noun pairs found in AI patent titles, descriptions, and abstracts. The sum product of all verb-noun pairs describing the agricultural technician's functions and their individual tasks' AI exposure yields the raw AI augmentation score; a minimal sketch of this computation appears below.

Our findings are quite striking: 66% of US employment has moderate to high GenAI exposure, the equivalent of 104 million jobs across the country. Within those, roughly 18% of total employment, or 28 million jobs, would have a high AI augmentation score, and 5% of employment, or 8 million jobs, would have a very high augmentation score. Importantly, the remaining 34%, which have the lowest AI exposure scores, could still be marginally affected by AI via some secondary tasks.
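The following is a minimal sketch of the sum-product scoring described above. The occupations, task weights, and patent-exposure values are invented for illustration and are not Webb's actual data; only the mechanics (weighted sum of task exposures, then min-max normalization to a 0-to-1 scale) follow the description in the text.

```python
# Minimal sketch of the occupation-level AI exposure score: each occupation
# is a list of tasks, each task has a frequency/importance weight and a
# patent-based exposure value; the raw score is their sum-product, and raw
# scores are then min-max normalized to [0, 1]. All numbers are invented.

occupations = {
    "agricultural technician": [
        # (task weight within occupation, AI-patent exposure of its verb-noun pair)
        (0.05, 0.0005),  # "develop grid": 5% of functions, 0.05% of patent pairs
        (0.60, 0.0001),
        (0.35, 0.0020),
    ],
    "paralegal": [
        (0.50, 0.0040),
        (0.50, 0.0015),
    ],
}

def raw_score(tasks):
    """Sum-product of task weights and task exposure values."""
    return sum(weight * exposure for weight, exposure in tasks)

raw = {occ: raw_score(tasks) for occ, tasks in occupations.items()}

lo, hi = min(raw.values()), max(raw.values())
normalized = {occ: (score - lo) / (hi - lo) if hi > lo else 0.0
              for occ, score in raw.items()}

for occ in occupations:
    print(f"{occ}: raw={raw[occ]:.6f}  normalized={normalized[occ]:.3f}")
```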
2024-04-15T00:00:00
2024/04/15
https://www.ey.com/en_gl/insights/ai/how-gen-ai-will-impact-the-labor-market
[ { "date": "2022/12/01", "position": 42, "query": "AI labor market trends" }, { "date": "2023/01/01", "position": 40, "query": "AI labor market trends" }, { "date": "2023/04/01", "position": 38, "query": "AI labor market trends" }, { "date": "2023/06/01", "position": 42, "query": "AI labor market trends" }, { "date": "2023/10/01", "position": 40, "query": "AI labor market trends" }, { "date": "2023/11/01", "position": 39, "query": "AI labor market trends" }, { "date": "2024/02/15", "position": 20, "query": "AI economic disruption" }, { "date": "2024/02/01", "position": 83, "query": "AI job creation vs elimination" }, { "date": "2024/02/01", "position": 43, "query": "AI labor market trends" }, { "date": "2024/02/15", "position": 82, "query": "future of work AI" }, { "date": "2024/03/01", "position": 40, "query": "AI labor market trends" }, { "date": "2024/05/01", "position": 39, "query": "AI labor market trends" }, { "date": "2024/06/01", "position": 38, "query": "AI labor market trends" }, { "date": "2024/08/01", "position": 40, "query": "AI labor market trends" }, { "date": "2024/09/01", "position": 62, "query": "AI labor market trends" }, { "date": "2024/11/01", "position": 40, "query": "AI labor market trends" }, { "date": "2025/02/01", "position": 78, "query": "AI job creation vs elimination" }, { "date": "2025/05/01", "position": 88, "query": "AI job creation vs elimination" }, { "date": "2025/05/01", "position": 47, "query": "AI labor market trends" }, { "date": "2025/06/01", "position": 78, "query": "AI job creation vs elimination" } ]
Future of Work Report: AI at Work - LinkedIn's Economic Graph
Future of Work Report: AI at Work
https://economicgraph.linkedin.com
[]
The Future of Work Report: AI at Work, November 2023 uncovers the latest real-time trends we see around how AI is ushering a new world of work.
Preparing the Workforce for Generative AI

We analyzed LinkedIn data to identify skills that stand to be affected by new AI technologies, and those that will likely complement them. The insights from this analysis can be used by leaders to inform strategic workforce planning and skilling initiatives.
2022-12-01T00:00:00
https://economicgraph.linkedin.com/research/future-of-work-report-ai
[ { "date": "2022/12/01", "position": 47, "query": "AI labor market trends" }, { "date": "2023/01/01", "position": 48, "query": "AI labor market trends" }, { "date": "2023/01/01", "position": 61, "query": "artificial intelligence workers" }, { "date": "2023/02/01", "position": 24, "query": "future of work AI" }, { "date": "2023/02/01", "position": 59, "query": "artificial intelligence workers" }, { "date": "2023/04/01", "position": 60, "query": "AI labor market trends" }, { "date": "2023/04/01", "position": 60, "query": "artificial intelligence workers" }, { "date": "2023/04/01", "position": 17, "query": "future of work AI" }, { "date": "2023/05/01", "position": 22, "query": "future of work AI" }, { "date": "2023/06/01", "position": 54, "query": "AI labor market trends" }, { "date": "2023/08/01", "position": 58, "query": "artificial intelligence workers" }, { "date": "2023/09/01", "position": 57, "query": "artificial intelligence workers" }, { "date": "2023/09/01", "position": 24, "query": "future of work AI" }, { "date": "2023/10/01", "position": 56, "query": "AI labor market trends" }, { "date": "2023/10/01", "position": 22, "query": "future of work AI" }, { "date": "2023/11/21", "position": 54, "query": "AI hiring" }, { "date": "2023/11/01", "position": 57, "query": "AI labor market trends" }, { "date": "2023/11/21", "position": 41, "query": "artificial intelligence employers" }, { "date": "2023/11/01", "position": 55, "query": "artificial intelligence workers" }, { "date": "2023/11/21", "position": 6, "query": "future of work AI" }, { "date": "2024/01/01", "position": 60, "query": "artificial intelligence workers" }, { "date": "2024/02/01", "position": 48, "query": "AI labor market trends" }, { "date": "2024/03/01", "position": 46, "query": "AI labor market trends" }, { "date": "2024/03/01", "position": 97, "query": "artificial intelligence workers" }, { "date": "2024/03/01", "position": 22, "query": "future of work AI" }, { "date": "2024/04/01", "position": 23, "query": "future of work AI" }, { "date": "2024/05/01", "position": 58, "query": "AI labor market trends" }, { "date": "2024/06/01", "position": 23, "query": "future of work AI" }, { "date": "2024/07/01", "position": 58, "query": "artificial intelligence workers" }, { "date": "2024/07/01", "position": 22, "query": "future of work AI" }, { "date": "2024/08/01", "position": 55, "query": "AI labor market trends" }, { "date": "2024/09/01", "position": 52, "query": "AI labor market trends" }, { "date": "2024/11/01", "position": 57, "query": "AI labor market trends" }, { "date": "2025/03/01", "position": 13, "query": "future of work AI" }, { "date": "2025/05/01", "position": 41, "query": "AI labor market trends" }, { "date": "2025/05/01", "position": 63, "query": "artificial intelligence workers" }, { "date": "2025/06/01", "position": 65, "query": "artificial intelligence workers" } ]
Generative AI and Jobs: A global analysis of potential effects on job ...
Generative AI and Jobs: A global analysis of potential effects on job quantity and quality.
https://www.ilo.org
[]
The International Labour Organization's study, Generative AI and Jobs: A global analysis of potential effects on job quantity and quality, suggests that ...
Participants: Janine Berg, Senior Economist, ILO Research; Pawel Gmyrek, Senior Researcher, ILO Research; Yves Perardel, Senior Statistician, ILOSTAT; Moderator: Deputy Director-General, ILO. The International Labour Organization's study, Generative AI and Jobs: A global analysis of potential effects on job quantity and quality, suggests that Generative AI is more likely to enhance jobs by automating specific tasks rather than replacing entire roles. Most jobs across various industries are only partly susceptible to automation, suggesting that AI will complement rather than substitute these roles. The impact of Generative AI is expected to be more on the quality of jobs, particularly in terms of work intensity and autonomy, rather than on job destruction. The study finds that clerical work is the most susceptible to technological automation, with a significant portion of tasks highly exposed to automation, particularly affecting women in high- and middle-income countries due to their overrepresentation in clerical roles. Other occupational groups like managers, professionals, and technicians have a smaller fraction of tasks at high risk of automation. Globally, the study notes differences in automation's impact based on a country's level of development and technological infrastructure. High-income countries show a 5.5% employment exposure to automation, whereas low-income countries have only a 0.4% risk. However, the potential for job augmentation by AI is nearly equal across countries, indicating that, with the right policies, technological advancements could be beneficial for developing nations. The study concludes that the social and economic impacts of Generative AI will largely depend on the management of its integration. It highlights the necessity of policies that facilitate a fair and consultative transition, emphasizing the importance of workers' input, skills training, and adequate social protection. The authors underline that the outcomes of AI integration are not predetermined but depend on human decisions and guidance throughout the transition.
2023-11-27T00:00:00
2023/11/27
https://www.ilo.org/meetings-and-events/generative-ai-and-jobs-global-analysis-potential-effects-job-quantity-and
[ { "date": "2022/12/01", "position": 49, "query": "AI labor market trends" }, { "date": "2023/01/01", "position": 47, "query": "AI labor market trends" }, { "date": "2023/04/01", "position": 48, "query": "AI labor market trends" }, { "date": "2023/06/01", "position": 49, "query": "AI labor market trends" }, { "date": "2023/08/21", "position": 2, "query": "AI labor market trends" }, { "date": "2023/08/21", "position": 10, "query": "artificial intelligence labor union" }, { "date": "2023/10/01", "position": 53, "query": "AI labor market trends" }, { "date": "2023/11/01", "position": 50, "query": "AI labor market trends" }, { "date": "2024/02/01", "position": 59, "query": "AI labor market trends" }, { "date": "2024/03/01", "position": 54, "query": "AI labor market trends" }, { "date": "2024/05/01", "position": 50, "query": "AI labor market trends" }, { "date": "2024/06/01", "position": 48, "query": "AI labor market trends" }, { "date": "2024/08/01", "position": 48, "query": "AI labor market trends" }, { "date": "2024/09/01", "position": 51, "query": "AI labor market trends" }, { "date": "2024/11/01", "position": 60, "query": "AI labor market trends" }, { "date": "2025/05/01", "position": 73, "query": "AI labor market trends" } ]
The Impact of AI on the Job Market: Navigating the Evolution of Work
The Impact of AI on the Job Market: Navigating the Evolution of Work
https://www.launchconsulting.com
[]
These trends will reshape job roles and demand new skills in areas such as AI ethics, cybersecurity, and human-AI collaboration. The evolving AI landscape ...
Written by Davood Ghods. In recent years, AI technology has experienced exponential growth, driven by breakthroughs in machine learning, natural language processing, and computer vision. The integration of big data and powerful computing resources has further accelerated progress. AI's ability to analyze vast datasets, recognize patterns, and make predictions has led to its widespread adoption across various industries, revolutionizing the way tasks are performed and decisions are made. AI’s Impact on the Job Market AI's impact on the job market is a critical aspect of its integration into society. AI's ability to process large volumes of data quickly and accurately contributes to increased efficiency and productivity in various industries. This, in turn, leads to economic growth and creates opportunities for innovation. As AI technology advances, new job roles emerge in AI development, maintenance, and oversight. This includes positions such as machine learning engineers, data scientists, and AI ethicists, highlighting the positive impact of AI on job creation. Despite its advantages, the widespread adoption of AI raises concerns about job displacement and unemployment. As automation takes over routine tasks, certain job roles may become obsolete, necessitating reskilling and upskilling. The evolving job market demands a workforce with updated skills. The skills gap, wherein the existing workforce lacks the necessary expertise, underscores the importance of continuous learning and upskilling to stay relevant in an AI-driven economy. The ethical considerations surrounding AI applications are crucial. Issues such as bias in algorithms, privacy concerns, and the potential misuse of AI technologies emphasize the need for ethical guidelines and regulations to ensure responsible AI deployment. AI’s industry impact ranges from streamlining operations and enhancing efficiency to fostering innovation and redefining customer experiences. By automating routine tasks, AI allows for a greater focus on strategic and creative endeavors, unlocking new opportunities for growth. AI's data processing capabilities enable better decision-making, driven by insights that were previously inaccessible due to the sheer volume or complexity of data. Moreover, AI is instrumental in identifying and developing new products and services, tailored to evolving market demands and consumer preferences. Its influence extends to improving risk management, optimizing supply chains, and advancing sustainability efforts. As AI continues to evolve, it's clear that its integration is not just a trend, but a fundamental shift - setting a new standard for how industries operate and compete in an increasingly digital world. Numerous industries have successfully integrated AI to enhance their operations. For instance, companies like Amazon and Tesla have optimized their supply chains through AI-driven logistics and manufacturing processes, leading to increased efficiency and cost savings. Here are a few additional real-world examples of AI’s impact on various industries: Healthcare: The integration of AI in healthcare is transforming patient care, healthcare administration, medical imaging, patient monitoring, and more, marking a shift towards a more efficient, accessible, and patient-centric healthcare ecosystem. Through advanced algorithms, AI assists in more accurate diagnostics by analyzing medical images and patient data, significantly reducing diagnostic errors.
AI-driven predictive analytics are being used for personalized treatment plans, tailoring healthcare to individual patient needs. Moreover, AI is accelerating drug discovery by analyzing vast datasets to identify potential therapeutic compounds more efficiently. With robotics and AI, surgeries are becoming more precise, reducing recovery times, and improving patient outcomes. The adoption of AI in healthcare not only enhances patient care but also streamlines administrative processes, leading to improved efficiency in healthcare delivery. Finance: In the finance industry, AI is transforming roles by automating data-intensive tasks like risk assessment and financial analysis, allowing finance professionals to focus on strategic decision-making and innovation. Additionally, AI-driven technologies are leading to the emergence of new roles in AI oversight, ethical AI implementation, and data science, reflecting a shift towards more specialized and technical skill sets in finance. AI is also at the forefront of fraud detection and risk management, using pattern recognition to identify fraudulent activities and assess credit risks accurately. In personal finance, AI-driven chatbots and virtual assistants provide customers with instant, personalized financial advice, transforming customer service. Additionally, predictive analytics in finance helps companies and investors make more informed decisions by forecasting market trends and investment risks. Manufacturing: AI is revolutionizing roles in manufacturing by automating routine tasks and enhancing decision-making through predictive analytics. This shift is leading to new roles centered around managing AI systems, analyzing data, and maintaining advanced robotic equipment, thereby transforming traditional manufacturing jobs into more high-tech, skill-focused positions. Through machine learning algorithms and robotics, AI optimizes production lines, enhancing efficiency and reducing operational costs. Predictive maintenance, powered by AI, anticipates equipment failures before they occur, minimizing downtime and prolonging machinery life. AI's ability to analyze production data helps in identifying bottlenecks and optimizing resource utilization. Additionally, AI-driven quality control systems ensure higher standards of product quality by detecting defects that are imperceptible to the human eye. This integration of AI is not only revolutionizing the manufacturing processes but also paving the way for smart factories of the future.
Education: AI is impacting roles in education by automating administrative tasks like grading and attendance, allowing educators to focus more on teaching and personalized instruction. It's also creating new roles focused on AI integration and educational data analysis, as AI-driven tools are increasingly used for customized learning and curriculum development. Adaptive learning systems powered by AI analyze students' learning patterns and tailor educational content to suit individual needs, abilities, and learning styles. This personalized approach facilitates a deeper understanding and retention of knowledge. AI is also enhancing the role of educators, enabling them to provide more focused guidance and support. In higher education and professional training, AI is being used to develop sophisticated educational simulations and virtual environments, providing learners with practical, hands-on experiences in a controlled setting. In each of these industries, AI is not just a tool for efficiency and innovation; it represents a paradigm shift in how we approach problems, make decisions, and optimize outcomes. Strategies for Adapting to the Changing Job Market To thrive in the age of AI, individuals must embrace continuous learning and upskilling. This involves staying updated on industry trends, acquiring new skills, and adapting to evolving job requirements. The emergence of AI technologies and their resulting demand for skills in machine learning, data science, and programming underscore the essential need for a collaborative approach between industries and educational institutions. The demand for skills is shifting from manual and routine tasks to those requiring a combination of technical proficiency and soft skills. Adaptability and a willingness to embrace new technologies are becoming essential for success in the evolving job market. Industry partnerships with educational institutions can help tailor academic programs to the skills demanded by the job market, ensuring a better alignment between education and industry needs. In addition to industry partnerships with educational institutions, government policies play a crucial role in supporting workforce transition in the face of AI advancements. Policies that encourage reskilling programs, provide financial assistance, and promote a culture of lifelong learning can contribute to a smoother transition for the workforce.
A Look Ahead The future of AI promises even more transformative trends, such as the integration of AI in decision-making processes, the rise of autonomous systems, and the proliferation of AI-powered virtual assistants. These trends will reshape job roles and demand new skills in areas such as AI ethics, cybersecurity, and human-AI collaboration. The evolving AI landscape creates opportunities for innovation and entrepreneurship. Startups focused on AI applications, ethical AI development, and AI consulting services are likely to thrive as businesses seek to harness the benefits of AI. AI will play a pivotal role in shaping the future of work. Human-AI collaboration, creative problem-solving, and emotional intelligence will become increasingly valuable alongside technical skills. The future workplace will be characterized by adaptability, continuous learning, and a symbiotic relationship between humans and AI. The impact of AI on the job market is undeniable, ushering in a new era of work characterized by automation, efficiency, and the need for diverse skill sets. Proactive adaptation and the embrace of change are crucial for individuals and industries alike. A balanced approach to AI integration, addressing challenges through upskilling, collaboration, and ethical considerations, will pave the way for a harmonious coexistence between humans and AI in the evolving job market. As we navigate this transformative period, the key to success lies in our ability to harness the potential of AI while preserving the core values of a dynamic and inclusive workforce. Launch is on a mission to help every large and growing organization navigate a data and AI-First strategy. Is your org ready? Take our free AI Readiness Self-Assessment to find out.
2022-12-01T00:00:00
https://www.launchconsulting.com/posts/the-impact-of-ai-on-the-job-market-navigating-the-evolution-of-work
[ { "date": "2022/12/01", "position": 60, "query": "AI labor market trends" }, { "date": "2023/01/01", "position": 55, "query": "AI labor market trends" }, { "date": "2023/04/01", "position": 71, "query": "AI labor market trends" }, { "date": "2023/06/01", "position": 66, "query": "AI labor market trends" }, { "date": "2023/10/01", "position": 68, "query": "AI labor market trends" }, { "date": "2023/11/01", "position": 70, "query": "AI labor market trends" }, { "date": "2024/01/08", "position": 10, "query": "AI impact jobs" }, { "date": "2024/01/08", "position": 4, "query": "AI labor market trends" }, { "date": "2024/01/08", "position": 66, "query": "AI skills gap" }, { "date": "2024/01/08", "position": 22, "query": "automation job displacement" }, { "date": "2024/01/08", "position": 5, "query": "machine learning job market" }, { "date": "2024/01/08", "position": 9, "query": "machine learning workforce" }, { "date": "2024/01/08", "position": 14, "query": "robotics job displacement" }, { "date": "2024/02/01", "position": 65, "query": "AI labor market trends" }, { "date": "2024/03/01", "position": 63, "query": "AI labor market trends" }, { "date": "2024/05/01", "position": 69, "query": "AI labor market trends" }, { "date": "2024/06/01", "position": 64, "query": "AI labor market trends" }, { "date": "2024/08/01", "position": 63, "query": "AI labor market trends" }, { "date": "2024/09/01", "position": 64, "query": "AI labor market trends" }, { "date": "2024/11/01", "position": 68, "query": "AI labor market trends" } ]
AI's impact on jobs, in five stats - PwC
AI’s impact on jobs, in five stats
https://www.pwc.com
[]
Data from PwC's AI Jobs Barometer suggests that AI is already making workers much more productive. · In occupations most exposed to AI, job openings are still ...
Gen AI is a tool for growth, not just efficiency Tech CEOs are investing to build their workforce and capitalise on new opportunities from generative AI. That’s a sharp contrast to how their peers view it.
2022-12-01T00:00:00
https://www.pwc.com/gx/en/issues/c-suite-insights/the-leadership-agenda/AI-jobs-impact.html
[ { "date": "2022/12/01", "position": 68, "query": "AI labor market trends" }, { "date": "2023/01/01", "position": 21, "query": "AI impact jobs" }, { "date": "2023/01/01", "position": 75, "query": "AI labor market trends" }, { "date": "2023/02/01", "position": 22, "query": "AI impact jobs" }, { "date": "2023/03/01", "position": 20, "query": "AI impact jobs" }, { "date": "2023/04/01", "position": 85, "query": "AI labor market trends" }, { "date": "2023/06/01", "position": 81, "query": "AI labor market trends" }, { "date": "2023/07/01", "position": 21, "query": "AI impact jobs" }, { "date": "2023/08/01", "position": 22, "query": "AI impact jobs" }, { "date": "2023/09/01", "position": 29, "query": "AI impact jobs" }, { "date": "2023/10/01", "position": 81, "query": "AI labor market trends" }, { "date": "2023/11/01", "position": 21, "query": "AI impact jobs" }, { "date": "2023/11/01", "position": 84, "query": "AI labor market trends" }, { "date": "2023/12/01", "position": 20, "query": "AI impact jobs" }, { "date": "2024/02/01", "position": 21, "query": "AI impact jobs" }, { "date": "2024/02/01", "position": 74, "query": "AI labor market trends" }, { "date": "2024/03/01", "position": 71, "query": "AI labor market trends" }, { "date": "2024/04/01", "position": 19, "query": "AI impact jobs" }, { "date": "2024/06/01", "position": 9, "query": "AI employment" }, { "date": "2024/05/01", "position": 19, "query": "AI impact jobs" }, { "date": "2024/05/01", "position": 84, "query": "AI labor market trends" }, { "date": "2024/06/01", "position": 21, "query": "artificial intelligence employment" }, { "date": "2024/06/01", "position": 10, "query": "AI impact jobs" }, { "date": "2024/06/01", "position": 81, "query": "AI labor market trends" }, { "date": "2024/06/01", "position": 36, "query": "AI unemployment rate" }, { "date": "2024/06/01", "position": 18, "query": "AI workforce transformation" }, { "date": "2024/06/01", "position": 96, "query": "artificial intelligence hiring" }, { "date": "2024/06/01", "position": 28, "query": "artificial intelligence workers" }, { "date": "2024/08/01", "position": 78, "query": "AI labor market trends" }, { "date": "2024/09/01", "position": 21, "query": "AI impact jobs" }, { "date": "2024/09/01", "position": 77, "query": "AI labor market trends" }, { "date": "2024/10/01", "position": 21, "query": "AI impact jobs" }, { "date": "2024/11/01", "position": 22, "query": "AI impact jobs" }, { "date": "2024/11/01", "position": 84, "query": "AI labor market trends" }, { "date": "2025/05/01", "position": 26, "query": "AI impact jobs" }, { "date": "2025/05/01", "position": 62, "query": "AI labor market trends" } ]
How artificial intelligence affects the labour force employment ...
How artificial intelligence affects the labour force employment structure from the perspective of industrial structure optimisation
https://pmc.ncbi.nlm.nih.gov
[ "Xiaowen Wang", "School Of Economics", "Lanzhou University", "Lanzhou", "Mingyue Chen", "Nanxu Chen" ]
[20] argue, based on skill-bias theory, that AI would indeed impact the labour force employment structure, posing a high risk of job displacement for medium- ...
To investigate how artificial intelligence (AI) affects the structure of labour force employment, we integrate robotics adoption and employment into this study's model. Based on Chinese provincial panel data from 2010 to 2019, fixed, mediating and threshold effects models and a spatial heterogeneity model were used to empirically test the impact of AI on the employment structure from the perspective of industrial structure optimisation and its mechanisms of action. The findings demonstrate that the impact of AI on the labour force employment structure reflects unique characteristics for China and promotes the advancement of the nation's employment structure. The influence of AI on the labour force employment structure follows a non-linear pattern, fostering labour force employment structure optimisation and upgrading from the perspective of industrial structure optimisation. Further investigation reveals the influence of spatial spillover effects from AI on employment structure optimisation. These research findings have theoretical value and practical significance for optimising China's employment structure in the context of AI. 1. Introduction The rise of the new industrial revolution, characterised by digitalisation, networking and intelligence, has made artificial intelligence (AI) a driving force for the continued advancement of humankind. AI is highly valued by countries worldwide. For instance, Germany's Industry 4.0 aims to create smart manufacturing, Japan's Society 5.0 emphasises the use of AI to serve human beings and the United Kingdom's New Deal for the Artificial Intelligence Sector plans AI development from multiple perspectives to reshape the global industrial division of labour through industrial upgrading and intelligent transformation [1]. The labour force employment structure in the United States and several European countries has begun to polarise as a concomitant phenomenon, characterised by an increase in high-skilled and low-skilled jobs and a decrease in jobs in medium-skilled sectors [2]. As the foremost emerging economy, China has adopted a national blueprint to seize the transformative prospects presented by the latest industrial revolution wave. In 2016, this blueprint integrated the concept of AI in the 13th Five-Year Plan. Following this, in 2017, the State Council unveiled the New-Generation Artificial Intelligence Development Plan. The report of the 19th National Congress accentuated the fusion of AI with the tangible economy to foster growth. In October 2018, the Political Bureau of the CPC Central Committee convened for its ninth group study, focusing on AI's evolution and future trajectory. The session, chaired by General Secretary Xi Jinping, highlighted the pivotal role of AI as a catalyst for the forthcoming scientific and technological revolution and industrial transformation. General Secretary Xi advocated accelerating the advancement of next-gen AI, pinpointing it as a strategic imperative for China to navigate and leverage the upcoming wave of global scientific, technological and industrial evolution. In February 2020, the Central Committee for Comprehensively Deepening Reform put forward proposals to harness digital technologies, including big data, AI and cloud computing, aiming to enhance support for tasks such as epidemic surveillance, virus tracking and management of prevention, control, treatment and logistical allocations during health crises.
These initiatives have accelerated technological and industrial change and promoted the deep integration of the real economy with AI. They have also engendered massive investment in advanced equipment embedded with cutting-edge technologies. Owing to the long-standing policy of prioritising capital deepening, coupled with a shortened technology replacement cycle and enterprises' increased labour costs, Chinese companies have replaced human labour with machines to enhance productivity and competitiveness in the new era [3]. Against this background, China's industrial intelligence is entering a phase of rapid development, which is bound to have a profound impact on the nation's employment structure. Notable differences are evident in the industrial intelligence and upgrading process between China's eastern, central and western regions due to unbalanced economic development. The high cost of living in developed regions has a significant impact on the supply and mobility of labour with different skills. Considering the reality of unbalanced regional development, China's labour force employment structure in the new period may have unique characteristics that differ from those of Western industrialised countries [4]. It is widely agreed that AI impacts employment through substitution and creation effects [5]. AI development can lower the cost of machinery and equipment [6], leading firms to replace human labour with robots [7], and increase production automation [8], resulting in substitution effects [4] that ultimately cause job loss [9]. AI development can also have a positive impact on employment. In addition to expanding production scale by reducing costs, promoting capital accumulation and increasing the productivity of machines and equipment [10], AI complements the skills of the labour force in research and development (R&D) and design and communication [11], opening new job opportunities [12]. A lack of consensus remains among scholars regarding how AI affects employment structure, based on the substitution and creation effects [13,14]. In the context of industrial upgrading, research on the impact of AI on employment structure is particularly important for China, where population ageing is becoming increasingly serious. Furthermore, research on the impact of AI on the labour force employment structure tends to adopt a linear perspective, with fewer analyses of non-linear impacts. It is essential to explore non-linear impact trends. Additionally, while the impact of AI on local employment structure is often explored, previous research largely ignores its spatial spillover effects on employment structure. The contributions of this study are manifold. (1) The study derives inspiration from Bartik's instrumental variables [15], constructing a new regional industrial robot penetration index, using robotics data from the International Federation of Robotics (IFR) and Chinese labour employment structural data from 2010 to 2019. We evaluate AI's influence on labour market employment configurations by applying instrumental variables techniques, while acknowledging China's unique context. (2) The Development Research Centre of the State Council's Department of Development Strategy and Regional Economy classified mainland China into eight economic territories based on developmental profiles. We employ this regional demarcation to dissect AI's differential effects on employment structure across these zones. (3) This study investigates the non-linear impacts of AI on the labour force employment structure.
Analysing threshold effects, we demonstrate that AI's impact on employment exhibits clear threshold effects for high- and low-skilled labour, but such effects are not evident for medium-skilled labour. (4) This study goes beyond the conventional emphasis on industrial structure upgrading, differentiating between the advancement and the rationalisation of industrial structure to examine the impact of each construct on employment structure under AI's mediating role. We posit that the mediating influence of industrial structure rationalisation on employment is comparatively subdued. (5) AI will promote the transformation and upgrading of local industries and drive industrial upgrading, scale expansion and economic development in surrounding areas, generating spatial spillover effects on the employment structure.
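To make the exposure measure concrete, here is a minimal sketch of a Bartik-style regional robot penetration index of the kind the study describes: national robot intensity by industry, weighted by each region's industry employment shares. The column names and toy figures are illustrative assumptions, and the paper's exact construction may differ.

```python
# Sketch of a Bartik-style regional robot penetration index, assuming
# IFR-style robot stocks by industry and regional employment by
# industry. All numbers below are hypothetical, for illustration only.
import pandas as pd

# National robot stock and employment by industry (hypothetical values)
national = pd.DataFrame({
    "industry": ["automotive", "electronics", "textiles"],
    "robots": [30000, 12000, 800],            # installed robot stock
    "employment": [1_500_000, 2_000_000, 900_000],
})
national["robots_per_1k_workers"] = 1000 * national["robots"] / national["employment"]

# Regional employment by industry (hypothetical values)
regional = pd.DataFrame({
    "region": ["east", "east", "east", "west", "west", "west"],
    "industry": ["automotive", "electronics", "textiles"] * 2,
    "employment": [400_000, 700_000, 100_000, 50_000, 80_000, 300_000],
})
regional["emp_share"] = (
    regional["employment"] / regional.groupby("region")["employment"].transform("sum")
)

# Exposure = sum over industries of
#   (regional employment share x national robot intensity)
merged = regional.merge(national[["industry", "robots_per_1k_workers"]], on="industry")
merged["weighted"] = merged["emp_share"] * merged["robots_per_1k_workers"]
exposure = merged.groupby("region")["weighted"].sum()
print(exposure)  # robot penetration index per region
```

In this weighting scheme, a region specialised in robot-intensive industries (here, "east" with its automotive share) receives a higher exposure score even though robot adoption is measured nationally, which is what makes the index usable as an instrument.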
2024-02-22T00:00:00
2024/02/22
https://pmc.ncbi.nlm.nih.gov/articles/PMC10907740/
[ { "date": "2022/12/01", "position": 81, "query": "AI labor market trends" }, { "date": "2023/01/01", "position": 91, "query": "AI labor market trends" }, { "date": "2023/01/01", "position": 94, "query": "artificial intelligence workers" }, { "date": "2023/02/01", "position": 98, "query": "artificial intelligence workers" }, { "date": "2023/04/01", "position": 95, "query": "AI labor market trends" }, { "date": "2023/06/01", "position": 90, "query": "AI labor market trends" }, { "date": "2023/08/01", "position": 80, "query": "artificial intelligence workers" }, { "date": "2023/10/01", "position": 98, "query": "AI labor market trends" }, { "date": "2023/11/01", "position": 95, "query": "AI labor market trends" }, { "date": "2023/11/01", "position": 75, "query": "artificial intelligence workers" }, { "date": "2024/01/01", "position": 81, "query": "artificial intelligence workers" }, { "date": "2024/02/01", "position": 12, "query": "AI economic disruption" }, { "date": "2024/02/01", "position": 97, "query": "AI labor market trends" }, { "date": "2024/02/01", "position": 5, "query": "AI unemployment rate" }, { "date": "2024/02/01", "position": 100, "query": "future of work AI" }, { "date": "2024/02/01", "position": 6, "query": "robotics job displacement" }, { "date": "2024/03/01", "position": 89, "query": "AI labor market trends" }, { "date": "2024/03/01", "position": 90, "query": "artificial intelligence workers" }, { "date": "2024/05/01", "position": 95, "query": "AI labor market trends" }, { "date": "2024/06/01", "position": 93, "query": "AI labor market trends" }, { "date": "2024/08/01", "position": 95, "query": "AI labor market trends" }, { "date": "2024/09/01", "position": 89, "query": "AI labor market trends" }, { "date": "2025/05/01", "position": 80, "query": "AI labor market trends" } ]
The double-edged sword of AI: Will we lose our jobs or ... - Statista
Statista
https://www.statista.com
[]
Bringing together data-driven intelligence, expert perspectives, and extensive research to provide insights and future outlooks on current trends, market data, ...
See why Statista is the trusted choice for reliable data and insights. We provide one platform to simplify research and support your strategic decisions.
2022-12-01T00:00:00
https://www.statista.com/site/insights-compass-ai-future-ai-work
[ { "date": "2022/12/01", "position": 88, "query": "AI labor market trends" }, { "date": "2023/01/01", "position": 94, "query": "AI labor market trends" }, { "date": "2023/07/15", "position": 10, "query": "AI labor market trends" }, { "date": "2023/07/15", "position": 45, "query": "artificial intelligence employment" }, { "date": "2023/07/15", "position": 71, "query": "artificial intelligence workers" }, { "date": "2023/07/15", "position": 3, "query": "job automation statistics" }, { "date": "2024/02/01", "position": 35, "query": "job automation statistics" }, { "date": "2024/06/01", "position": 99, "query": "AI labor market trends" }, { "date": "2024/08/01", "position": 98, "query": "AI labor market trends" } ]
Artificial Intelligence & the Labour Market, The Future of Work is Now
Artificial Intelligence & the Labour Market, The Future of Work is Now
https://whiteshield.ai
[]
After several days of training, this method made errors amounting to 7.32% on average. In contrast, a newer approach using the AI Satellite Economic Analysis ...
Bridging the Data Gap in Policymaking: The Role of AI and Alternative Data. Governments are far behind in the race to leverage big data. Can generative AI help level the playing field? April 10, 2024 | Written by Hugo Zlotowski (Policy Advisor, AI Economics Unit). In the digital age, governments are no longer the only entities collecting data on citizens. More alarmingly, private companies have become more efficient than administrations at gathering information. Traditional governmental methods - telephone surveys, forms and in-person interviews - continue to be the benchmark in accuracy, yet their inherent inefficiencies are becoming more apparent. In a typical OECD country, over 3,000 employees work in the National Statistics Department, undertaking the heavy lifting of government data collection. Costly and inefficient cycles that span years from inception to completion stand in stark contrast to the real-time analytics deployed by tech giants. Their platforms are capable of monitoring millions of users daily, highlighting the growing need for faster and leaner techniques to complement traditional data collection approaches. The company behind Facebook and Instagram, Meta, has collected data on 77% of all internet users, with about 3.9 billion people active on at least one of its platforms as of Q3 2023 (Statista); Facebook alone had 3 billion monthly active users as of Q3 2023. The issue is not limited to collection methodologies. National statistics departments struggle to provide the granular, timely data required for modern policymaking. In emerging countries, the data itself is often incomplete: the informal economy, which accounts for 15% of the OECD's GDP, remains largely uncharted by conventional methods, highlighting a gap in government data. The Middle East mirrors this global trend of data disparities, grappling with the challenge of providing updated and granular data of quality. The availability of government data in the Middle East and North Africa (MENA) region is 44% lower than the global average, and 54% lower than that of the G20, signaling a significant lag in data transparency and accessibility. Only about half the countries in the MENA region are up to date with their health, labour force, and consumption surveys (Data & Policy, 2022). [Chart: Open Data Barometer Global Report data availability scores - G-20 average 56.8, world average 32.5, UAE 26.2, MENA 18.2 - alongside a Business Data Transparency Index ranking for the UAE.] At the same time, policymaking is increasingly granular (see Whiteshield's Citizen Navigator). As governments create dynamic policies that are tailored to individual citizens, the need for timely and detailed insights becomes critical. To navigate this new terrain, governments must evolve by adopting big data and AI-driven analytics. Whiteshield's AI Economics Unit is at the forefront of integrating advanced AI capabilities into practical applications. Our latest research showcases the remarkable potential of new AI tools to support data collection for regional economic assessment. Transforming Satellite Imagery into Economic Insights In assessing a region's economic health, the conventional barometer has been its Gross Regional Product (GRP), similar to GDP on a more localised scale.
This measure is central to policymakers' ability to formulate nuanced, region-specific strategies. But in many emerging countries, where robust data is still often scarce, Whiteshield has turned to innovative approaches, notably the use of satellite imagery, to measure regional economic performance. To achieve this, the process involves breaking down satellite images into smaller, manageable pieces. These pieces are then analysed by artificial intelligence algorithms. Specifically, the current state-of-the-art is a type of machine learning known as Convolutional Neural Networks (CNNs). These networks require humans to first prepare extensive sets of example images, each one meticulously categorized by hand, to teach the system how to recognize various features. This process is both resource-intensive and technically challenging, particularly in developing countries with unique landscapes, such as those in the Middle East. The Power of Language Models in Regional Analysis Whiteshield is leading the way with an innovative solution in this field. The AI Economic Analysis Tool uses Large Language AI Models, such as OpenAI's GPT-4, to analyse satellite images. This approach is gaining recognition for its potential versatility across different fields of socio-economic research, in contrast with current expert systems which are tailored for specific tasks. In a study examining 72 detailed satellite images of Dubai, the two different methods were compared for accuracy. The first method, a Convolutional Neural Network (CNN), was trained using images that had been pre-labelled by humans. After several days of training, this method made errors amounting to 7.32% on average. In contrast, a newer approach using the AI Satellite Economic Analysis Tool and GPT-4, which did not rely on any pre-labelled images and took only minutes to run, achieved an error rate of 11.9%. While slightly less accurate overall, Whiteshield's approach showed superior performance at identifying buildings and roads, which are key elements in assessing economic health. The Whiteshield approach outperforms the current method in building and road identification. [Chart: Dubai dataset experiment - average model error by category, comparing the current method (CNN) with the Whiteshield approach (LLM).]
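For readers curious what the tile-and-classify pattern described above might look like in practice, here is a minimal sketch: split a satellite scene into tiles and ask a vision-capable LLM to label the land use of each tile. The model choice, prompt, tile size, and file name are all assumptions; Whiteshield's actual pipeline is not public.

```python
# Minimal sketch of tiling a satellite image and querying a vision LLM
# for per-tile land-use labels. Model name ("gpt-4o"), prompt wording,
# tile size, and "dubai_scene.png" are illustrative assumptions.
import base64
import io

from openai import OpenAI
from PIL import Image

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def tiles(image_path: str, size: int = 512):
    """Yield (x, y, PNG bytes) for each size x size tile of the image."""
    img = Image.open(image_path)
    for y in range(0, img.height - size + 1, size):
        for x in range(0, img.width - size + 1, size):
            buf = io.BytesIO()
            img.crop((x, y, x + size, y + size)).save(buf, format="PNG")
            yield x, y, buf.getvalue()

def classify_tile(png_bytes: bytes) -> str:
    """Ask the model for a one-word land-use label for one tile."""
    data_url = "data:image/png;base64," + base64.b64encode(png_bytes).decode()
    resp = client.chat.completions.create(
        model="gpt-4o",  # assumed model choice
        messages=[{
            "role": "user",
            "content": [
                {"type": "text", "text": "Label this satellite tile with one "
                 "word: building, road, vegetation, water, or bare_land."},
                {"type": "image_url", "image_url": {"url": data_url}},
            ],
        }],
    )
    return resp.choices[0].message.content.strip()

for x, y, png in tiles("dubai_scene.png"):
    print(x, y, classify_tile(png))
```

The appeal of this zero-shot setup, as the article notes, is that it needs no hand-labelled training set; the trade-off is a somewhat higher overall error rate than a purpose-trained CNN.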
2024-08-15T00:00:00
2024/08/15
https://whiteshield.ai/article/artificial-intelligence-the-labour-market-the-future-of-work-is-now/
[ { "date": "2022/12/01", "position": 94, "query": "AI labor market trends" }, { "date": "2023/04/01", "position": 88, "query": "AI labor market trends" }, { "date": "2023/10/01", "position": 94, "query": "AI labor market trends" }, { "date": "2023/11/01", "position": 88, "query": "AI labor market trends" }, { "date": "2024/03/01", "position": 98, "query": "AI labor market trends" }, { "date": "2024/05/01", "position": 88, "query": "AI labor market trends" }, { "date": "2024/06/01", "position": 84, "query": "AI labor market trends" }, { "date": "2024/08/01", "position": 89, "query": "AI labor market trends" }, { "date": "2024/09/01", "position": 96, "query": "AI labor market trends" }, { "date": "2024/11/01", "position": 95, "query": "AI labor market trends" } ]
New AI Laws May Go Into Effect As Early As July 1, 2025
New AI Laws May Go Into Effect As Early As July 1, 2025
https://www.callaborlaw.com
[ "Linda Wang", "Mark S. Spring" ]
Employers must keep AI-related records for four years. These include applications, personnel files, and data from automated decision systems.
On March 21, 2025, the California Civil Rights Council adopted its final regulations regarding automated decision-making systems. In the employment context, automated decision-making systems can include a wide range of tools that employers use to increase efficiency, such as AI systems to assist in hiring, firing, promotion, cost-cutting, and more. In response to businesses’ growing interest in utilizing automated decision systems, California legislators are looking to address legal concerns associated with such systems. Previously, on February 7, 2025, the Civil Rights Council published its second round of modifications to proposed employment regulations regarding automated decision systems and invited public comments. The public comment period closed on February 24, 2025, and shortly afterwards the Civil Rights Council adopted the most recent proposed regulations as final. Once the final regulations are approved by the Office of Administrative Law and published by the Secretary of State, they will likely become effective on July 1, 2025. As a reminder, below are some highlights to keep in mind (more details can be found here): The definition of "agent" now includes anyone acting on behalf of an employer, such as third parties involved in recruitment, hiring, or promotion through AI systems. Employers may face a higher burden to prove they have tested for bias and made efforts to prevent discrimination. A lack of evidence could be used against them. Employers must keep AI-related records for four years. These include applications, personnel files, and data from automated decision systems. Employers using AI to filter applicants must show the criteria are job-related and necessary, with no less discriminatory alternative that meets their goals. AI tools will likely become integral to many businesses to maximize efficiency. It is important to work with legal counsel to understand the implications of potential legal liabilities and stay informed about the laws in this area. Feel free to contact Linda Wang or your preferred CDF attorney for a consultation.
2022-12-01T00:00:00
https://www.callaborlaw.com/entry/new-ai-laws-may-go-into-effect-as-early-as-july-1-2025
[ { "date": "2022/12/01", "position": 21, "query": "AI regulation employment" }, { "date": "2025/07/01", "position": 24, "query": "AI regulation employment" }, { "date": "2025/07/01", "position": 18, "query": "AI regulation employment" }, { "date": "2025/07/01", "position": 23, "query": "AI regulation employment" }, { "date": "2025/07/01", "position": 1, "query": "AI regulation employment" }, { "date": "2025/07/01", "position": 20, "query": "AI regulation employment" }, { "date": "2025/07/01", "position": 4, "query": "AI regulation employment" }, { "date": "2025/07/01", "position": 22, "query": "AI regulation employment" }, { "date": "2025/07/01", "position": 1, "query": "AI regulation employment" }, { "date": "2025/07/01", "position": 1, "query": "AI regulation employment" }, { "date": "2025/07/01", "position": 23, "query": "AI regulation employment" }, { "date": "2025/07/01", "position": 23, "query": "AI regulation employment" }, { "date": "2025/07/01", "position": 23, "query": "AI regulation employment" }, { "date": "2025/07/01", "position": 16, "query": "AI regulation employment" }, { "date": "2025/07/01", "position": 16, "query": "AI regulation employment" }, { "date": "2025/07/01", "position": 16, "query": "AI regulation employment" }, { "date": "2025/07/01", "position": 1, "query": "AI regulation employment" }, { "date": "2025/07/01", "position": 53, "query": "artificial intelligence labor union" } ]
What might AI regulation in the UK workplace look like? - Linklaters
What might AI regulation in the UK workplace look like?
https://www.linklaters.com
[]
The AI Bill would introduce a ban on emotion recognition technology where its use would be detrimental to a worker or jobseeker. Many such systems are already ...
As adoption of AI systems continues at speed, the constant flow of new products with new capabilities raises concern that regulation of AI, particularly for workers and jobseekers, is failing to keep pace. In its recent response to the AI white paper, the UK government confirmed that it has no current plans to introduce AI legislation. This approach contrasts starkly with the EU. The EU’s AI Act, approved earlier this year, will introduce a regulatory framework for the use of AI, which organisations within the EU will be expected to comply with from 2026. To tackle the lack of targeted domestic protection for individuals subject to unfair or biased algorithmic decision-making, the Trades Union Congress has published a draft AI bill setting out a framework for the regulation of AI in UK workplaces. The AI Bill could prove influential. The draft Artificial Intelligence (Regulation and Employment Rights) Bill In April 2024, the TUC published the draft Artificial Intelligence (Regulation and Employment Rights) Bill. The AI Bill does not have government backing but, in identifying ways to ensure that people are protected from the risks and harms of AI-powered decision making in the workplace, it is intended to generate debate and offer solutions for the responsible adoption of AI. The AI Bill is the work of an AI taskforce whose members together bring expertise in technology, law and politics. The AI Bill proposes a raft of protections for workers and job-seekers, without seeking to be overly prescriptive. In this vein, it tackles only high-risk and prohibited uses of AI systems. Similar to the EU AI Act, the aim of the AI Bill is to lay down rules for the fair and safe use of AI systems in the workplace by introducing a matrix of obligations on employers and a series of protections for workers. Below we set out the key provisions of the AI Bill. Workplace AI risk assessments Prior to implementing AI systems to be used for the purposes of high-risk decision-making activities, employers must undertake a workplace AI risk assessment. High-risk activities include decisions taken in relation to disciplinary matters, the termination of employment and capability assessments. Among other things, the AI system must be assessed in relation to equality, data protection and human rights risks. Once implemented, further assessments of the system must be undertaken on an annual basis. These should take into account the extent to which the system gives rise to inaccurate outcomes. The AI Bill requires employers to consult with workers about the use of AI systems and the AI risk assessment will be a core part of that discussion. Register of AI systems All AI systems used by an employer in high-risk decision-making must be recorded on a register which should identify the categories of decision-making the system takes and the purpose and aim of the system. The register must be made available to workers and jobseekers. The existence of the register, together with the duty to conduct an AI risk assessment, will impose tight controls and governance over the use of AI systems in employment decision-making. Right of explanation and human reconsideration The AI Bill gives workers and jobseekers the right to seek a personalised explanation of any high-risk decision which is or might be to their detriment. The statement must explain how the decision affects the individual. It also gives workers and jobseekers the right to request a human reconsideration of any high-risk decision made about them by an AI system.
Discrimination in high-risk decision-making One of the challenges presented by using AI in a workplace context is where liability for discrimination should attach when multiple parties have been involved in the design, development, training, testing and use of an AI tool. The AI Bill proposes a solution to this by introducing a reverse burden of proof, requiring employers to demonstrate that the output of an AI tool was not discriminatory. The AI Bill also provides employers with a new statutory defence to discrimination claims where they can demonstrate that: They neither created nor modified the AI system; They took adequate steps to carefully audit AI systems prior to use; and There were procedural safeguards to remove the risk of discrimination and prevent the system from being used in a discriminatory way. This defence is intended to promote thorough due diligence prior to the adoption of new AI decision-making systems. Ban on emotion-recognition technology The AI Bill would introduce a ban on emotion recognition technology where its use would be detrimental to a worker or jobseeker. Many such systems are already being used both as part of recruitment and as a way of assessing employee wellbeing in the workplace. However, emotion recognition technology is widely perceived as being intrusive and doubts have been raised as to how effective the systems are (including as a result of concerns that the systems perform less effectively for minority ethnic groups who are underrepresented in the datasets the systems are trained on). The ban proposed by the AI Bill aligns with the approach adopted in the EU AI Act which prohibits the use of emotion inference systems in the workplace. Under the EU AI Act, failure to comply with the ban leads to a maximum fine of the higher of EUR 35 million and 7% of total worldwide annual turnover. The sanction proposed by the AI Bill is much less onerous with workers entitled to make a complaint to the employment tribunal and compensation to be calculated on a just and equitable basis. Remedies for breach of worker rights A number of the protections contained in the AI Bill are backed up by the right to bring a tribunal claim in the event of a breach and to seek compensation. In addition, the AI Bill introduces specific protection against unfair dismissal where the reason for the dismissal is unfair reliance on high-risk decision-making. Where next? While the AI Bill sets out a helpful blueprint for the regulation of AI in the workplace, it remains to be seen what impact it will have. More recent statements by the UK government suggest that its opposition to the regulation of AI may be softening, with an acknowledgement that some form of AI legislation will eventually be needed. More definitive change may be on the horizon in the event of a change of government following a general election in the UK, which is expected later this year. The Labour party have given positive indications that they would adopt a proactive approach to addressing the impact of AI on worker rights. As concerns over the safety of AI systems grow, tighter controls in the UK seem inevitable. The TUC’s AI Bill represents a ready-made framework which could form the basis for future legislation.
2022-12-01T00:00:00
https://www.linklaters.com/en-us/insights/blogs/employmentlinks/2024/may/what-might-ai-regulation-in-the-uk-workplace-look-like
[ { "date": "2022/12/01", "position": 52, "query": "AI regulation employment" }, { "date": "2023/05/01", "position": 56, "query": "AI regulation employment" }, { "date": "2023/10/01", "position": 54, "query": "AI regulation employment" }, { "date": "2023/11/01", "position": 58, "query": "AI regulation employment" }, { "date": "2023/12/01", "position": 79, "query": "AI regulation employment" }, { "date": "2024/02/01", "position": 54, "query": "AI regulation employment" }, { "date": "2024/03/01", "position": 39, "query": "AI regulation employment" }, { "date": "2024/06/01", "position": 47, "query": "AI regulation employment" }, { "date": "2024/08/01", "position": 56, "query": "AI regulation employment" }, { "date": "2024/10/01", "position": 52, "query": "AI regulation employment" }, { "date": "2024/11/01", "position": 55, "query": "AI regulation employment" }, { "date": "2025/01/01", "position": 64, "query": "AI regulation employment" }, { "date": "2025/03/01", "position": 49, "query": "AI regulation employment" }, { "date": "2025/04/01", "position": 56, "query": "AI regulation employment" }, { "date": "2025/05/01", "position": 51, "query": "AI regulation employment" }, { "date": "2025/06/01", "position": 52, "query": "AI regulation employment" } ]
How AI Laws are Impacting Hiring and Recruitment Practices
How AI Laws are Impacting Hiring and Recruitment Practices
https://info.recruitics.com
[ "Alexandra Sosa" ]
Impact on Hiring Practices: AI regulations require hiring professionals to perform bias audits, inform candidates about AI tools, and obtain consent. Compliance ...
Key Takeaways: Legislative Response to AI in Recruitment: State, federal, and international governments are enacting laws to regulate AI in hiring, such as New York's AEDTs Law, Illinois' AI Video Interview Act, and the EU AI Act. These regulations address bias, data privacy, and transparency. Impact on Hiring Practices: AI regulations require hiring professionals to perform bias audits, inform candidates about AI tools, and obtain consent. Compliance with laws like the ADA, Title VII, and GDPR is essential to avoid legal risks and ensure ethical AI use. Balancing Efficiency and Ethical Considerations: While AI can enhance recruitment efficiency, it's crucial to prioritize data privacy, security, and fairness. Hiring professionals must stay informed about regulatory changes and conduct regular bias audits to maintain trust and compliance. Although artificial intelligence has existed since the mid-1950s, the technology has seen exponential growth in capability and public awareness in the last few years, especially with the launch of accessible tools like ChatGPT. With that growth comes renewed awareness of and focus on the business benefits of AI and the dangers of using this type of software. From bias to automation-induced job loss to data security and privacy issues, the emerging risks of AI concern many. Legislatures and regulatory bodies at every level are scrambling to understand and regulate the technology to protect society from its potential pitfalls. Discover what state, federal, and international governments are doing to this end, how this legislation affects hiring and recruitment practices, and what hiring professionals can do in response to tightening regulations. The U.S. Landscape: State-Level Legislation There has been a surge of AI-related bills in 2024, with dozens of states, Puerto Rico, the U.S. Virgin Islands, and Washington D.C. all moving toward regulating the technology. Accelerating legislation has the potential to challenge businesses operating across multiple states. Hiring professionals who want to use AI to compete for top talent but desire to remain in compliance need to pay attention to what's coming down the pipeline. Here is a brief overview of recent state-level laws and their impact on hiring. New York's Automated Employment Decision Tools (AEDTs) Law The New York City Council recently introduced Local Law 144, also called the NYC Bias Audit Law. This legislation requires all employers to conduct impartial evaluations to assess whether AI-driven employment decision tools negatively impact people belonging to certain racial/ethnic or sex/gender groups. Employers must also provide a summary of the results on their website and inform candidates and employees that the tool will be used to evaluate them. Although the law is designed to protect candidates, it places a more significant administrative burden on employers that must find independent entities to conduct the audit.
Notification requirements may also affect the size or quality of the candidate pool, as candidates may have strong feelings about using such tools. Finally, employers need more guidance about the legal implications of the audit results, especially if a third party finds evidence of bias in hiring practices. Illinois' Artificial Intelligence Video Interview Act In 2020, the Illinois General Assembly introduced the Artificial Intelligence Video Interview Act. This relatively new law places several restrictions and requirements on employers using AI tools to analyze video interviews. Employers must inform applicants about the software, explain how it will be used, and obtain the applicant's consent. Employers can share the video only with those involved in the interview evaluation process, and they must ensure that all copies are destroyed within 30 days of an applicant's request to do so. While this law is yet another one introducing more administrative burden for hiring professionals, it also needs more clarity on what counts as artificial intelligence, what constitutes sufficient notice, and what the penalties are for violations. Maryland's Facial Recognition Law (HB 1202) Maryland's HB1202 prohibits employers from using certain facial recognition technology during an interview unless the applicant consents by signing a waiver. The waiver must plainly state: The applicant's name The interview date That the applicant consents to the use of facial recognition software That the applicant has read the consent waiver Fortunately for employers, this law doesn't include extensive administrative requirements, though employers may need to find practical ways to keep records of applicant consent. However, it is essential to know the candidate's sentiment regarding facial recognition software and be prepared to explain its benefits in the hiring process. Federal Efforts and Anticipated Changes In October 2023, President Joe Biden signed Executive Order 14110 to develop security standards for AI tools and protect Americans from potential negative impacts. This extensive executive order calls for the following actions: Develop standards and rules for AI infrastructure as well as harm and risk identification practices Promote AI research, innovation, competition, and collaboration Understand and address the effects on labor and displacement Apply existing laws to AI technology and address disparate impacts in criminal justice, housing, hiring, and federal benefits programs Protect the rights and personal data of students, patients, and consumers across all sectors General Laws Applying to AI in Employment To align with these priorities, employers must familiarize themselves with emerging AI standards and develop governance practices around them. Hiring professionals should also understand how AI impacts their data collection and workforce development practices and ensure that their actions comply with new laws. Even when employers may not intentionally discriminate against protected groups, using AI technology can result in violations for which they can be held responsible. For that reason, employers must address issues arising from AI use to stay in compliance with existing laws. 
Americans With Disabilities Act (ADA): Ensure tools evaluate skills rather than screen out disabilities, and provide reasonable accommodations during hiring Title VII of the Civil Rights Act of 1964: Determine whether selection procedures have a disparate impact based on race or other protected traits The Age Discrimination in Employment Act (ADEA): Follow EEOC guidance on algorithms and avoid software that filters applicants based on birthdates The California Consumer Privacy Act (CCPA): Notify consumers before using automated decision-making technology, allow opt-outs, and explain its effects The General Data Protection Regulation (GDPR): Use solid data protection protocols, be transparent about data use, and maintain human oversight Actions such as these will help hiring professionals stay aligned with regulations and avoid compliance risks. International Perspectives: Key AI Laws Several AI laws have also been introduced in countries outside the United States. The EU AI Act The Artificial Intelligence Act bans specific AI applications, such as certain biometric categorization systems and untargeted facial recognition databases. Additionally, the act classifies AI used in employment as high-risk and imposes strict obligations on those deploying it, requiring them to conduct risk assessments and ensure human oversight. Canada's Artificial Intelligence and Data Act (AIDA) Canada's proposed AI law imposes strict requirements on high-impact AI-driven systems to safeguard privacy and ethical behavior. These requirements include risk assessment and mitigation measures, consistent monitoring, and public disclosure. The law also introduces substantial fines as an enforcement mechanism. China's Internet Information Service Algorithmic Recommendation Management Provisions While this law doesn't directly apply to employers, it advocates for data privacy and safety through transparency and audit requirements for Internet recommendation algorithms. Recruitment marketing websites or other online properties that use this technology should consider conducting regular assessments for fairness and security, and employers operating in China should ensure alignment with these provisions. India's AI Advisory by the Ministry of Electronics & Information Technology (MeitY) This advisory requires AI developers to use inclusive algorithms and ensure their software doesn't promote bias and discrimination. While these mandates are aimed at developers, they help protect job candidates and employees from AI-based discrimination in hiring and talent management by giving employers reason to vet such tools before selecting them. Ensuring Compliance and Ethical AI Use With so many regulations in place, it can be challenging for hiring professionals to select AI tools that keep their hiring practices aligned with the law in their jurisdiction. A vendor compliance checklist can help with the selection process, asking questions that cover the following: Supplier compliance Documentation How the vendor addresses bias and ethics Cybersecurity risk management practices Along with using a checklist like this (sketched below), hiring professionals should stay current on federal, state, and international legislative changes. Staying informed and proactive helps hiring professionals avoid compliance risk and potential lawsuits and keep their employer brand intact.
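One illustrative way to make such a checklist operational in code follows; the items paraphrase the article's four categories, while the structure and scoring are invented for the sketch.

```python
# Illustrative vendor-screening checklist. The four items paraphrase the
# article's categories; the data structure and review logic are invented.
VENDOR_CHECKLIST = {
    "supplier_compliance": "Does the vendor attest compliance with laws in our jurisdictions?",
    "documentation": "Is tool and model documentation available to support audits?",
    "bias_and_ethics": "Has the vendor addressed bias testing and ethical safeguards?",
    "cybersecurity": "Are cybersecurity risk management practices in place?",
}

def review_vendor(answers):
    """Return the checklist items a vendor fails, for follow-up before selection."""
    return [item for item in VENDOR_CHECKLIST if not answers.get(item, False)]

print(review_vendor({"supplier_compliance": True, "documentation": True,
                     "bias_and_ethics": False}))
# ['bias_and_ethics', 'cybersecurity']
```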
In addition to monitoring the regulatory landscape, employers should conduct regular bias audits, inform applicants of the use of AI technology, obtain their consent, and work with AI vendors to ensure compliance and ethical practices. Balancing Digital Transformation With Candidates' Best Interests Hiring professionals need to know that it is okay — and even encouraged — to explore the ways in which AI technology can enhance hiring and workforce development practices. However, all must do so with the best interests of candidates and employees in mind. While efficiency can and should be a goal, remember that data privacy, security, and fairness are equally important. When hiring professionals strike this balance, they can confidently update their hiring practices for modern times while maintaining trust and faith in their employer brand. — Stay ahead of AI regulations in recruitment and ensure your hiring practices are both compliant and ethical. Contact the Recruitics team to learn more about navigating the evolving landscape to keep your recruitment strategies current and trustworthy.
2022-12-01T00:00:00
https://info.recruitics.com/blog/how-ai-laws-are-impacting-hiring-and-recruitment-practices
[ { "date": "2022/12/01", "position": 67, "query": "AI regulation employment" }, { "date": "2023/05/01", "position": 85, "query": "AI regulation employment" }, { "date": "2023/10/15", "position": 33, "query": "AI employment" }, { "date": "2023/10/15", "position": 35, "query": "AI hiring" }, { "date": "2023/10/01", "position": 88, "query": "AI regulation employment" }, { "date": "2023/11/01", "position": 80, "query": "AI regulation employment" }, { "date": "2023/12/01", "position": 74, "query": "AI regulation employment" }, { "date": "2024/02/01", "position": 75, "query": "AI regulation employment" }, { "date": "2024/03/01", "position": 75, "query": "AI regulation employment" }, { "date": "2024/06/01", "position": 71, "query": "AI regulation employment" }, { "date": "2024/08/01", "position": 90, "query": "AI regulation employment" }, { "date": "2024/10/01", "position": 82, "query": "AI regulation employment" }, { "date": "2025/01/01", "position": 72, "query": "AI regulation employment" }, { "date": "2025/03/01", "position": 67, "query": "AI regulation employment" }, { "date": "2025/04/01", "position": 81, "query": "AI regulation employment" }, { "date": "2025/05/01", "position": 83, "query": "AI regulation employment" }, { "date": "2025/06/01", "position": 83, "query": "AI regulation employment" } ]
Employees' Rights in the Age of AI - Klie Law Offices
Employees’ Rights in the Age of AI
https://klielaw.com
[]
One, the No Robot Bosses Act, would prohibit employers from using automated decision systems to make employment decisions, including hiring, disciplinary ...
The rise of easily accessible artificial intelligence (AI) tools is changing every industry, and one of the many new applications of AI is in making management decisions. While AI promises to improve efficiency and productivity, it also carries significant risks for employees. In particular, concerns have been raised about AI algorithms making discriminatory employment decisions, creating unsafe working conditions, and otherwise infringing on workers’ rights. Several proposed pieces of federal legislation seek to address the risks posed by AI and other new technologies in the workplace. One, the No Robot Bosses Act, would prohibit employers from using automated decision systems to make employment decisions, including hiring, disciplinary actions, and firing. Another, the Stop Spying Bosses Act, would prohibit the use of electronic surveillance for certain purposes, including monitoring employees’ health, keeping tabs on off-duty workers, and interfering with union organizing. “Working people MUST have a voice in the creation, implementation and regulation of technology,” said the AFL-CIO in support of the No Robot Bosses Act. The role of technology in employment discrimination While it’s impossible for an algorithm to be consciously bigoted, it’s absolutely possible for an AI to reach discriminatory employment decisions. AI hiring tools are developed using training data, which may carry human biases or reflect historical inequality. In addition, certain groups may be over- or under-represented in the training data, which can inadvertently train the AI to make decisions biased for or against those groups. For example, automated tools that scan job applicants’ resumes may develop a bias toward keywords that are more frequently found on men’s resumes than women’s. AI hiring tools that analyze applicants’ faces and speech patterns in video interviews may perpetuate bias on the basis of race or ethnicity. And when the hiring process actually involves interactive AI tools such as “chatbots,” those tools may create barriers for older workers or those who are less comfortable with technology (even if using technology is not a significant part of the job itself). This is not just a hypothetical concern. When Amazon experimented with AI in hiring decisions, the tech giant quickly found that its AI tool discriminated against women. And in August 2023, the Equal Employment Opportunity Commission (EEOC) settled its first-ever AI hiring discrimination case, recovering $365,000 for a group of job-seekers. AI bias in hiring is perhaps the easiest aspect of this problem to measure, but AI tools can also introduce or reinforce bias in promotions, raises, layoffs, and other employment decisions. And because of the way machine learning works, this has the potential to become a vicious cycle: the more AI makes biased decisions, the more that bias will become entrenched in the training data for the next generation of AI tools (a toy illustration of this dynamic follows below). “We were just following the algorithm” is not an excuse It’s important for employees to understand that as far as anti-discrimination laws are concerned, ignorance is not an excuse. To have a case, you don’t need to prove that your employer intended to discriminate on the basis of sex, race, religion, disability, age, or another protected characteristic. You only need to prove that their policies had a discriminatory effect on your employment.
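The vicious cycle described above can be seen in a toy, entirely synthetic example: a naive scorer "trained" on skewed historical hiring outcomes learns to reward a keyword that merely correlates with the favored group.

```python
# Toy illustration: a scorer "trained" on biased historical hires learns to
# reward keywords correlated with the favored group. All data is synthetic.
from collections import Counter

# Historical decisions skewed toward resumes mentioning "rugby".
history = [
    ({"python", "rugby"}, 1), ({"sql", "rugby"}, 1),
    ({"python", "netball"}, 0), ({"sql", "netball"}, 0),
]

hired_kw, rejected_kw = Counter(), Counter()
for keywords, hired in history:
    (hired_kw if hired else rejected_kw).update(keywords)

def score(resume_keywords):
    # Higher score for keywords that co-occurred with past hires.
    return sum(hired_kw[k] - rejected_kw[k] for k in resume_keywords)

# Two equally qualified candidates; the hobby keyword decides the score.
print(score({"python", "sql", "rugby"}))    # 2
print(score({"python", "sql", "netball"}))  # -2
```

If the scorer's own decisions are then fed back as the next round of "historical" data, the skew only deepens, which is exactly the entrenchment the article warns about.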
As legislators grapple with how to update our laws to meet the changing employment landscape, employees need to know their rights and take proactive steps to protect them. If you believe you have been illegally discriminated against at work, contact us today for a free case evaluation with an experienced employment law attorney at Klie Law Offices.
2024-11-10T00:00:00
2024/11/10
https://klielaw.com/workers-rights-ai-surveillance-technology/
[ { "date": "2022/12/01", "position": 82, "query": "AI regulation employment" }, { "date": "2023/05/01", "position": 77, "query": "AI regulation employment" }, { "date": "2023/08/22", "position": 10, "query": "AI layoffs" }, { "date": "2023/08/22", "position": 8, "query": "AI regulation employment" }, { "date": "2023/08/22", "position": 12, "query": "artificial intelligence layoffs" }, { "date": "2023/10/01", "position": 90, "query": "AI regulation employment" }, { "date": "2023/11/01", "position": 83, "query": "AI regulation employment" }, { "date": "2023/12/01", "position": 86, "query": "AI regulation employment" }, { "date": "2024/02/01", "position": 73, "query": "AI regulation employment" }, { "date": "2024/03/01", "position": 79, "query": "AI regulation employment" }, { "date": "2024/06/01", "position": 77, "query": "AI regulation employment" }, { "date": "2024/08/01", "position": 82, "query": "AI regulation employment" }, { "date": "2024/10/01", "position": 74, "query": "AI regulation employment" }, { "date": "2024/11/01", "position": 80, "query": "AI regulation employment" }, { "date": "2025/01/01", "position": 71, "query": "AI regulation employment" }, { "date": "2025/03/01", "position": 84, "query": "AI regulation employment" }, { "date": "2025/04/01", "position": 74, "query": "AI regulation employment" }, { "date": "2025/05/01", "position": 79, "query": "AI regulation employment" }, { "date": "2025/06/01", "position": 87, "query": "AI regulation employment" } ]
Artificial Intelligence (AI) and employment law: 6 key considerations
Artificial Intelligence (AI) and employment law: 6 key considerations
https://kpmg.com
[]
In this article, we explore six key areas and consider the impact of AI on employment law and HR.
AI is already being used by some businesses to support decisions on hiring. We anticipate that this trend will continue, with AI being used to support performance reviews and other HR functions. This has the potential to free up huge amounts of management time and, in theory, allow businesses to make HR decisions based on objective data and criteria, with fairer results. The potential issue is that AI programs have been found to exhibit or even learn bias, in some cases despite safeguards being put in place to minimise this risk. While programmers may get better at avoiding this, ultimately if an individual is not selected for recruitment or receives a low performance grade as a result of a biased AI output, it is the employer who will be liable for a discrimination claim. Even more costly could be the reputational damage and loss of trust from employees. The solution to this challenge will be to couple any use of AI in management decisions with robust oversight and challenge. This could be via appeal mechanisms overseen by employees and ongoing testing of the output of the AI algorithm to identify and remove any indications of bias.
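As one hypothetical way to operationalise the ongoing output testing described above, a periodic statistical check of outcomes by group could look like the sketch below; the counts are invented monitoring data, not KPMG's method.

```python
# One way to run periodic output testing: check whether the AI tool's
# outcomes are statistically independent of a protected attribute.
# The contingency counts below are hypothetical monitoring data.
from scipy.stats import chi2_contingency

# Rows: group A, group B; columns: [advanced by AI tool, rejected]
contingency = [[48, 152],
               [30, 170]]

chi2, p_value, dof, expected = chi2_contingency(contingency)
print(f"chi2={chi2:.2f}, p={p_value:.3f}")
if p_value < 0.05:
    print("Outcome rates differ by group more than chance suggests: escalate for human review.")
```

A significant result here is a trigger for human oversight and challenge, not proof of discrimination; the test only flags that the tool's outputs warrant closer scrutiny.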
2022-12-01T00:00:00
https://kpmg.com/uk/en/insights/ai/ai-and-employment-law-6-key-considerations.html
[ { "date": "2022/12/01", "position": 84, "query": "AI regulation employment" }, { "date": "2023/05/01", "position": 78, "query": "AI regulation employment" }, { "date": "2023/08/01", "position": 81, "query": "artificial intelligence employers" }, { "date": "2023/10/01", "position": 78, "query": "AI regulation employment" }, { "date": "2023/11/01", "position": 81, "query": "AI regulation employment" }, { "date": "2023/12/01", "position": 84, "query": "AI regulation employment" }, { "date": "2023/12/06", "position": 16, "query": "artificial intelligence employment" }, { "date": "2023/12/06", "position": 87, "query": "artificial intelligence workers" }, { "date": "2023/12/06", "position": 68, "query": "reskilling AI automation" }, { "date": "2024/06/01", "position": 80, "query": "AI regulation employment" }, { "date": "2025/01/01", "position": 77, "query": "AI regulation employment" }, { "date": "2025/03/01", "position": 79, "query": "AI regulation employment" }, { "date": "2025/04/01", "position": 79, "query": "AI regulation employment" }, { "date": "2025/05/01", "position": 76, "query": "AI regulation employment" }, { "date": "2025/06/01", "position": 82, "query": "AI regulation employment" } ]
You will not get UBI, you will just be removed. : r/Futurology
The heart of the internet
https://www.reddit.com
[]
The idea that a society that replaces all workers with AI, which in turn will lead to a society where you do not have to work and get given universal basic ...
The idea that a society that replaces all workers with AI will in turn lead to a society where you do not have to work and are given a universal basic income to enjoy as you will is a nice one, but unfortunately it's an unrealistic hope. The reality of the matter is that the belief that governments and mega-corporations (which will soon be functionally the same thing, if they are not already) would be content to expend resources on keeping what would essentially be vast populations of human pets supplied with an endless stream of resources out of the goodness of their hearts is horribly short-sighted, as no government or corporation has ever acted like this before, and that is unlikely to change. These organisations view populations in the same way despotic tyrants view those they oppress, which is primarily a) not being of any innate value and b) being an active threat to their power. If Stalin could have killed every last citizen of the USSR and replaced them with machines with unquestioning loyalty to him alone, he absolutely would have in a heartbeat. Large populations would be a huge threat to government and corporate power, as well as a huge drain on their resources. There is absolutely no reason to believe they would keep such populations around. And to think the population would be able to somehow unite against these powers who have access to AIs is also incredibly short-sighted. Not only would they have hundreds of ways of killing off the population, but they could also easily turn the population against one another utilising any number of distinguishing characteristics. You think social media is divisive now? Imagine an army of social media bots who can perfectly imitate humans, armed with the capacity to manufacture any host of faked audio, video, or even entire events which could be relentlessly circulated at whim throughout the population. Those in control of the AIs could have the streets running red at the drop of a hat without even having to lift a finger. Not to mention control of what is probably a fully automated armed forces which no conventional force could hope to match even if they weren't completely divided. How do we stop this? Honestly, I don't think we can. We're not going to stop the development of AI. Even if it is openly banned it would still be developed in secret. Maybe this is the solution to the Fermi paradox? But the AI apocalypse is probably going to happen, and it probably won't be a rogue AI that does it, but a tame one controlled by evil people. UBI is a utopian dream. A nice one for sure, but the ancient Greeks were very wise when they named their idea of a perfect society 'nowhere'. EDIT: For everyone asking "but who would buy their productz!!!": how is it that you can comprehend the paradigm shift of an entire civilisation giving up work to AIs, but you cannot comprehend the idea that the exchange of goods for labour would not be necessary in a world where the corporation already owns the labour? They don't make you products for shits and giggles, they do it so that you will do stuff for them. That's what money is: a representation of labour. Literally school children can understand this concept. They won't have to make you anything; they already have the AI labour, and that labour will be dedicated to whatever the rulers want them to do. It's amazing how so many cannot comprehend this.
2022-12-01T00:00:00
https://www.reddit.com/r/Futurology/comments/znzy11/you_will_not_get_ubi_you_will_just_be_removed/
[ { "date": "2022/12/01", "position": 1, "query": "universal basic income AI" } ]
The Case Against AI-Generated Images | by Brandon Moore
The Case Against AI
https://medium.com
[ "Brandon Moore" ]
Some argue that a universal basic income will be implemented because of AI. I am a supporter of UBI, but UBI as a result of AI is a fantasy. Especially ...
“It’s My Work, So I Own The Copyright” It’s not original, it’s not your work, and you don’t have copyright. DALL-E and NightCafe have terms of service that address this to cover themselves legally—for now. Like any other artist or marketplace, they state that the work you make must be original and/or have the consent of the original party to use elements within. The reason these TOS are bullshit, though, is that these companies are not abiding by their own TOS: their generators are trained on existing work. DALL-E terms of service NightCafe “It’s the AI model, not its user, that ‘predict[s] stylizations for paintings & textures never previously observed,’ & that predictive function is tied to ‘the proximity of the [style image] to styles trained on by the model.’” Copyright Office “Artists Can Opt Out Of It” It should never be something to opt out of. Artists need help from portfolio platforms to protect their work and uphold copyright law. Not every artist is going to be aware of what's happening with this tech, how their work is being used, or how to protect their work, or even be alive to do so. Portfolio sites must not place the burden of protection solely upon their users while offering their work to AI generators. That is abhorrent policy. ArtStation policy That happened (above). Then this happened (below). “What If AI Only Pulled From Stock Images?” It’s still a very similar problem. You have to buy stock images or go by whatever usage rules a particular artist or site sets. And stock contributors must benefit from their contributions. If you have that cleared, I’ll still make fun of you for using AI to make images, but at least you’re talking about operating inside legal and ethical guardrails. What about Adobe Firefly? Is it only trained on Adobe Stock? I have doubts about that, but even if it is, it’s hard to police what gets loaded to the site. Moreover, no stock contributor signed up to have their work taken as training data for a product to compete against them. There is a very clear moral dilemma there. “You Can Train It On Your Own Art” Ignoring the entire catalog of scraped images a generator needs to work, I don't see how this approach would be useful to anyone if it only operated on one person’s body of work. If you’re an established artist, you have a style and you have the ideas. You already know how to take an idea to execution. You’ve never needed AI before. If you’re an amateur, you don’t have the body of work or established style for AI to be useful. If it is about “sketching” ideas more quickly, I guess that’s something. You can show a client a rough idea in a few minutes. Is that what this great art revolution is supposed to be? “It's Just A Tool.” It’s a tool as much as a robotic arm is on an assembly line. It's not meant for artists but as a replacement for artists. It is not a tool for them because it acts as the author. When you engage with AI generation, you are not the artist, you are the commissioner—the client. AI companies want amateurs to produce artwork without the need for further editing. It opens up the market of digital-based art and design by giving “talent” to anyone and undercutting creatives in the process. All with (at the most optimistic level) an OK sense of craftsmanship. AI image generators are marketed toward amateurs with the promise that they can create anything. Those with skills are not the intended customers because they don’t need the help of AI generators. The only people who stand to benefit from these AI companies are the ones inside them.
Medium’s current stance on AI writing “Learn AI Or Get Left Behind”. Why? Where is the value in the competence of plagiarized images or text? In removing imagination, personal touch, craft, and thoughtfulness? It is the process that is enjoyable and leads to a finished work of quality. It is a major reason artists become artists. But AI-bros think making art is about the destination, about the result and having a collection of results. Using AI to make art is the lamest way to make art you could possibly have dreamt of. “There’s a lot of technology in search of a customer. You know, in other words, a lot of companies do things because it’s technically possible, but in the end, nobody cares; nobody wants to buy them. And so we see a lot of that technology in search of a customer, and I think the hard thing is to figure out what can be done, but also what people really want to do.” — Steve Jobs There is also a notion that AI will make your workflow faster and, therefore, you more valuable. Increasing production speed is good as long as you don’t lose quality. If you change something from a creative expression to an automated task, it is to your detriment. You cannot remove the creative processes of your work—the core element that separates humans from robotic automation, or one artist from another. If you do, you’ll be hired for speed, not creativity, and that is a race to the bottom. Because AI users cannot compete with digital artists on skill, they can only compete on speed and price. This is a detriment to clients as well. Again, the only people who stand to benefit from these AI companies are the ones inside them. I’ve also seen people claim that automating tasks, even email, will allow them to “leave the competition behind”. Nonsense. Firstly, the time they’re shaving in total is counted in minutes—it’s not impressive. Secondly, if they think they’re going to understand client needs and build relationships to grow their business through automation of communication, they’re delusional. Do not buy into it. Finally, even if this gAI future comes to fruition where you must have this skill to be a valuable worker, it is the easiest skill to acquire. That is the entire point of it. If you can operate a search engine, you can operate generative AI. “Why Would I Pay An Artist?” So you’re going to pay an amateur with no artistic skill to create art that is made from plagiarized images? This is your plan? What type of person thinks like this? What society turns the power to create from imagination and skill over to amateurs with neither? This is what happened to sign painters, though. People who were highly skilled in their craft were replaced by vinyl printers who produced shit banners with shit type and color. Our Main Streets are now littered with them. We’ve chosen that over the beauty of wood, paint, metal, and neon. Why pay an artist? Because you’re a business of some sort that wants to compete in your marketplace. Why would you want poor-quality, plagiarised art attached to your business? By and large, artistic fields are filled with talented people who create beautiful things and are paid for that labor. We have a society where those people’s work is desired and valued by most. It allows great work to be done, and everyone gets to experience that in their TV, movies, games, clothing, museums, murals, or whatever other media. This is a virtue, and I highly prefer it to a buffet of corporate, bottom-line-driven garbage. “AI Will Just Make Our Jobs Easier” Oh yeah?
So we’ll work 3 days a week? Or 4-hour days? Get paid more? No. Any place that values AI will require you to work as much as possible for as little as possible. You’ll have an 8-to-12-hour day of writing prompts. These places will not allow you to produce at the same rate and then give you the rest of the day off, or pay you more, just because you’re using AI. Whatever skill you had to protect your job, whatever enjoyment you had because it was creative, will be lost. And how has tech made our jobs easier over the last 20 years? Are we working less? Are we making more money? No to both, and the wealth gap in the US has grown to laughable levels as the middle class has been decimated, in large part due to automation. Tech has benefited corporations and wealthy people the most. There is zero reason to believe AI will turn that around in the US economy. [Video: thoughts from Keanu Reeves] “AI Will Replace People And It Will Be Good” For these companies and any employer who embraces AI, it’s a means of monetary gain. They want to pay people less and produce more, growing the outrageous wealth gap even more. They only see art as content for mass production. This idea where AI replaces all labor as we sit back and chill is for morons. When AI operators do all the creative work, people will be punching in information to feed it at a cubicle for minimum wage. Because anyone can do it, and this is a global economy where capitalists seek out the lowest price from anywhere in the world. Some argue that a universal basic income will be implemented because of AI. I am a supporter of UBI, but UBI as a result of AI is a fantasy. Especially in a country where feeding kids lunch at public school is taboo. Another threat from gAI, if the tech companies have their way, is the erasure of vocation. This would result in a homogenous soup of mediocrity for things that once were created with craft and individual touch. It would have negative emotional impacts for creative professionals unable to make a living with their work and for workers whose jobs become more demanding due to production expectations. It would make the entire world more depressed. You don’t have to be religious to understand the joy of making a living by doing something you are good at, but those who see vocation as a religious calling would be especially harmed if they were not able to use their “gifts from God”. “People Are Just Cogs. Eliminating Them As Much As Possible Is Good For Efficiency” Right, we wouldn’t want a silly thing like people getting in the way of corporate efficiency. But perhaps it is a thought I can get behind. Keep the people with skill who do the work and are paid the least in a company, and automate the wealthy C-suite people’s jobs whose only value is in making decisions. AI is a calculator—it might be good for that. “Artists Are Overpaid” Pro-AI employers don’t care if creative craftspeople are replaced by amateurs because they see their work as frivolous. Pro-AI users with no experience and no skill think they’re going to make a lot of money by displacing creatives who are “currently overpaid”. Meanwhile, you should see how many zeros are in the salaries at AI-based companies today. “AI-Generated Images Are A Benefit To Humanity” It’s a product. A capitalistic venture that AI companies will benefit from at the cost of digital-based creative careers if they deem it necessary. They could target any industry or task. Make doing taxes easier, applying for a job, or countering disinformation on social media.
Instead, they’ve chosen to replace artists, painters, writers, musicians, and voice-over actors. This is as dumb as it is dystopian. “You’ll Be Out Of A Job” I believe many will be, like the portrait painters and sign painters that came before. Many creative people will be replaced by software operators who plagiarise the work of the very people they displace. What a fantastic endorsement of AI. “AI Will Create More Jobs” The jobs it creates will be in a global economy competing on speed and price. The purpose of AI is to eliminate jobs. These companies want to devalue art and take what little profit there is from skilled people for themselves while charging a monthly subscription on top of it. A large part of AI marketing is devaluation. They want you to believe everyone’s data is there for the taking. They want to make you believe copyright is an obstacle or not needed. They want you to believe artists are elitists. They want a future where the only ones making a living from art are their CEOs and shareholders. “The Camera Didn't Replace Painting” The impact of the camera ultimately meant that being an artist was no longer a conventional profession. Artists became lone societal rebels of personal expression. Cameras forced 20th-century artists into new creative styles, which is good, but this argument, used to insinuate photography had no impact on painting, is not true. Fortunately for art, there was a renaissance of originality after the camera. But AI doesn’t support originality. It plagiarizes it. Cameras are a tool that opened up a new category of art without plagiarising existing art. Photography is also responsible for some regulations on copyright and image usage today. There are guardrails set to protect creators of original work which the AI wild west doesn’t believe in. “This Is Just Like When Personal Computers Came Along In The 80s” Personal computers were not built from stolen, copyrighted work. Advocates love to compare gAI to the most successful tech products of all time, but fail to acknowledge even the possibility that it is fallible or might have more in common with the majority of tech—a failed product or short-lived trend. There is no guarantee gAI will be the next big revolution. It might be the next compact disc or Beanie Baby. “You’re Afraid Of New Tech” The “artists are Luddites” argument. It’s important to know where that term comes from and how it fits in with AI today, but I’ll address this insult as intended: someone afraid of technological advancement. Speaking for myself (though I’m sure I can vouch for many), I’ve tried about every medium of art there is. I’ve changed software and hardware over the years as a professional Designer. I’ve dabbled in nearly everything to come along over the last 15 years to see if it helps me do the work I need to do as a Designer and Illustrator. Suggesting artists (most especially DIGITAL ARTISTS) are against AI because they’re afraid of technological advancement is smooth-brained. I’m not anti-technology nor afraid of innovation. I’m just not naive enough to be impressed by every new thing. A lot of people see tech (and companies) as harbingers of a Star Trek-like society. In reality, our tech is more like Star Wars. It barely functions and constantly lets people down. Moreover, digital art is not that valuable compared to traditional mediums. Tech does not make an artist. Traditional artists being “left behind” by AI is a fallacy pushed by people who haven't a clue about how working artists make their work and their careers. AI needs artists; artists do not need AI. “AI Is Like The Car.
A Hundred Years Ago You’d Be Advocating For Saving Horse-Drawn Carriages.” A perfect analogy. We in America are a car-dependent nation. We have destroyed our cities and towns to build infrastructure for cars. We have torn out public transportation and rail for highways and roads. Auto and oil companies have cornered people into the need for an expensive luxury item that is also a top contributor to global warming, health issues, and deaths. Advancement of technology does not automatically lead to good. Again, this is an assumption that gAI can be placed next to innovations like the car and not ones like the CD. “AI Lets Anyone Express Themselves Creatively” I wouldn’t call typing keywords into a system to bash pixels together “creative expression”, and I don’t believe anyone is getting into AI for that reason. AI supporters are not in it for creative, expressive, or artistic purposes. They don’t see this as an opportunity to be an artist, or they would already have been on that path. They’re getting into it because they think they can make a lot of money. “The emotion that a human can channel into artwork, right; cause you look at a lot of the AI art and you can tell its AI art. Its very soulless, like stock images.” — Tim Henson “I Don’t Have To Learn Artistic Skills.” LOL, right. This comes from people who never had any intention of making art. Plus, it just results in low-quality images. The progression of tech in relation to art has mostly been about new ideas and new frontiers of creativity. Modern tech is mostly about increasing the speed of production. Which comes at a loss of thoughtfulness, skill, and understanding. Whatever is not needed to produce gets lost. Like the directional senses when relying too much on GPS, or forgetting phone numbers because it’s no longer necessary. AI isn’t about increasing creativity; it’s about removing it. Artists have an understanding of what it takes to create good art and as such appreciate it when they see it from others. AI advocates want to shortcut the hard work it takes because they see a monetary opportunity. Making good art is hard. That’s part of the reason people are drawn to it. Because not anyone can do it. Because there’s something unique in it that connects with another person. Shortcutting the process isn’t advancement; it’s lazy and uncreative. It is the very opposite of what AI companies are claiming to stand for. As it stands today, it is also illegal and unethical. Plus, there’s something to be said for expertise as a barrier to entry for photo manipulation. Adobe Photoshop’s AI-based tools make it easier to manipulate images believably, which has the consequence of making it easier for people to use it in nefarious ways. It is akin to deepfake videos, and AI is putting that power in the hands of more people. “More accurate results with less skill” is not the kind of technology that is for the greater good. “Real Art Is Frustrating” Correct. “It’s The Democratization Of Art” You’d have to be making art for this to hold any water. When using generative AI tools, you are not the creator, you are the client. The generator is your digital contractor. At best, you act as an art director. There are amazing artists making sculptures from trash. The barriers to entry for creating art are virtually nonexistent. Certainly lower than what it takes to produce digital images from a monthly subscription and a computer. Which collapses this argument completely—how can art be democratized when the barrier to entry is an increase in cost and hardware?
What AI companies are democratizing are powerful editing and manipulation capabilities. They’re democratizing misinformation.
2025-06-25T00:00:00
2025/06/25
https://medium.com/graphic-language/my-case-against-ai-ad6489e124f2
[ { "date": "2022/12/01", "position": 6, "query": "universal basic income AI" }, { "date": "2022/12/01", "position": 13, "query": "AI graphic design" } ]
We Aren't Ready For AI's Promise Of Plenitude
We Aren’t Ready For AI’s Promise Of Plenitude
https://medium.com
[ "Charlie Rhomberg" ]
Artificial intelligence (AI) ... Whether that takes the form of a welfare state, universal basic income, or some other structure, I can't say.
AlphaFold is incredible, but recent breakthroughs like DALL·E 2 and ChatGPT are pure magic Seriously, you’ve gotta check these out. The former creates unbelievable art based on your query, and the latter will give you a coherent response to just about any question. I spent a few hours mesmerized by DALL·E 2 in particular, typing in random queries and seeing what it spit out. Based on your prompt, the program literally draws a unique picture by itself. Here’s what it created from the prompt “teddy bears mixing sparkling chemicals as mad scientists in a steampunk style” If food art is more your speed, here’s its output for “a bowl of soup that looks like a monster knitted out of wool” The images are trippy, disturbing, and perplexing…but also intriguing. In order to transform prompt into picture, DALL·E 2 collects a gargantuan amount of pictures across the world wide web, digests them, and builds its best guess of what you’re looking for from scratch. Here’s a GIF that shows roughly how the program builds the image using a diffusion model: Crazy stuff. If you want to dig further into how it works, check out this post from someone much smarter than I—the creator himself, Aditya Ramesh. While I personally think DALL·E 2 is the coolest AI app to date, ChatGPT has been dominating headlines lately. It’s super easy to use, and is the smartest chatbot ever invented. The way both of these tools work is broadly the same—collect a ridiculous amount of data across the web, and use it as a reference to make an informed guess at what you’re looking for. Chatbots aren’t new, but they’ve been generally tone-deaf and lacking common sense. Saying ChatGPT is a chatbot on steroids isn’t going far enough. It uses the gobs of data we’ve thrown up on the internet to understand grammar, conversational styles, and tones, and generates remarkably human-like responses. This thing can write jokes, explain difficult concepts, and even write its own code. Prompts like this one are endlessly entertaining, but it doesn’t take long to start contemplating more serious implications. For example, you know when you ask Google something, and it serves up a defined answer, rather than an infinite array of links? It’s the best. I don’t give a rip about your millions of results in less than a second, Google. Just tell me what I’m looking for. Imagine if Google could do that for any search. Any basketball junkie could tell you Kareem holds the crown for most points all time, but what about more nuanced questions like “how to become an influencer?” Generative Pre-trained Transformer 3 (GPT-3), which is related to ChatGPT, is on its way to answering these open-ended questions. So, how does it work? It doesn’t think like you and me, but rather gets trained on massive datasets to mimic how we humans speak to each other. What do I mean by “training?” Essentially, teaching the program the knowledge of the world and how humans think. There are a few ways to accomplish that: Supervised training For the AI to recognize a picture of an apple as such, it needs a person to label it as “apple.” It then learns what traits make up an apple—such as color and shape—and applies them to other images that it’s fed. Apples don’t look like bananas, but they both have traits that make them fruit. As it learns, it starts to pick up on categories such as fruit, which bolster the prediction algorithm. Unsupervised training The next rung on the ladder is unsupervised training. It’s much more difficult, but has the advantage of not requiring a human in the loop.
AI has to figure out what an apple is by its damn self. I’m not going to pretend that I understand the math behind this process, but basically the AI learns how to identify images based on millions of tiny traits. Sometimes, it finds patterns we’d never considered. Imagine a program that could’ve analyzed millions of subprime loans before they blew up in ’08, and recognized the risks before it was too late. It’s hard to wrap your mind around the implications here. There’s a version of this that displaces Google as we know it, replacing billions of results in favor of the exact answer you‘re looking for. All kinds of customer service workers could become obsolete remarkably quickly. And then there’s The Singularity—the point at which machine intelligence meets, then exceeds, the human variety. While it seems like GPT-3 is getting dangerously close to that point, the brainiacs that wrote AI 2041 played down the idea. Remember, GPT isn’t actually thinking—it’s just spitting out its best guess at a response based on reams of data. Creativity and empathy are a ways off. Regardless, keep an eye on these features. Other than being fun as hell to play around with, this tech will probably start to make significant inroads into education and other fields pretty soon here.
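To ground the two training regimes described above, here is a minimal, hypothetical sketch using scikit-learn; the "fruit" features and labels are made up for illustration.

```python
# Sketch of the two training regimes described above, on toy 2-D "fruit"
# features (hue, roundness). All data and labels are invented.
from sklearn.cluster import KMeans
from sklearn.neighbors import KNeighborsClassifier

X = [[0.9, 0.9], [0.85, 0.95], [0.2, 0.3], [0.15, 0.25]]  # feature vectors
y = ["apple", "apple", "banana", "banana"]                 # human-provided labels

# Supervised: a person labeled every example, so predictions come with names.
clf = KNeighborsClassifier(n_neighbors=1).fit(X, y)
print(clf.predict([[0.88, 0.92]]))  # ['apple']

# Unsupervised: no labels; the algorithm finds groupings on its own.
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print(km.labels_)  # e.g. [1 1 0 0] -- the clusters exist, but they are unnamed
```

The contrast is the whole point: the supervised model can say "apple" because a human told it what apples are, while the unsupervised model can only say "these two things belong together."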
2022-12-07T00:00:00
2022/12/07
https://medium.com/@charlierhomberg/we-arent-ready-for-ai-s-promise-of-plenitude-2d85c9fcc718
[ { "date": "2022/12/01", "position": 16, "query": "universal basic income AI" } ]
and middle-income countries through artificial intelligence
Challenges and solutions for transforming health ecosystems in low- and middle-income countries through artificial intelligence
https://pmc.ncbi.nlm.nih.gov
[ "Diego M López", "Research Group In Telematics Engineering", "Telematics Department", "University Of Cauca", "Popayán", "Carolina Rico-Olarte", "Bernd Blobel", "Medical Faculty", "University Of Regensburg", "Regensburg" ]
by DM López · 2022 · Cited by 52 — The present study undertakes a review of research on the current status of artificial intelligence (AI) to identify requirements, gaps, challenges, and ...
When extracting the affiliation data of the authors of experimental studies and contrasting them against the classification of the countries to which the research and development institutions belong, it was interesting to see that experimental studies are conceived mostly in HICs. However, the datasets were collected in LMIC contexts. A researcher’s purpose is to impact the environment by detecting problems and proposing solutions driven by the characteristics of the context. Economic resources are decisive in the construction of solutions. For this reason, establishing relations between institutions is convenient to equate global efforts in this type of research to eradicate different personal and public health conditions. Experimental studies’ primary purpose is to diagnose diseases, especially early and accurate diagnosis. Another purpose is mortality assessment, mostly to prevent newborn deaths, whose rate is high in LMICs ( 33 ). Finally, clinical decision support systems are an important target for implementation; developing these systems can reduce hospitalization times, optimize treatment, and reduce work stress for health professionals ( 34 ). Although the incidence of tuberculosis has been falling in recent years, it is still one of the leading causes of death globally ( 31 ). As a result, many efforts around early detection are being made to decrease its prevalence, especially in LMIC contexts, where funding for detection and treatment is far below what is needed ( 32 ). Cancer, for example, with a high prevalence in LMICs, is the focus of many developments and implementations, as researchers seek tools to generate an early diagnosis of the disease and, therefore, a greater probability of treatment success. Developments to preserve people’s mental health are also highlighted, especially given the global context of COVID-19 and the isolation measures taken to counteract the contagion’s negative effects. These measures have a huge impact on mental health ( 30 ). The following categories correspond to non-communicable and communicable diseases. These groups include chronic diseases such as cardiovascular diseases, cancer, chronic respiratory disorders, and diabetes. These diseases are highly addressed in experimental studies, consistent with the data available from organizations such as the WHO about their prevalence in LMICs. The category with the most references is “life-course approach,” which includes “maternal and newborn health” and “child and adolescent health.” In the context of LMICs, pregnant women suffer from inequalities in care, especially in rural and marginalized areas. The physical and psychological effects of this lack of care lead to the deterioration of the newborn’s and the mother’s health. In addition, in LMICs there are high rates of child malnutrition, which consequently affects the health of children and adolescents and, therefore, creates uncertainty about the future development of these countries. The distribution of the data quantities used in the selected experimental studies makes sense, since verifying that a model functions requires contextualized data. This is an extra effort for the researchers; in many cases it is not easy to carry out in specific LMIC contexts. Those papers using large amounts of data do so because the data were already available, sparing them the collection work. It is interesting to note that images are one type of data.
Most of these correspond to photos taken with mobile devices, consistent with developing models and platforms based on this technology. On the other hand, physiological signals can be considered real-time indicators of a person’s current state and help avoid bias or subjectivity in the information provided by a patient. Although they have this advantage, their processing is complex and depends on the type of device used to collect the signal. Clinical data and radiology images are the most commonly used data types in experimental studies, which makes sense from a clinical and healthcare point of view: these data are the closest to representing medical knowledge for diagnosis and consequent treatment given certain conditions in a patient’s health. Regarding the generation of platforms and models, the second contribution by number of papers, it is noteworthy that solutions for mobile devices are widely implemented in LMIC contexts. In such contexts, device penetration is high even where access to healthcare facilities is scarce or nonexistent ( 29 ). Given that the implementation of good models is still experimental, it is clear why ML models, whether for data or images, are the most used. On the other hand, DL-based models are less common because two primary features are needed to develop them: a large image bank, for example of images used in diagnosis (X-rays), and high computational capacity, since the implementation of neural networks demands a lot of resources from the device or the cloud that runs the model. The most prominent contribution is AI models. The solutions are still incipient, since it is not yet possible to determine whether the models can be generalized regardless of the context. The models should be generalizable from the data science and artificial intelligence perspectives; nevertheless, there are problems, such as model discrimination in special or unprecedented cases. Eventually, implementing the models developed around healthcare can help streamline clinical care by properly monitoring the regular process of a defined treatment. However, human medical intervention remains essential for cases where the model cannot accurately classify or predict. Secondary studies • Data Quality challenges Data quality encompasses many aspects of data, from intrinsic to extrinsic. Accuracy is the correct representation of health-related concepts considering the local LMIC context. Therefore, AI algorithms should be trained and evaluated using local data. Electronic Health Record (EHR) data and data registries are the preferred data sources. Also, collecting data from primary healthcare workers improves the quality of data sources. Low-cost technologies such as sensors, phone applications, and public health surveillance data from non-traditional sources also improve data availability and diversity. Consistency, completeness, credibility, and currentness are other attributes that prevent the deployment of garbage-in, garbage-out (GIGO) algorithms; simple checks of these attributes can be automated, as sketched below. Maintaining quality data implies implementing robust data preparation and cleaning processes to manage and prevent bias, engaging data scientists and multidisciplinary teams with knowledge and experience in the healthcare domain. Training different stakeholders, another domain explained below, is also important to improve data quality because it implies understanding the data sources and their context.
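As an illustration of how checks for these attributes might be automated, here is a minimal pandas sketch over a hypothetical EHR extract; the column names and thresholds are invented for the example.

```python
# Minimal sketch of completeness/currentness/accuracy checks on a
# hypothetical local EHR extract. Columns and thresholds are illustrative.
import pandas as pd

ehr = pd.DataFrame({
    "patient_id": [1, 2, 2, 3],
    "age":        [34, None, None, 210],  # missing and implausible values
    "visit_date": pd.to_datetime(["2021-01-04", "2020-07-19",
                                  "2020-07-19", "2015-03-02"]),
})

completeness = ehr["age"].notna().mean()                        # share of non-missing ages
duplicates = ehr.duplicated(subset=["patient_id", "visit_date"]).sum()  # consistency
implausible = (ehr["age"] > 120).sum()                          # crude accuracy check
stale = (ehr["visit_date"] < "2018-01-01").sum()                # crude currentness check

print(f"completeness={completeness:.0%}, duplicates={duplicates}, "
      f"implausible_ages={implausible}, stale_records={stale}")
```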
The governance process includes data quality policies to provide datasets certified by independent and trusted local and international organizations. Quality improvement implies using clear and standardized metrics for data quality, as proposed by several international standards and initiatives in software engineering. Co-designing AI solutions with users, physicians, patients, and clinical managers contributes to improving data quality. Recommendations include implementing mechanisms to share health-related data and promoting the creation, use, and deployment of open-source databases such as MIMIC-III, a critical care database. • Context-awareness challenges Contextual awareness means that AI models and solutions must be validated using data from the local context in LMICs. A common gap mentioned in the literature is that most AI models used in LMICs are typically trained with HIC data from different demographic characteristics and contexts. Context awareness also implies an appropriate emphasis on application scenarios, policies, and disease priorities to prevent bias and promote model generalizability and explainability. Actions addressing contextual awareness challenges include local stakeholders’ participation in data collection, regulatory decisions, technology development, and validation. Creating strategic partnerships between clinical practice, academia, and industry is of foremost importance. Also, AI interventions should be planned considering the burden of disease in the local context. To manage and prevent bias, AI/ML systems must be transparent about the algorithms used and about the ethical aspects of managing and preventing bias. In this direction, to favor explainability, transparent models are preferred if the obtained performance is acceptable. In the case of using black-box models, deploying explainability approaches such as Local Interpretable Model-Agnostic Explanations (LIME), SHAP (SHapley Additive exPlanations), Anchors, and counterfactual methods, among others, is suggested (a minimal sketch follows below). Data diversity is a very important factor in improving generalizability. ML techniques, such as regularization methods, make the models simpler. • Challenges in the regulation and the provision of legal frameworks Local regulation and legal frameworks, strategies, and policies are fundamental to successfully deploying AI/ML solutions. Regulation includes the provision of privacy, security, informed consent, ethics, liability, confidentiality, trust, equity, and accountability policies. In addition, local governance and leadership are necessary to promote and execute national AI strategies included within digital health strategies at country, regional, and local levels. Recommendations to overcome security, privacy, safety, trust, and ethical issues include making the approval of informed consent and clinical protocols by ethics committees mandatory before funding any intervention. Also, conformance of local policies to international regulation, scalable and composable access control and authentication mechanisms, anonymized or pseudonymized data, and mandatory privacy audits are necessary. ML policies and legal frameworks should protect individuals against unethical behaviors. In addition to ethical regulations and legal frameworks, which are the responsibility of governments, end-users, healthcare providers, and AI developers share responsibility for managing ethics. Liability is a challenge for healthcare organizations, especially healthcare providers using AI-based solutions.
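As a concrete illustration of the explainability step mentioned above, here is a minimal sketch using the shap package on a synthetic tabular model. The data, model choice, and feature count are assumptions for the example (shap must be installed), and the exact shape of the returned attributions varies across shap versions.

```python
# Sketch of explaining a black-box classifier with SHAP. The features and
# outcome are synthetic stand-ins for clinical variables.
import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))                  # four synthetic features
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # outcome driven by features 0 and 1

model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:5])     # per-feature contributions
# Shape conventions differ by shap version (list per class vs. one array),
# so we only inspect the shape here rather than index into it.
print(np.shape(shap_values))
```

The attributions let a clinician see that features 0 and 1 drive the predictions, which is exactly the kind of transparency the review argues black-box deployments in LMICs need.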
Therefore, explainable ML models have to be provided. Inequality is another concern: data of certain patient groups in LMICs are frequently not present in local databases, caused by existing inequalities in the provision of healthcare services and low health insurance coverage. Inequality is also present when AI interventions take care mainly of the diagnosis but not the treatment and follow-up of patients. Trust in AI tools can be improved by training and educating healthcare professionals and by involving end-users in developing AI technologies, applying Human-Computer Interaction (HCI) approaches. Moreover, AI developers must be trained in accountability, privacy, and ethics. • Education and change resistance challenges Limitations in the training and education of different stakeholders (decision-makers, developers, health professionals, citizens, patients, and communities) hinder the understanding, use, policy-making, research, and innovation of AI technologies in LMICs. Potential solutions include capacity building through professional bodies and societies, training and retention to prevent brain drain of local expertise, and cooperation agreements with HICs to train and educate stakeholders. In addition, insufficient motivation to use AI/ML tools is a major concern, especially because of the unsolved ethical and regulatory issues and the perceived risks of using AI applications in healthcare. One alternative is providing economic incentives to create and use AI solutions in clinical practice. Another is the innovative implementation of business models around data collection and aggregation, which, ethically managed, could be an alternative incentive for building AI solutions. Another critical aspect is resistance to change, mainly due to the fear that AI will replace the work of healthcare professionals and staff. Training and educating clinicians about the benefits and limits of artificial intelligence and machine learning, and more recently, hackathon and datathon events using local data, have been demonstrated to be effective actions. • Methodological challenges Solutions to methodological challenges covered reporting and methodological standards, the Human-Centered Design (HCD) of solutions, and the adoption of certification mechanisms. Reporting and methodological standards are required for AI health interventions in LMICs to evaluate the interventions’ impact and added value over current standards of care. Several initiatives are being developed, becoming the de facto standard approaches for reporting AI interventions. One example is the EQUATOR Network, which has proposed guidelines for reporting interventions involving artificial intelligence. In the same direction, the United Nations (UN), the International Telecommunication Union (ITU), and the WHO are proposing guidelines on digital health interventions involving AI technologies. To prevent bias and guarantee accuracy, diversity, and trust, AI systems need to be contextually aware and involve different stakeholders in all stages of development. Methodologies that support these challenges include HCD approaches. Multidisciplinary work requires collaboration and coordination between government entities, private sector organizations, civil society, and academic communities. Furthermore, certification processes are necessary to promote the advancement and large-scale deployment of AI/ML technologies and to guarantee patient safety and effectiveness.
• Data infrastructure and connectivity challenges The increased use of mobile networks has improved connectivity in LMICs. However, many rural areas in LMICs lack continuous Internet access. Investment in the universal provision of internet connectivity is a priority. Regarding data infrastructures, the availability of electronic health records and secure access to EHR data is still an unsolved problem in many countries and regions. Therefore, governments, healthcare providers, professional associations, and other actors should promote the construction of national eHealth infrastructures, including interoperability platforms, the adoption of international vocabularies, terminologies, and ontologies, and the implementation of unique patient ID management systems and standardized data repositories. In countries where infrastructure and connectivity do not progress as desired, enforcement laws on the one hand, and incentives to develop strong EHR and surveillance systems on the other, are possible alternatives. Meanwhile, the demand for computing capacity and storage capability increases. Joint programs and funding provided by the IT industry, offering low-cost or free-of-charge infrastructure and computing capacity to LMICs, are a viable alternative. • Financial resources allocation challenges The allocation of adequate and sustained financial resources is one of the challenges frequently mentioned in LMICs for implementing digital technologies in general. In many LMICs, digital health and AI/ML technologies are not a priority, or decisions on the allocation of scarce resources are frequently made not by prioritizing local needs and the burden of disease but according to data availability and donors’ funding priorities. Potential solutions to overcome these challenges are establishing national research and innovation agendas for AI interventions that respond to population needs. This includes the consideration of ethnicity, socioeconomic status, and gender, particularly to prevent biases. In addition, research and development of open-access tools and resources could foster the experimentation, mainstreaming, and scale-up of AI interventions. One alternative is agreements with international and local software development enterprises to offer open licensing and free training for their products. • Scalability challenges Scalable solutions are important for extending AI-based health interventions and supporting the dynamic and diverse health contexts in LMICs. To be scalable, AI-based interventions must demonstrate cost-effectiveness, health system efficacy, and economic impact. Building collaborative networks between HIC and LMIC developers around open-source platforms, mobile applications, and digital health is promising. Health outcomes of AI interventions have to be continuously measured. This is complex and costly, especially in LMICs, considering the inadequate infrastructures, digitalization, research agendas, and development environments. Another strategy identified is to develop monitoring systems to report malfunction or misuse of AI/ML technologies. The above recommendations provide a framework to be considered by health IT projects at different levels, from pilots to national health information systems. However, the selection of the most relevant challenges depends on the maturity level of each project and especially on the context in which digital health solutions are used.
2022-12-02T00:00:00
2022/12/02
https://pmc.ncbi.nlm.nih.gov/articles/PMC9755337/
[ { "date": "2022/12/01", "position": 20, "query": "universal basic income AI" } ]
Responsible Artificial Intelligence: Recommendations and ...
Responsible Artificial Intelligence: Recommendations and Lessons Learned
https://link.springer.com
[ "Dignum", "Virginia Cs.Umu.Se", "Umeå University", "Umeå", "Virginia Dignum", "Search Author On", "De Montfort University", "Leicester", "Damian Okaibedi Eke", "Kutoma Wakunuma" ]
by V Dignum · 2023 · Cited by 33 — In this paper, I will discuss how a responsible approach to the development and use of AI can be achieved, and how current approaches to ensure the ethical ...
Ethical AI is not, as some may claim, a way to give machines some kind of 'responsibility' for their actions and decisions, and in the process relieve people and organisations of their responsibility. On the contrary, ethical AI gives the people and organisations involved more responsibility and more accountability: for the decisions and actions of the AI applications, and for their own decision to use AI in a given application context. When considering effects and the governance thereof, the technology, or the artefact that embeds that technology, cannot be separated from the socio-technical ecosystem of which it is a component. Guidelines, principles and strategies to ensure trust and responsibility in AI must be directed towards the socio-technical ecosystem in which AI is developed and used. It is not the AI artefact or application that needs to be ethical, trustworthy or responsible. Rather, it is the social component of this ecosystem that can and should take responsibility and act in consideration of an ethical framework such that the overall system can be trusted by society. Having said this, governance can be achieved by several means, softer or harder. Currently several directions are being explored; the main ones are highlighted in the remainder of this section. Future research and experience will identify which approaches are the most suitable, but given the complexity of the problem, it is very likely that a combination of approaches will be needed.

Regulation
AI regulation is a hot topic, with many proposers and opponents. The recent proposal by the European Commission envisions a risk-based approach to regulation that ensures that people can trust that AI technology is used in a way that is safe and compliant with the law, including the respect of fundamental human rights. The proposal implements most of the 7 requirements of the Ethics Guidelines for Trustworthy AI into specific requirements for 'high-risk' AI. However, it does not deal explicitly with issues of inclusion, non-discrimination and fairness. Minimising or eliminating discriminatory bias or unfair outcomes is more than excluding the use of low-quality data. The design of any artefact, such as an AI system, is in itself an accumulation of choices, and choices are biased by nature, as they involve selecting one option over another. Technical solutions at the dataset level must be complemented by socio-technical processes that help avoid any discriminatory or unfair outcomes of AI. Moreover, successful regulation demands clear choices about what is being regulated: is it the technology itself, or the impact or results of its application? By focusing on technologies or methods, i.e. by regulating systems that are based on "machine learning, logic, or statistical approaches", such as described in the AI definition used in the European Commission's proposal, we run the risk of seeing organisations evade the regulation simply by classifying their applications differently. Conversely, there is a plethora of applications based on, e.g., statistics that are not AI. A future-proof regulation should focus on the outcomes of systems, whether or not these systems fall within the current understanding of what is 'AI'. If someone is wrongly identified, is denied human rights or access to resources, or is conditioned to believe or act in a certain way, it does not matter whether the system is 'AI' or not. It is simply wrong.
Moreover, regulation must also address the inputs, processes and conditions under which AI is developed and used, which are at least as important. Much has been said about the dangers of biased data and discriminating applications. Attention to the societal, environmental and climate costs of AI systems is increasing. All of these must be included in any effort to ensure the responsible development and use of AI. At the same time, AI systems are computer applications, i.e. artefacts, and as such are subject to existing constraints and legislation, for which due diligence obligations and liabilities apply. That is, already now, AI does not operate in a lawless space. Before defining extra regulations, we need to start by understanding what is already covered by existing legislation. A risk-based approach to regulation, as proposed by the European Commission, is the right direction to take, but it needs to be informed by a clear understanding of the source of those risks. Moreover, it requires focusing not merely on technical solutions at the level of the algorithms or the datasets, but rather on developing the socio-technical processes, and the corporate responsibility, to ensure that any discriminatory or unfair outcomes are avoided and mitigated, independently of whether we call the system 'AI' or not.

Standardisation
Standards are consensus-based, agreed-upon ways of doing things, providing what their drafters consider to be the minimum universally acknowledged specifications. Industry standards have proven beneficial to organisations and individuals. Standards can help reduce costs and improve the efficiency of organisations by providing consistency and quality metrics, the establishment of a common vocabulary, good-design methodologies and architectural frameworks. At the same time, standards provide consumers with confidence in the quality and safety of products and services. Most standards are considered soft governance, i.e. non-mandatory to follow. Yet it is often in the best interest of companies to follow them to demonstrate due diligence and, therefore, limit their legal liability in case of an incident. Moreover, standards can ensure user-friendly integration between products (Theodorou and Dignum 2020). Work on AI standards to support the governance of AI development and use is ongoing at ISO and IEEE, the two leading standards bodies. Such standards can support AI policy goals, in particular where it concerns the safety, security and robustness of AI, guarantees of explainability, and means to reduce bias in algorithmic decisions (Cihon 2019). Jointly with IEC, ISO has established a Standards Committee on Artificial Intelligence (SC 42). Ongoing SC 42 efforts are, so far, limited and preliminary (Cihon 2019). On the other hand, IEEE's Standards Association global initiative on Ethically Aligned Design is actively working on a vision and recommendations to address the values and intentions as well as the legal and technical implementations of autonomous and intelligent systems to prioritise human well-being (IEEE 2016). This is the joint work of over 700 international researchers and practitioners. In particular, the P7000 series aims to develop standards that will eventually serve to underpin and scaffold future norms and standards within a new framework of ethical governance for AI/AS design.
Currently, the P7000 working groups are working on candidate standard recommendations to address issues as diverse as system design, transparency in autonomous systems, algorithmic bias, the governance of personal, children's, student and employer data, nudging, and the identification and rating of the trustworthiness of news sources. Notably, the effort on assessing the impact of autonomous and intelligent systems on human well-being is now available as an IEEE standard.

Assessment
Responsible AI is more than the ticking of some ethical 'boxes' or the development of some add-on features in AI systems. Nevertheless, developers and users can benefit from support and concrete steps to understand the relevant legal and ethical standards and considerations when making decisions on the use of AI applications. Impact assessment tools provide a step-by-step evaluation of the impact of systems, methods or tools on aspects such as privacy, transparency, explanation, bias or liability (Taddeo and Floridi 2018). It is important to realise, as described in Taddeo and Floridi (2018), that even though these approaches "can never map the entire spectrum of opportunities, risks, and unintended consequences of AI systems, they may identify preferable alternatives, valuable courses of action, likely risks, and mitigating strategies. This has a dual advantage. As an opportunity strategy, foresight methodologies can help leverage ethical solutions. As a form of risk management, they can help prevent or mitigate costly mistakes, by avoiding decisions or actions that are ethically unacceptable". Currently, much effort is being put into the development of assessment tools. The EU Guidelines for Trustworthy AI are accompanied by a comprehensive assessment framework which was developed based on a public consultation process. Finally, it is important to realise that any requirements for trustworthy AI are necessary but not sufficient to develop human-centred AI. Such requirements need to be understood and implemented from a contextual perspective, i.e. it should be possible to adjust the implementation of a requirement such as transparency to the context in which the system is used. Requirements such as transparency should not have one fixed definition for all AI systems, but rather be defined based on how the AI system is used. At the same time, any AI technique used in the design and implementation should be amenable to the explicit consideration of all ethical requirements. For example, it should be possible to explain (or to show) how the system got to a certain decision or behaviour. Assessment tools need to be able to account for this contextualisation, as well as to ensure alignment with existing frameworks and requirements for other types of assessment, such that the evaluation of trust and responsibility of AI systems provides added value to those developing and using them, rather than adding yet another bureaucratic burden.

Codes of Conduct and Advisory Boards
A professional code of conduct is a public statement developed for and by a professional group to reflect shared principles about the practice, conduct and ethics of those exercising the profession; describe the quality of behaviour that reflects the expectations of the profession and the community; provide a clear statement to society about these expectations; and enable professionals to reflect on their own ethical decisions.
A code of conduct supports professionals in assessing and resolving difficult professional and ethical dilemmas. While in the case of ethical dilemmas there is not one correct solution, professionals can give an account of their actions by referring to the code. In line with other socially sensitive professions, such as medicine or law, certification of those who practise 'ethical AI' can support trust. Several organisations are working on the development of codes of conduct for data and AI-related professions, with specific ethical duties. Just recently ACM, the Association for Computing Machinery, the largest international association of computing professionals, updated its code of conduct. This voluntary code is "a collection of principles and guidelines designed to help computing professionals make ethically responsible decisions in professional practice. It translates broad ethical principles into concrete statements about professional conduct". This code explicitly addresses issues associated with the development of AI systems, namely issues of emergent properties, discrimination and privacy. Specifically, it calls out the responsibility of technologists to ensure that systems are inclusive and accessible to all, and requires that they be knowledgeable about privacy issues. At the same time, the role of the AI Ethicist is becoming a hot topic as large businesses grow increasingly dependent on AI and as the impact of these systems on people and society becomes ever more evident, and not always for the best. Recent scandals, both about the impact of AI on bias and discrimination and about the way businesses are dealing with their own responsibility, specifically the role and treatment of whistle-blowers, have increased the demand for clear and explicit organisational structures to deal with the impact of AI. Many organisations have since established the role of chief AI ethics officer, or similar. Others, recognising that the societal and ethical issues that arise from AI are complex and multi-dimensional, and therefore require insights and expertise from many different disciplines and the open participation of different stakeholders, have established AI ethics boards or advisory panels.

Awareness and Participation
Inclusion and diversity are a broader societal challenge and central to AI development. It is therefore important that as broad a group of people as possible have a basic knowledge of AI, of what can (and can't) be done with AI, and of how AI impacts individual decisions and shapes society. A well-known initiative in this area is Elements of AI, initiated in Finland with the objective of training one per cent of EU citizens in the basics of artificial intelligence, thereby strengthening digital leadership within the EU. In parallel, research and development of AI systems must be informed by diversity, in all the meanings of diversity, obviously including gender, cultural background and ethnicity. Moreover, AI is no longer purely an engineering discipline, and there is growing evidence that cognitive diversity contributes to better decision-making. Therefore, development teams should include social scientists, philosophers and others, as well as reflect differences in gender, ethnicity and culture. It is equally important to diversify the disciplinary background and expertise of those working on AI to include AI professionals with knowledge of, among other fields, philosophy, social science, law and economics.
Regulation and codes of conduct can specify targets and goals, along with incentives, as a way to foster diversity in AI teams (Dignum 2020).
2023-07-14T00:00:00
2023/07/14
https://link.springer.com/chapter/10.1007/978-3-031-08215-3_9
[ { "date": "2022/12/01", "position": 30, "query": "universal basic income AI" } ]
Is there a civic duty to support medical AI development by ...
Is there a civic duty to support medical AI development by sharing electronic health records?
https://pmc.ncbi.nlm.nih.gov
[ "Sebastian Müller", "Center For Life Ethics Heinrich Hertz Chair", "University Of Bonn", "Schaumburg- Lippe-Straße", "Bonn" ]
by S Müller · 2022 · Cited by 10 — A number of scholars advocate a moral civic duty to share electronic health records (EHRs) that overrides IC requirements in certain contexts.
Two noteworthy articles by Ploug [26] and Ballantyne [27] identify the most discussed arguments in favour of a civic duty to share EHRs for medical research. The articles focus on three main arguments. The first, which I call the 'rule to rescue argument', states that there is a universal rule to rescue people in accident-like situations and that people should conform to this rule by sharing EHRs [22, 23, 28]. The rule implies a duty to support medical AI developments that can rescue people in accident-like situations. Second, there is the 'low risks, high benefits argument', which says that people have a duty to benefit others as long as the risks entailed by doing so are bearable [23–25, 29]. If a medical AI development project complies with this risk-benefit ratio, there is a civic duty to share EHRs. The third argument, the 'property right argument', emphasises that a great deal of health data is not generated by private citizens but collected and distributed within the health care process. Advocates of this argument believe that since such processes are mostly financed by solidarity-based health care contributions and taxpayers' money, the results of those processes are a public good to which citizens should not claim exclusive rights [21, 22, 30]. Since medical AI development is an important part of modern medical research, and since none of the advocates of a civic duty to share EHRs excludes medical AI from their discussion, I will assume that the three arguments in favour of a civic duty to support medical research include sharing EHRs with medical AI research and development projects. One might now argue alongside Rulli and Millum [28] that within the rule to rescue, it is possible to distinguish among different sets of duties. For example, to protect citizens in a pandemic, politicians and scientific consultants have the professional duty to introduce effective preventive measures. If the chances of selecting the right combination of preventive measures can be increased with the help of a medical AI and its access to EHRs, politicians and consultants may have a professional duty to use those tools. Intertwined with that duty is the institutional duty of supporting everyone employed in the rescue process with sufficient tools and resources. This institutional duty may be composed of many other duties, such as the professional duty of researchers to develop medical AI that can simulate pandemics and the effects of different preventive measures, the federal duty to financially support such research, and the civic duty to grant access to the data that is needed to develop the AI and run the simulations. Even if this line of argumentation were accepted, there would still be the problem that an institutional duty to rescue those in peril cannot imply a civic duty to share EHRs without further ado. Additional arguments are needed to explain how exactly an institutional duty towards the rule to rescue can have an impact at the level of individual citizens and why it gives rise to a duty to share EHRs rather than other responsibilities. Without further references to normative values such as beneficence and justice, normative trade-off rationales, or theories concerning collective responsibility, citizens might as well live up to the rule to rescue by paying their taxes, donating blood, advocating better research conditions, or conforming to well-known preventive health measures.
Because the individual act of data sharing does not cause anybody to be rescued, and because a call for an institutional interpretation of the rule to rescue does not entail an imperative to share EHRs, the 'rule to rescue argument' cannot justify a duty to support medical AI developments that will save people in accident-like situations. Examining this argument closely, P1 and P3 appear to conflict with one another. Ploug notes [26] that no victim B_i has ever been rescued by the act of EHR sharing. Instead, a victim is rescued by another person, who might or might not use technology that was developed in the past. This temporal shift between data sharing and the rescue act is highly relevant for the 'rule to rescue argument' because there are no urgent situations µ_i in which a passer-by can provide proper help to a victim B_i simply by sharing her EHRs. The sharing component necessarily precedes the entire rescue scenario, which means that the entity A_i that performs the rescue is not the same as the entity C_i that shares her data. I think Ploug's critique applies to medical AI development as well. Take any accident-like scenario in which a medical AI supports a rescue process. For example, a clinical decision support system may enable a physician to save a patient's life because it recognises indicators of a rare disease. In this and all other scenarios, the victim is rescued either by an autonomous algorithm that was trained with EHRs before the rescue took place or by another person who uses medical AI as a tool. A number of scholars believe that the rule to rescue implies a personal moral duty to support medical research by sharing EHRs [22, 23, 30, 36]. The argument is as follows: physicians in ERs, paramedics, and others save people's lives on a daily basis. Because certain innovative health care practices, innovative medical technologies and forms of advanced medical knowledge are tools that enable people (A_i) to rescue others (B_i) in accident-like situations (µ_i), people have a duty to support such research and, thus, such technological developments. Typically, this duty is considered to be a subset of the bioethical duty of beneficence. Because certain medical AIs can be used in medical emergencies [37], the development of these AIs needs to be considered by the rule to rescue argument. The argument's structure can be broken down as follows:

As Rulli and Millum have discussed, the rule and its application become more complex when collective actors are involved [28]. There are many accident-like situations µ_i in which a victim B_i cannot be rescued by one person but only by a group of people. For example, the person who witnesses an accident is usually considered to have a duty to call an ambulance. Emergency services are usually considered to have a duty to aid the victim and take her or him to the emergency room (ER), the physician on duty is expected to provide medical care, and the institutions that are involved in the rescue process are expected to allocate resources, write laws, and accumulate knowledge in a way that promotes such care. The social roles of the people involved in such processes determine the scope and force of their duty [33, 34]. The ER physician, for example, has a duty to rescue a patient who is being taken to the ER. The same physician, however, does not have a specific duty to rescue people from a burning house. That task is the professional duty of firefighters.
Therefore, the rule to rescue is not shaped by the endangerment of B_i's basic interests but by the social context in play and the type of entity that A_i is. The rule can call for individual duties such as the duty to call an ambulance, social role duties such as the duty of an ER physician to heal a patient, and institutional duties such as the duty to properly fund emergency services [28, 35]. What moral obligations does this rule imply in practice? Witnesses of a car crash have a duty to call emergency services, physicians who witness a passenger on an airplane having a heart attack can be expected to render first aid, and a fishing company has a duty to save its employees when the engine of one of its ships breaks down at sea. The rule to rescue is a well-known bioethical imperative that imposes a duty to prevent harm from happening to other people [26, 31, 32]. Following that rule, an entity A_i has a duty to rescue a human being B_i in an accident-like situation µ_i if and only if (i) A_i is part of the situation, (ii) A_i is able to provide proper help in µ_i, (iii) there is no other entity A_n≠i that is more capable of providing help in µ_i, and (iv) compliance with the rule does not force A_i to sacrifice anything of equally important moral value. A situation µ_i can be characterised as accident-like if there is a high risk of significant loss of or damage to B_i's basic interests and if those risks can only be prevented by the immediate action(s) of others.

The 'low risks, high benefits argument'
The principle of beneficence is a positive requirement to promote the welfare of others and contribute to the common good [38]. In medical contexts, this principle obliges caregivers and researchers to act in accordance with the interests of their patients and research subjects. In business, it obliges companies to conduct their business in a way that serves social interests. In democratic politics, it obliges citizens to vote and act in ways that increase and foster the common good. The principle of beneficence is often supplemented by concepts of solidarity and justice that advocate (i) a duty to act in ways that benefit the members of a given society, including oneself [39, 40], and (ii) social structures that promote equality [20, 30]. Scholars who support the 'low risk, high benefit argument' link the principle of beneficence with the belief that all citizens of modern societies will benefit significantly from innovative health care developments. The overall benefits are or will be so significant, they argue, that the risks associated with EHR sharing are negligible in comparison. Therefore, citizens have a moral duty to share EHRs to support medical research [23–25, 29]. To discuss the argument sufficiently, it is necessary to take a closer look at the benefits and risks that the authors present. And since medical AI is part of medical research and can be used to promote medical and social health benefits [12], I will also consider risks and benefits specific to medical AI. Let's start with the benefits. Schaefer and colleagues, for example, point to the potential improvement in public health care and personal well-being to justify the moral duty to share EHRs for medical research [29]. Bowten and colleagues add the decrease of health care costs to this list [41], and Knottnernus points to the benefits that large EHR databases provide for the expansion of medical knowledge [21].
Focusing on medical AI development, I believe a further benefit needs to be mentioned. Forsberg and colleagues claim that all citizens will receive substantially worse health care in the future compared to an ideal scenario if some citizens do not support key innovations such as medical AI technology today [24]. This argument is quite strong, as it relates to issues of discrimination and injustice surrounding so-called selection bias. Selection bias can occur when an AI is trained with datasets in which groups of a certain age, social class, ethnicity, biomarker, or health status are underrepresented or unrepresented [42, 43]. An AI that is biased in that respect might not be able to recognise signs of skin cancer on a skin tone with which it is unfamiliar; it might diagnose women less accurately than men if it is primarily trained with male data; and it may not recognise certain cases of dangerous drug interactions if it is denied access to the data of vulnerable groups like Alzheimer's patients [44]. According to a review study by Kho and colleagues, selection bias is not the result of a few citizens refusing to share EHRs but of an effect called consent bias, which means that certain socio-economic groups are structurally more willing to consent to health-related research than others [45]. Selection bias might also be driven by groups that generate more useful data than others, such as chronically ill people or quantified-self enthusiasts [46]. Since medical AIs are only as good as their training data allow them to be, Cassell and Young call for a duty to foster a balanced representation of all social groups in those data [25]. This proposition means that people who belong to groups that are underrepresented in medical datasets, which in most cases is everyone except adult Caucasian men [47], have a specific civic duty to share their EHRs. Now what about the minimal or reasonable risks that come along with EHR sharing, and the risks entailed by the development and use of medical AI? Minimal risks are risks that are perceived as normal in everyday encounters such as driving a car or going to the dentist for a routine check-up [48]. The risks associated with EHR sharing and the development and use of medical AI can be of very different natures and can affect individuals, social groups and institutions differently [49]. Individuals can suffer personal harm due to data breaches caused by hacker attacks, data misuse, or adversarial attacks [22, 50]. Individuals can also be harmed by medical AIs that produce technical errors, give wrong medication advice or misinterpret input data [51]. Social groups can be discriminated against and treated unjustly by biased AI [52], and institutions such as the health care professions can be severely harmed if people develop trust issues and avoid medical treatment [53]. Unfortunately, it is hard to decide whether those risks are comparable to the risks of other everyday activities and, therefore, qualify as minimal. It is also hard to decide whether those risks are small in comparison to the benefits and, therefore, reasonable. A strategy to solve this problem is to empirically prove that certain risks, such as the personal risk of harm from EHR breaches, are statistically smaller than other everyday risks, such as the personal risk of harm from traffic.
Porsdam-Mann and colleagues [23] proceed with this strategy and estimate that the personal risk of becoming a victim of health data-related privacy breaches in the US was approximately 0.02% between 2009 and 2016. In contrast, the risk of being injured in traffic in 2009 was approximately 0.7%. Considering these risks, the authors conclude that researchers should be allowed to access at least low-risk datasets without asking for IC. Based on the risk-benefit analysis presented and the presupposed principle of beneficence, the 'low risk, high benefit argument' is:

P1: Citizens have a moral duty to benefit others.
P2: The risks of developing and using such medical AI are reasonable.
P3: The development and use of certain forms of medical AI is beneficial for society.
P4: The development and improvement of medical AI requires the use of EHRs.
C: Citizens have a moral duty to support medical AI developments that can be expected to benefit society by sharing EHRs.

Why the risks of medical AI development and the use of such AI can be unreasonably high
For the analysis of premise P2, it is vital to acknowledge the two interrelated notions of risk: as the quantifiable probability of a harm being done and as the quality of a harm. As mentioned previously, some works focus exclusively on the probability component. Such works encounter two problems. Firstly, they often do not reveal exactly which social groups and which individuals are at which risk of being harmed. People whose EHRs are stored in multiple databases are statistically more likely to become victims of privacy breaches than people whose data are stored in one database. Data protected by insufficient security standards are more vulnerable than data that are well protected, and non-anonymised breached EHRs can more easily cause harm than anonymised data [54]. The same is true for the use of medical AI: when an AI is trained and optimised with data from only one socioeconomic group or ethnicity, people outside this group are more likely to be harmed by biases. Secondly, even if these problems were solved and more accurate risk calculations were available, those calculations could not tell whether the quality of a given harm is reasonable to bear for every individual, for certain social groups, or for society. Therefore, I think it is worthwhile to pay more attention to the quality of the harms that may occur in the EHR sharing process or through the use of medical AI.

The individual risks entailed by EHR sharing are hacker attacks, leaks, and instances of data misuse that can harm individuals in multiple ways. Patients can experience psychological stress when their health-related information is leaked and becomes public [55]. They can suffer economic losses when their data are hacked and used for blackmail. They can suffer a loss of autonomy when their data are misused to support political causes or social changes without their consent [26]. Patients who do not believe these risks to be reasonable may develop trust issues regarding all medical procedures in which EHRs are generated. A real-life example of this apprehension is the failure of the NHS project 'care.data', which tried to extract GP surgery data into a central database that was supposed to support research, public health planning, and commercial use. Patients were allowed to opt out of this program, but information concerning how to do so was not communicated transparently.
This situation caused a significant number of patients to avoid seeking medical help and to stop disclosing relevant medical information to their physicians. Eventually, the project was paused a year after its launch in 2013 due to massive protests [53, 56]. As the NHS case shows, trust and confidentiality issues are major social risks. Social risks can manifest in a decay of solidarity, instances of discrimination, and even tendencies towards human rights violations. The effects of a decay in solidarity can occur when EHRs are linked to other personal profiles. For example, insurance companies can use health data to individualise risk categories and calculate premiums [57–60], and drug companies can target patient data to drive up prices and prescriptions [61]. Governments and health insurance companies can also discriminate against other market agents by granting EHR access to exclusive business partners without communicating their cooperation transparently and without enabling citizens to withdraw from data sharing policies. For example, in 2015, the British NHS granted Google's DeepMind exclusive access to 1.6 million health records, and in 2018, the US health care provider Ascension made the non-anonymised health data of more than 50 million individuals available to Google [62]. Another type of discrimination can occur in the employment context. Leaked health information concerning employees can put employers in a position to build up discriminatory health-related hiring barriers [63]. As the two law experts Price and Cohen put it, even though there are a number of laws in the US and Europe that prohibit discriminatory hiring practices (e.g., the Americans with Disabilities Act), "they can be hard to enforce because it is often hard to know when discrimination has occurred" [64]. When a collection of medical data is comprehensive enough to include a large portion of a society and when it contains sensitive information, there is also a real risk of human rights violations. Data collections can be used to identify and discriminate against social groups with certain medical or genetic characteristics that are viewed as undesirable or deleterious by political authorities. As the historians Seltzer and Anderson have shown, the data items most commonly used to target populations in the past included ethnicity, religion, country of birth, and native language [65]. Since macro-political shifts and revolutions are rarely predictable events, comprehensive health databases that are collected in democracies today might support totalitarian regimes in committing human rights violations in the future [52]. There are also harms related to the use of medical AI. On a personal level, people can be harmed by erroneous medical AI. An AI causes errors if it interprets data incorrectly, generates false outputs, makes harmful therapeutic suggestions, or physically harms people due to a malfunction. An example of such problems is the IBM supercomputer Watson, which was reported to suggest unsafe and incorrect cancer treatments in a cancer research trial [66]. Errors can also be a product of human intentions. These so-called adversarial attacks can be introduced to any learning algorithm [67]. For example, it is possible to manipulate medical images with pixel noise that is invisible to the human eye yet causes image recognition software to misdiagnose the images [68].
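To make the pixel-noise attack concrete, the following is a minimal sketch of a fast-gradient-sign-style perturbation against a toy linear classifier. A real attack on a medical imaging network follows the same logic using the network's input gradients; the model and "image" here are synthetic assumptions, far simpler than any clinical system.

```python
# Sketch: FGSM-style adversarial noise against a toy linear "classifier".
# For a linear model, the gradient of the score w.r.t. the input is just
# the weight vector, so the attack steps each pixel by epsilon * sign(w).
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=64)            # weights over an 8x8 "image"
x = rng.normal(size=64) * 0.01     # a benign input near the decision boundary

def predict(image):
    """Logistic 'probability of disease' for the toy model."""
    return 1.0 / (1.0 + np.exp(-(w @ image)))

epsilon = 0.05                     # visually negligible per-pixel change
x_adv = x + epsilon * np.sign(w)   # push every pixel toward a higher score

print(f"clean prediction:       {predict(x):.3f}")
print(f"adversarial prediction: {predict(x_adv):.3f}")
print(f"max per-pixel change:   {np.max(np.abs(x_adv - x)):.3f}")
```

The per-pixel change is capped at epsilon, yet the accumulated shift across all pixels is enough to flip the model's verdict, which is the failure mode reported for medical image classifiers.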
In addition to intended and unintended technical errors, patients may face trust issues regarding changes in patient–physician relationships. These changes can be caused by the concern that caregivers might be less skilled in contexts where key medical competences are performed by medical AI in the future [69, 70]. The fear of social isolation caused by the replacement of human interactions with socially engaging AI (e.g., chatbots) is also a part of this picture. Blasimme and Vayena summarise these problems as follows: "exclusive reliance on algorithms may rule out that necessary degree of flexibility that allows healthcare operators to calibrate objective criteria with the reality of each individual case" [71]. The use of medical AI also entails the social risk of a decay of solidarity. If medical AI is going to be as effective as predicted, patients may feel that, in many ways, their lives depend on the tech companies that own the best medical AI [72]. If those developers of medical AI increase prices or block the transfer of knowledge, solidarity structures might break apart [73]. Finally, the possibility of using medical AI to connect anonymised health data with non-anonymised datasets may facilitate human rights violations [14–16]. For example, Wang and Kosinski built a deep neural network that used a database of 35,326 facial images of self-reported homosexual men and women living in the US to learn how to recognise facial expressions that are characteristic of gay men and women [74]. It is not difficult to imagine situations in which tools that can recognise correlations among health information, social features and a person's appearance could be used by political authorities to target citizens and undermine democratic structures [75]. Despite all these potential harms, advocates of P2 may still make the pragmatic argument that unwillingness to engage in medical data research and medical AI development projects can produce a selection and a consent bias that reduces the quality of medical AI. That is, in a hypothetical comparison between a world with a civic duty to share EHRs and a world without such a duty, the first would have relatively fewer biases and, thus, be more beneficial [24]. Given the tremendous number of other factors that also affect the quality of medical AI outputs, however, this argument would be short-sighted at best. Erroneous and, thus, potentially discriminatory and harmful AI outputs may result from errors in data transfer, from incorrectly coded diagnoses and therapies, or from incomplete and insincere patient testimonies [63]. In addition, there are other types of biases that cannot be attributed to citizens' willingness to support medical AI development [42, 43]. There can be capture bias when training data are preselected according to the preferences of users, physicians, or developers; those preferences may ignore the needs of certain social groups. There can also be a negative or a positive set bias when the control data are selected so poorly that the medical AI produces false negatives or false positives for certain groups of people. There can be an automation bias in the process of AI use, caused by the empirical fact that caregivers are less likely to question algorithmically generated diagnostic results [76, 77]. In addition, it is difficult or even impossible for patients and health care professionals to understand how complex algorithms work, whether an output is erroneous, or how a given error came about.
That is why non-explainable algorithms are also known as black box algorithms [55, 78]. All these factors can lead to poorer outcomes for some individuals and social groups compared to a world without medical AI [79]. Overall, there is a multitude of potential risks that matter in the medical AI development process and in the use of such AI. AI projects that can be considered low risk and high benefit, and that therefore might inspire a civic duty to share EHRs, need to prove their low-risk profile by (i) compiling databases and collecting datasets in a way that makes re-identification improbable, (ii) constantly updating cyber security standards, (iii) not accessing data items that are not necessary for the development process, and (iv) not developing medical AI that can easily be used to discriminate against or endanger individuals or social groups. However, because these characteristics apply only to very few medical AI development projects, the 'low risks, high benefits argument' has very limited potential to establish a civic duty that obviates IC requirements.
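Read as engineering requirements, the four criteria amount to a project gate. The sketch below encodes them as boolean checks; the class and field names are illustrative assumptions, and in real governance each flag would have to be backed by documented evidence rather than a self-declared boolean.

```python
# Sketch: the four low-risk criteria above as an explicit project gate.
from dataclasses import dataclass

@dataclass
class MedicalAIProject:
    reidentification_improbable: bool    # (i) datasets resist re-identification
    security_continuously_updated: bool  # (ii) cyber security kept current
    data_minimisation_enforced: bool     # (iii) only necessary data items accessed
    low_misuse_potential: bool           # (iv) hard to repurpose for discrimination

    def qualifies_as_low_risk(self) -> bool:
        return all((
            self.reidentification_improbable,
            self.security_continuously_updated,
            self.data_minimisation_enforced,
            self.low_misuse_potential,
        ))

project = MedicalAIProject(True, True, True, False)
print(project.qualifies_as_low_risk())  # False: criterion (iv) is not met
```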
2022-12-10T00:00:00
2022/12/10
https://pmc.ncbi.nlm.nih.gov/articles/PMC9736708/
[ { "date": "2022/12/01", "position": 32, "query": "universal basic income AI" } ]
Editorial: Scarcity, regulation, and the abundance society
Editorial: Scarcity, regulation, and the abundance society
https://www.frontiersin.org
[ "Desai", "Deven R.", "Georgia Institute Of Technology", "Scheller College Of Business", "United States", "Lemley", "Mark A.", "Stanford Law School" ]
by DR Desai · 2023 · Cited by 6 — ... AI computing. Bitcoin relies on scarcity of computing to create value ... A recent idea has been to embrace some type of universal basic income (UBI).
Editorial on the Research Topic
Scarcity, regulation, and the abundance society

New technologies continue to democratize, decentralize, and disrupt production, offering the possibility that scarcity will be a thing of the past for many industries. We call these technologies of abundance. But our economy and our legal institutions are based on scarcity. Abundance lowers costs. When that happens, the elimination of scarcity changes the economics of how goods and services are produced and distributed. This doesn't just follow a normal demand curve pattern—consumption increases as price declines. Rather, special things happen when costs approach zero. Digitization and its effects on the production, organization, and distribution of information provide early examples of changes to markets and industries. Copyright industries went through upheaval and demands for new protections. But they are not alone. New technologies such as 3D printing, CRISPR-Cas9, artificial intelligence, synthetic biology, and more are democratizing, decentralizing, and disrupting production in food and alcohol, in biotechnologies, and even in the production of innovation itself, opening the prospect of an abundance society in which people can print or otherwise obtain the things they want, including living organisms, on demand. Abundance changes the social as well as the economic context of markets. How will markets and legal institutions based on scarcity react when it is gone? Will we try to replicate that scarcity by imposing legal rules, as IP law does? Will the abundance of some things just create new forms of scarcity in others—the raw materials that feed 3D printers, for instance, or the electricity needed to feed AIs and cryptocurrency? Will we come up with new forms of artificial scarcity, as brands and non-fungible tokens (NFTs) do? Or will we reorder our economics and our society to focus on things other than scarcity? If so, what will that look like? And how will abundance affect the distribution of resources in society? Will we reverse the long-standing trend toward greater income inequality? Or will society find new ways to distinguish the haves from the have-nots? Society already has examples of each type of response. The copyright industries survived the end of scarcity, and indeed thrived, not by turning to the law but by changing business practices: leveraging the scarcity inherent to live performances, using streaming technology to remove the market structures that fed unauthorized copying, and reorganizing around distribution networks rather than content creators. Newsgathering, reporting, and distribution face challenges flowing from democratized, decentralized, and disrupted production. Luxury brands and NFTs offer examples of artificial scarcity created to reinforce a sort of modern sumptuary code. And we have seen effective, decentralized production based on the economics of abundance in examples ranging from open-source software to Wikipedia. In this introductory essay, we survey the potential futures of a post-scarcity society and offer some thoughts as to more (and less) socially productive ways to respond to the death of scarcity.

Beyond the economics of scarcity
Information, digitization, and scarcity
Information goods and the success of abundance
Questions about scarcity and abundance are central to how humans organize societies. Traditional capitalist economics is based on scarcity (Frischmann and Lemley, 2007). Things are valuable because they are scarce.
The more abundant they become, the cheaper they become. We pay for things because it takes resources—land, raw materials, human labor—to produce them. In general, the more resources it takes to produce them, the more we pay (Samuelson and Nordhaus, 2010). The most fundamental graph in economics shows a supply curve and a demand curve. The supply curve slopes up because resources are scarce, and the demand curve slopes down because money too is scarce. Generally speaking, markets meet in the middle—when it costs more to make something than people are willing to pay for it, manufacturers stop making it. When there are exceptions—when customers are willing to pay a great deal for something that is cheap to make—the producer may make a substantial profit in the short term. But in the long run, other producers, attracted by the high profit margin, enter and offer the cheap product at a lower price, competing away the extra profit margin. Price settles at marginal cost. Indeed, economics as traditionally taught is the study of how people and society allocate scarce resources (Robbins, 2007; Ghosh). When tangible, and often consumable, things such as food, oil, lumber, and clothing are in limited supply, economics tries to explain how to allocate the scarce items. Even if one doesn't consume an item, often only one person can possess it (Frischmann and Lemley, 2007). And in the rare circumstances where that is not true, we often see that as a reason for the government to intervene to provide the good. The traditional economic story of information is somewhat different. Information is a public good; that is, "one that is non-rivalrous and difficult to exclude non-payers from using" (Wu, 2017; Menell et al., 2022). Unlike, say, ice cream, my consuming information doesn't prevent you from also consuming it. Accordingly, the marginal cost of producing the next copy of information approaches zero (though the physical goods in which information has traditionally been encapsulated, such as books or films, do cost money to produce and distribute). As such, economists worry that things—goods or information—that cost a lot to develop but little or nothing to copy will be underproduced because the ease of copying means producers won't be able to charge enough to recoup their investment in making the thing in the first place (Scherer and Ross, 1990; Landes and Posner, 2003). For most public goods, the traditional solution is to regulate market entry, designating one company as the exclusive provider of, say, electric power or telephone or cable service for a particular region and allowing that company to make up its fixed costs by charging its captive customers a price above marginal cost (Samuelson and Nordhaus, 2010). Intellectual Property (IP) laws take a similar approach, creating a right to exclude competition in a particular piece of information so that the creator can make up its fixed costs by charging customers a price above marginal cost (Lemley, 1997; Boyle, 2009). Unlike more traditional regulated industries, however, the government does not regulate the price IP owners can charge, but instead relies on some combination of the temporary duration of the IP right and imperfect competition from other inventions to keep prices in line (Abramowicz, 2004; Yoo, 2006, 2009; Lemley and McKenna, 2011). In effect, the point of IP laws is to take a public good that is naturally non-rivalrous and make it artificially scarce, allowing the owner to control how many copies of the good can be made and at what price.
In so doing, IP tries to fit information into the traditional economic theory of goods. The fit is imperfect, though, both because IP's restriction on competition creates a deadweight loss to consumers who would have bought the good at a lower price and because the very existence of the IP right means that competition cannot discipline pricing in the same way it does for goods. But a series of technological changes is underway that promises to end scarcity as we know it for a wide variety of goods. The Internet and related, complementary technologies are the most obvious examples, because the changes flowing from them are furthest along. Even before those changes, the copyright industry offered an early example of the way abundance can alter a market to increase rather than decrease revenues. The home movie market started as a high-priced one for those who could afford both expensive home video players and expensive tapes of movies. Then new technology fostered abundance in the market. First, the machines evolved, with VHS winning the format battle. Second, many producers entered the VCR market, and the cost of the machines dropped. Third, people began to buy VCRs to record TV broadcasts. Increased VCR ownership created the opportunity for consumers to buy or rent films on videotape. Following the playbook about costs to copy and the desire for artificial scarcity, studios sought "total control of the cassette from the manufacturer to the customer." Studios began by pricing copies at $80–$90, so that it made more sense for a rental store to buy a tape and recoup its costs with each rental than for a home consumer to buy a copy. Nonetheless, a few studios experimented with the new market and priced tapes at $19.95 so that more people could own a copy and watch it as often as they liked. By 1996 the rental market was at $9.2 billion and the ownership market at $7.2 billion, with more growth in direct-to-video movies to come (Roehl and Varian, 2001). These experiments should have told copyright incumbents in music that lowering prices to make illegal copies a less attractive option was the best move. Anti-copying laws and technical measures played their part in the home video market, and technology that hindered getting a clean copy of a recently rented movie likely helped the industry. But that alone was not enough. The combination of a reasonable price point and the fact that street or illegal copies were of lower quality allowed a new market and revenue stream to flourish. Although VHS was an analog example of scale and market issues, the lessons carried over with greater force once a series of technological changes reached the industry. The music industry's experience fighting, and then acquiescing to, digital content is well known, but tracing the intersection of technologies that led it there shows why more and more sectors could move to a low- or post-scarcity equilibrium. The digitization of music was one key change. Physical copies went away in favor of files. Given the low speed and bandwidth of modem connections, early fears of copying centered more on digital audio tapes than on copying and sharing digital files. The dream of a celestial jukebox was just a dream. But music compression improved. The Internet became commercial. Bandwidth and connection speeds increased. All these complementary technologies converged and unleashed the power to distribute recordings at will at essentially low to no cost.
In addition, software changed the way music was recorded and gave creators access to high-end production techniques. Rather than needing expensive access to recording studios and music-producer expertise to make an adequate demo tape that they hoped would lead to a recording contract, artists could make high-quality recordings with high-end production techniques on their own. The cluster of production and distribution technologies democratized and decentralized the music industry. Digitization is a core first step toward ending scarcity because it helps remove physical limits. That shift makes the realities of low-cost copying and distribution acute, and producers must adapt to them. Digitization not only affects the way copyrighted products are consumed but also the way they are produced, and thus the nature of the industry in general. Once digitization takes hold of an information market, it dramatically reduces the cost of producing that content. Add in the nature of the Internet, and not only does production cost drop, but other aspects of the market that limit abundance fall away as well. The Internet accelerates the changes because it reduces the cost of reproduction and distribution of informational content effectively to zero. Furthermore, as the Internet has fostered an abundance of low-cost information creation and sharing, it has created a variety of intermediaries, such as search engines and Web hosts, that enable access to information for free or at a very low cost. Those intermediaries are agnostic about (and quite often ignorant of) the content they are distributing. In short, digitization and the Internet have disaggregated creation and distribution. I can create without distributing, secure in the knowledge that my works will be disseminated by others who distribute without creating. The result has been a resounding success story. People are creating and distributing more content now than ever before, by at least an order of magnitude (Rifkin, 2014; Lemley, 2015). Economic scholarship suggests that although recording industry revenues declined substantially from their 1999 high until around 2011–2013, more songs were being released than ever before, by more new artists than ever before, with more purchases of music than ever before, and the songs released seem to be of at least as high quality as before the digital disruption of the industry (Lunney, 2012; Waldfogel, 2012). The claim that music (or video, or text) would stop being produced without the economics of scarcity was proven false (Cohen, 2011; Lemley, 2011). But that doesn't mean digital technologies brought no disruption. Incumbents had to retool their business models. High-cost intermediaries and distribution networks changed or went out of business. A world of four or five major labels controlling close to 80 percent of the market shifted, and a host of smaller labels produced more music. Artists sold their work directly to consumers. Apple's iTunes, Amazon, and Google Play began selling singles at 99 cents to $1.30. Rhapsody and Spotify developed subscription services. Concerts became a major source of income. After some legal fights, YouTube came up with a system that allows rights holders to identify potentially infringing works and offers them ways to make money from uses previously too expensive to negotiate even through rights collectives such as the American Society of Composers, Authors, and Publishers (ASCAP) or BMI.
Digitization and network technology shifted the way music is created, sold, and monetized. The practice was democratized. Yet, as one music industry report shows, the industry experienced seven years of growth between 2014 and 2021, with 2021 global revenues totaling $25.9 billion, an 18.5 percent increase from 2020 (Richter, 2022). Perhaps counter-intuitively, the bottom was in 2014, the year streaming began, and it was the advent and embrace of streaming that returned the market to growth. Once again technology increased abundance, and the industry adapted to that change. Something similar happened with video, books, and even news reporting. The rise of sites like YouTube has led to an astonishing outpouring of videos from outside Hollywood. More than a decade ago, YouTube had more content added every month than the major TV networks had created in 60 years. Since then, the volume of uploads has grown from 300 to more than 500 hours of new content added to YouTube every minute. At the same time, despite the COVID pandemic's effect on movie theater attendance, the movie industry is faring better than ever before in history (McClintock, 2021). This success is in part because of the industry's embrace of streaming content, a technology that seemed to threaten the industry a decade ago (Oberholzer-Gee and Strumpf, 2010). People are buying more books than ever before, with print books still accounting for 76 percent of sales revenues in 2021. And while the price of those books has declined somewhat, writers are also publishing more books than ever before, including a surprising number of successful self-published books (Oberholzer-Gee and Strumpf, 2010; Waldfogel and Reimers, 2015). Print newspapers have seen revenues decline because of the Internet (Edmonds, 2012), but that doesn't mean news reporting has declined; more news is reported more quickly from more sources as individual citizens become increasingly capable of documenting the world around them. Nor has the quality of journalism necessarily fallen; indeed, one recent study finds that "newspaper content appears to be getting more sophisticated in response to increased Internet penetration" (Salami and Seamans, 2014). True, there is lots of misinformation out there, and that's a problem. But there is also much more factual news reporting than in prior eras. And despite piracy, both the film and publishing industries reported higher profit margins in the 2010s than they did a decade before (Band and Gerafi, 2013). Live music and shows have also reached unprecedented levels of revenue and profit. Overall, the picture of the entertainment industry is far from bleak; the overall industry grew from $449 billion in 1998 to $745 billion in 2010 (Travis, 2015). Perhaps most surprising, people are creating an astonishing array of content specifically for the purpose of giving it away for free on the Internet. Early on, scholars worried that no one would create content for the Internet because they couldn't see a way to get paid (Ginsburg, 1995), but it is hard to think of a prediction in all of history that has been more dramatically wrong. People spend hundreds of millions—or even billions—of hours a year creating content online for no reason other than to share it with the world. They create and edit Wikipedia pages, post favorite recipes, create guides to TV shows and video games, review stores and restaurants, and post information on any subject you can imagine (Benkler, 2002, 2006; Rimmer, 2009).
The claim that people would not create and share their creations because of the public goods aspect of information, as the economics of scarcity predicts, has not been borne out. Rather, even in the analog days, we all knew of garage bands, artists, tinkerers, and other creators whose work was local and under the radar. The shift to digital, networked creation has unearthed these creative efforts and provided new ways to share them. If, as Doctor Johnson famously suggested, “[n]o man but a blockhead ever wrote except for money” (Johnson, 1884), we are a world of blockheads, gleefully creating and sharing all sorts of content with the world. Ghosh's and Asay's contributions to this volume note the fundamental nature of the changes the Internet has wrought on copyright and incentives to create (Ghosh; Asay), and Said discusses how copyright law uses the rhetoric of scarcity to justify its continued dominion.

Digitizing physical goods: The promise of abundance

More recently, new technologies promise to do for a variety of physical goods and even services what digitization and the Internet have already done for information. 3D printers can manufacture physical goods based on any digital design (Desai and Magliocca, 2014; Newcomb, 2022). But that has been the case for a range of computer-numeric-control devices for some time. The difference is the intersection of increasingly sophisticated yet lower-cost 3D printers; ever more accurate and inexpensive scanners; and leaps in materials science allowing 3D printers to move beyond plastics to cement, ceramics, metals, and more. Together these changes have spawned an abundance of the know-how and the means to produce things that were once the province of high-cost manufacturing firms in industries as varied as toys, guns, autos, homes, drugs, and even spaceships. China is even pursuing the construction of an entire hydroelectric dam using 3D printing, robots, and artificial intelligence systems, with almost no humans involved. Several industries use versions of this technology to make better prototypes and bring new products to market faster, but something else is happening too. New players are entering industries, such as the car industry, where start-up costs used to be high and acted as a barrier to entry. For example, Local Motors was able to use crowdsourcing to design a car, with the winning designer receiving $7,500, and then complete the prototype in a little over two months. The two-seater has only 49 parts, most of which were made with 3D printing technology. The third iteration of the prototype took about 40 hours to build. The body itself is a one-piece carbon tub. One car reviewer noted that the other car he tested with a one-piece carbon-tub body was a McLaren 650S priced at more than $300,000. Local Motors planned to release its first vehicle sometime in 2016 at a price between $18,000 and $30,000. In addition, the approach of Local Motors allowed it to build mini-factories for far less than the billion or so dollars traditional carmakers such as Tesla spend (yes, Tesla is traditional on this point). That means Local Motors should have been able to adapt faster, deliver closer to consumers, and offer custom, high-quality, low-cost cars. The amount of high-end technology brought to market at low cost shows that the ability to tinker and create, even in a complex sector such as the automotive industry, is real and persists.
For example, in 2019, BMW revealed a 3D-printable concept car, yet a father and son had already used 3D printing and related CNC technologies to make a Lamborghini at home for an investment of about $20,000 (Voulpiotis, 2019). As with the Local Motors and McLaren comparison, the Lamborghini Aventador on which the 3D-printed version was based cost more than $300,000 (Voulpiotis, 2019). As in other industries facing abundance technologies, incumbents may go after 3D printer sites offering digital plans for parts because of claimed trademark issues (Stumpf, 2022). Or companies may follow the lead of GE Aviation, Lockheed Martin, Raytheon, Honeywell, and Siemens Energy, which have agreed to work on changing their supply chains by supporting U.S. companies' embrace of 3D printing and similar technologies, a move that fits with the Biden Administration's Additive Manufacturing Forward program (Shabad, 2022). In other markets, consumers and tinkerers are creating and sharing plans for homemade toys and even guns. Some of these creations are new, and some build on offerings already in the marketplace. Like the copyright industries, industries that rely on patents are seeing small businesses and individuals “interact” with their IP much more than was possible just a decade ago. Both Matthew Rimmer and Shane Greenstein provide additional examples in their chapters in this volume. Rimmer discusses the development of metal 3D printing and how it is changing manufacturing, while Greenstein discusses how print-on-demand clothing is changing the nature of fast fashion. Synthetic biology has automated the manufacture of copies of not just existing genetic sequences but also any custom-made gene sequence, allowing anyone who wants to create a gene sequence of their own to upload the sequence to a company that will “print” it using the basic building blocks of genetics. In addition, two related technologies, CRISPR and Cas9, have lowered the bar to genetic editing. CRISPR stands for “Clustered regularly-interspaced short palindromic repeats [which] are segments of bacterial DNA that, when paired with a specific guide protein, such as Cas9 (CRISPR associated protein 9), can be used to make targeted cuts in an organism's genome” (National Academies of Sciences, 2016). Because of CRISPR/Cas9, gene editing has gone from being “laborious and time-consuming” (Kreiger, 2016) to being “facile and rapidly achievable” (Sternberg and Doudna, 2015). At least one scientist now offers a DIY gene-editing system that is a simplified version of CRISPR for $120, and he offers “lab protocols, inexpensive equipment, and tutorials” so that the general public can learn the basics of gene editing (Sternberg and Doudna, 2015). The democratization of genetic science is in full swing. Advances in robotics and AI generalize the principle beyond goods, offering the prospect that many of the services humans now supply will be provided free of charge by general-purpose machines that can be programmed to perform a variety of complex functions (Lemley and Casey, 2019; Greene, 2022). While these technologies are not nearly as far along as digital music and film, the changes in these industries share two essential characteristics with technology's influence on those fields: The technological advances radically reduce the cost of production and distribution of things, and they separate the informational content of those things (the design) from their manufacture.
That latter characteristic is critical, because it means that goods that once required individual physical investment with specific materials, labor, and plants can now be produced with generic technology. Sometimes that generic technology is nothing more than a computer. But even if manufacturing is required, computer-aided design and manufacturing mean that a wide array of things can be made with off-the-shelf materials. Combine these technological developments—the Internet, 3D printing, robotics, and synthetic biology—and it is entirely plausible to envision a not-too-distant world in which most things that people want in a wide array of fields can be downloaded and created on site for very little money—essentially the cost of raw materials. Perhaps more important given recent changes in supply chains—be they from COVID's effect on where, how, and when people worked; new demands for green transportation; or the Russia-Ukraine War's effect on fuel and grain supplies—is the promise of distributed, on-site manufacturing. Jeremy Rifkin calls this the “zero marginal cost society” (Rifkin, 2014). If we can avoid the dystopian future of technologically-backed lockdown, the future of many forms of creation is likely to follow the patterns of digitization, decentralization, and democratization. In some cases, such as with things covered by copyright, incumbent industries may embrace the new forms of creation and distribution, as happened with streaming, while many other creators might leverage copyright to license works depending on whether the creator wants credit, income, or control over how a licensee uses the work. Yet the number of people on TikTok alone shows that millions of people are creating and sharing copyrighted works for a range of reasons. Beyond copyright, lots of people will create lots of designs, code, and biobricks that will enable us to use new production technologies to create more physical things. Other people will use, repurpose, and improve on those things, often without paying. But people will continue to create, because some people will pay for their creations, because there will be other ways to make money from being creative, because they want to be known for something or want the feeling of accomplishment that comes with creating, and, ultimately, simply because they can. In some cases, creators use IP to enable sharing and require attribution credit in non-commercial contexts while maintaining rights to charge license fees in commercial contexts (Doctorow, 2006). As one example, Cory Doctorow explicitly gives away his novels, letting people use them in one medium, while also selling them as bound books, because his overall goal is to be found. As he puts it, his evangelical fans don't “just sell books—[they] sell[] me” (Doctorow, 2006). His fame and his presence lead to paying opportunities because he is the scarce resource. As he says, “I've been giving away my books ever since my first novel came out, and boy has it ever made me a bunch of money” (Doctorow, 2006). Yet more and more of these creations will operate outside the IP system, either expressly (biobrick inventors who choose not to patent their inventions, for instance) or by the simple virtue of ignoring that system. This future is not a utopia. None of these technologies is perfect, and each requires physical inputs that will in turn be subject to the laws of scarcity.
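To make the zero-marginal-cost point concrete, here is a minimal numerical sketch in Python. The figures (a $50,000 fixed creation cost and a $0.001 per-copy cost) are hypothetical, chosen only to show the shape of the argument rather than to reflect any actual industry data.

```python
# Hypothetical illustration: the average per-copy cost of a digital good
# as output grows. A one-time fixed creation cost is amortized across
# copies, while the marginal cost of each additional digital copy is
# near zero.

FIXED_COST = 50_000.0   # hypothetical one-time cost to create the work
MARGINAL_COST = 0.001   # hypothetical cost to reproduce and distribute one copy


def average_cost(copies: int) -> float:
    """Fixed cost spread over all copies, plus the per-copy marginal cost."""
    return FIXED_COST / copies + MARGINAL_COST


for copies in (1_000, 100_000, 10_000_000):
    print(f"{copies:>12,} copies -> ${average_cost(copies):,.4f} per copy")

# Expected output:
#        1,000 copies -> $50.0010 per copy
#      100,000 copies -> $0.5010 per copy
#   10,000,000 copies -> $0.0060 per copy
#
# As distribution scales, the average cost converges to the (near-zero)
# marginal cost -- the core of the "zero marginal cost" argument.
```

The same arithmetic applies whether the fixed cost buys a recording, a 3D design file, or a gene sequence: once the informational content exists, each additional copy costs almost nothing, and the average cost per copy falls toward zero as distribution scales.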
Further, the lesson of digitization and the Internet is that while cheap, democratized production drives more creation, not less, it may also change the nature of that creation. Without IP rights we may see more creation by amateurs and academics and less by professional creators, just as in music we now see more new bands and fewer bands with multi-album staying power. That is both a good and a bad thing; removing the requirement of a major-label record contract has surfaced new talent and enabled it to enter the music market, but the decline of professional artists may change the nature of music in ways that cause us to lose some music we'd like to have. Similarly, it is possible to imagine both a wealth of new product designs for 3D printers and a decline in the number of professional design firms. And in synthetic biology and genetics, where at least some products, like viruses and FDA-controlled chemicals, are likely to be heavily regulated, the cost and delay associated with that regulation may require some means to recoup investment. At least in the medium term, however, professional firms are likely to coexist with the amateurs, just as professional musicians and movie studios have found it possible to coexist—even thrive—alongside the new entrants. The dramatic reduction in cost that has spurred new entry also boosted the demand for content—people consume more music and video content than ever before, for example—and people are willing to pay for things they like if they are delivered in convenient packages. And IP rights are unlikely to disappear even if they are increasingly flouted, so professional providers who choose to rely on IP rather than sharing their work for free can still make some money by doing so. In short, the technologies of abundance offer a world in which people create more things at less cost, largely despite rather than because of IP laws. IP laws will continue to exist, and they will provide a necessary incentive for some forms of creativity. But creation that relies on IP is likely to play a less and less significant role in a post-scarcity world.

What remains: Transforming the physical

We come to the scarcity-abundance tension from intellectual property (IP) and information law perspectives, but we acknowledge that not everything can be digitized (Desai, 2014; Desai and Magliocca, 2014; Lemley, 2015). Many things still need to be made and delivered. An abundance society still requires the production of raw materials and infrastructure—food, energy, and the feedstock for 3D printers, data centers, communications infrastructure, and so on. As the population grows, the demand for more food and energy persists. And the response to prior technologies of abundance in capitalist societies has been to demand more stuff, increasing production and consumption. One possibility is that we start the cycle of consumption all over again. But even in non-information fields, technologies of abundance may change the landscape. Agriculture offers a perspective on the interplay of technology and abundance. As one report sums up, despite a population boom between 1900 and 2011, Malthusian fears of starvation did not materialize. Instead, the world went from 1.7 billion to 7 billion people while still “produc[ing] enough calories in 2012 to feed the entire population, plus an additional 1.6 billion people” (Johns Hopkins Center for a Livable Future, 2022).
Advances in food production technology such as the development of fertilizers or the genetic engineering behind the Green Revolution allowed greater yields. Other changes such as tractors and harvesters reduced the amount of human and animal labor needed to farm and increased the efficiency of a given farm plot (Dimitri et al., 2005; Johns Hopkins Center for a Livable Future, 2022). The invention of refrigeration allowed crops to be grown in lush farmlands and shipped to urban centers across the U.S. and the world. These changes increased food security such that India—a country with hundreds of millions of mouths to feed—became a net exporter. In sum, several technologies—shared and improved food stock such as corn, rice, sweet potatoes, and cassava; transportation innovation in rail and shipping; new methods for storing food in larger amounts and over long distances; and synthetic fertilizers—converged to create abundance. The history of agriculture in the U.S. shows more about the way technologies of abundance alter a sector and society. There was a time when over 60% of the people in the United States were primarily employed producing food (Rifkin, 2014). Even in 1900 the number was 41% (Dimitri et al., 2005). The dropoff continued such that by 1930 the number was 21.5%; by 1945, 16%; by 1970, 4%; until by 2002 the number was below 2% (Dimitri et al., 2005). Comparing two other metrics shows where technologies of abundance led to major shifts in how we live and work. Agricultural GDP was 7.7% of total GDP in 1930; 6.8% in 1945; 2.3% in 1970; and 0.7% in 2002 (Dimitri et al., 2005). Mechanization changed farming as well. In 1900, 21.6 million work animals were used in farming. By 1930 the Census reported 18.7 million horses and mules and 920,000 tractors in use; by 1945, 11.6 million horses and mules and 2.4 million tractors; by 1960, 3 million horses and mules and 4.7 million tractors (the Census stopped keeping this data in 1960; Dimitri et al., 2005). As farms embraced technology that improved production, the amount of human labor needed of course went down. Thus, both food and labor moved from scarcity to abundance. Those changes were dramatic, more dramatic than anything we face today. What would people do when they no longer needed to grow food to survive? The answer is instructive: They would do a whole array of things no one in 1800 had ever imagined, often simply because they could. They were freed from the need to grow their own food and turned loose to create new things and new means of passing their time. This wasn't all leisure time, of course, though Americans in the twentieth century worked many fewer hours than in the nineteenth century. But for most people, even working to put food on the table no longer meant growing that food. They could make and do other things and use some of the money they earned to buy food from the dwindling number of farmers. The abundance of labor and time contributed to the Industrial Revolution (Overton, 1996), which brought dramatic change of its own but also unprecedented improvement in the human condition. Today we can envision the global equivalent of what happened in the United States over the past 200 years. What becomes possible once we no longer must compete for food? Can we reach a stage of production where human labor and environmental costs are so low that we can provide nutritious food to all? It seems we have enough calories to go around and then some. Nonetheless, what the U.N.
calls the prevalence of undernourishment (PoU) persists for 770 million people, or almost 10% of the world, with some continents, such as Africa, reaching 21% (FAO, 2021). A related issue is food insecurity (lack of access to nutritious and sufficient food), which in 2020 affected “Nearly one in three people in the world (2.37 billion)” (FAO, 2021). The issues are not primarily about abundance but instead about access to it. The problem of having enough food but the food not reaching everyone returns us to scarcity. Food is abundant. Scarcity is social, economic, and political. Recent disruptions to supply because of the COVID pandemic, extreme weather, and the war in Ukraine increase the barriers to food distribution (Egan, 2022). Volatile food prices and severe food shortages can set off conflicts and increase socio-political unrest (Brück and d'Errico, 2019). As U.S. Secretary of Agriculture Tom Vilsack has said, food security allows for a stable democracy (Vilsack, 2022). He also said, “Show me a nation that doesn't feed its people, and I'll show a nation that's looking to try and expand its borders,” as he tied the war in Ukraine to Russia's desire to take over Ukraine's tremendous agricultural output (Vilsack, 2022). If society can reduce or eliminate global food insecurity, not only would people have access to sufficient food, but the risk of violent, destabilizing events that damage infrastructure and displace populations should go down. Producing more food with less effort and having that food reach everyone is thus not the only goal. Even with today's abundance, concerns abound about how sustainable current methods are. The farming methods that have created surpluses also create serious negative externalities related to the use of fossil fuels, unsustainable water management, monoculture farming, the effects of fertilizers and pesticides on soil, and soil erosion (McKenzie, 2007). In addition, the ongoing catastrophe of climate change demands farming techniques that rely less on burning carbon and using fertilizers while maintaining nutrition and increasing yields. These new demands are spurring innovations in vertical farming and GMOs that may even shift the farming of crops such as tomatoes and strawberries from alternating hemispheres to year-round production in the United States, thus increasing access to unprocessed foods and reducing the need to import fruits and vegetables from Central and South America during winter and spring. As technology improves how and where we farm, abundant food should persist, and it will be up to policymakers to solve distribution problems. Wadhwa's chapter in this volume offers some remarkable examples of how they are doing so. Energy presents similar production issues, ones where regulation and infrastructure needs intersect and create challenges for the shift to abundance. The energy sector has gone from highly regulated to deregulated, and yet until recently production barriers meant that large players maintained control over how homes or small communities produce power. Solar and wind power have been around for a long time, but it has taken increased demand for renewable energy and government subsidies for these technologies to reach economies of scale that put them within consumers' reach. The move to renewable energy is in full swing, and it is likely to be accelerated both by world events demonstrating the fragility of fossil-fuel supplies and by the inexorable reality of climate change.
Indeed, we may have reached an inflection point. On March 29, 2022, wind power surpassed coal and nuclear power for a full 24 hours as a source of U.S. energy (Storrow, 2022). That was possible because recent investments in wind power mean that wind power “has grown from about 2 percent of annual American power generation to more than 9 percent” (Storrow, 2022). And the dramatic decline in solar prices has made solar not only feasible but cheaper than fossil fuels, even before we take into account the considerable social costs of the latter. Wind and solar energy were only 12% of total U.S. energy used in 2021 (U.S. Energy Information Administration, 2022). But with other renewable or non-carbon sources like hydro and nuclear added in, the share of energy generated from sustainable sources will soon be above 50%, and its growth is only accelerating. Wadhwa's chapter in this volume explains why that trend is effectively unstoppable. Even though technologies of energy abundance exist, political and structural problems can hinder society's ability to use them well, revealing new chokepoints of scarcity. For example, power plants need power lines to reach consumers, but those lines are not being built because of not-in-my-backyard objections and rights-of-way issues (Friedman, 2022). These barriers are so significant that not even billionaire Philip Anschutz has been able to connect his Wyoming wind farm, which could supply power to nearly 2 million customers, to the Southwestern U.S., which desperately needs that power (Friedman, 2022). Contrasting Germany's experience with that of the U.S. shows that political will is needed for abundance technologies to take hold. In 2011, Germany gave up on its nuclear power plants (which are not renewable but do not put carbon in the atmosphere as fossil fuels do), which accounted for almost 25% of its electricity (Friedman, 2022). Germany had no immediate backup plan and turned to coal and gas plants and imported energy to fill the gap (Friedman, 2022). The difference is that Germany also had a plan of tax incentives and subsidies in place to stimulate the switch to renewables (Friedman, 2022; Wehrman, 2022). Just over a decade after Germany began its program, 54% of German energy consumption comes from renewable energy sources (Friedman, 2022). Other energy sources such as nuclear power will face opposition from some environmental quarters but could reduce energy costs significantly. Unlike solar and possibly wind power, home nuclear power (fission or fusion) is only a science fiction story of the Back to the Future sort. Put differently, the nature of nuclear energy production will likely still require one or a few centralized, large players. Regulation will enter, as with other public goods and natural monopolies, because a decentralized market for nuclear power is not efficient, or at least is unlikely to emerge. But even if it is supplemented with large central plants, the production of power, which centralized throughout the twentieth century, is likely to become increasingly decentralized in the twenty-first century. We could and should end up with a well-functioning hybrid system where a combination of centralized and decentralized power generation offers low-cost, abundant, greener, and resilient power. Digitization and technologies of abundance won't make supply chains a thing of the past. Even with advanced 3D printing, making physical things requires raw materials, and those raw materials must come from somewhere.
But by dramatically reducing and simplifying what must be moved from place to place, abundance technologies offer the promise of making those supply chains simpler, cheaper, and more environmentally friendly.

Responses to a world of abundance

Degrees of freedom

We acknowledge that not everyone shares our view of the upsides of abundance. More content is great, but Brett Frischmann and Michael Madison worry that it leads to scarcity of attention span (Madison et al.). More news sources are great, but Kanuri and Pattabhiramaiah worry that they have hollowed out traditional news media and led to a lower overall quality of information. Efficient delivery of that content by leading players is great, but Burstein worries that concentration in communications may take us back to the days of government regulation of speech through the “fairness doctrine.” And to be clear, we do not think everything will be abundant; rather, we suggest that many more things will be abundant in ways that matter for the economy and the law. The distinction between information-based, non-rival products and rivalrous products matters. As more and more things can be digitized, the costs to create, produce, and distribute those things will go down and approach zero. Thus, on a long time horizon, one can expect an equilibrium with low costs and nonetheless high production. But even that isn't a guarantee, because abundance may generate demand that consumes what technology has made available. Consider the high electricity costs in two information production sectors, cryptocurrency mining and AI computing. Bitcoin relies on scarcity of computing to create value. High-cycle computing faces scarcity of hardware and the costs of running machines at high volume. Both of these digital-sector activities are information-based and so could be mistaken for the sort of abundance that nears zero cost. Computationally intensive acts like mining cryptocurrency are cheap per computation but not free. The ability to engage in those acts cheaply has created a new market for computation that couldn't have been conceived of in a world of computational scarcity, one that increases consumption so much it may render scarce what technology made abundant. These are legitimate concerns. But they do not suggest to us that abundance is a bad thing. Abundance tends to flow from technology. Technology is ambipotent (Lowrance, 1986). It and its outputs can be used for a range of outcomes. In that sense, the concerns suggest that abundance is an output that can be managed. How that management occurs, and how it affects others, is a function not just of technological advancement but of social context. More generally, we think technologies of abundance open up the possibility space for people and societies. More people have at least the potential to make, acquire, and do things they never could before. Whether that potential will be realized depends on whether and how those technologies prompt a restructuring of social and legal relationships, and on the will to make such changes. We explore some of those potential restructurings, for good and ill, in the following sections.

Replicating scarcity: Regulation, IP, status goods, and NFTs

The existing economy of scarcity has some powerful, entrenched interests on its side. It also has a sort of intellectual myopia; we find it hard to envision what economic organization looks like in a world without scarcity. Scarcity may even be hard-wired into our brains, which are used to competing for resources.
One likely reaction to the elimination of scarcity is to try to replicate it. In this section, we consider several ways that might happen.

Regulation of disruptive technologies

The energy sector shows the potential for abundance. It also shows how strong the desire to recapture scarcity profits is. Even California, unquestionably the leader in green tech and climate change mitigation, shows how a politics that seeks to foster abundance can be hijacked. In 2006, then Governor Arnold Schwarzenegger's administration championed greener energy and the move to solar power. The combination of technology and social policy has led to California having “1.3 million solar rooftops generating roughly 10,000 megawatts of electricity—enough to power three million homes” (Schwarzenegger, 2022). This abundance ought to be welcome, both because it generates cheaper power and because that power is renewable and does not contribute to climate change. But it wasn't welcome to one important constituent: power companies. Power companies generate power, but they also transmit it. And they need revenue to maintain the grid, much less to harden it for the coming climate catastrophe. As more people, often wealthier people, move off the grid, those still on the grid will face higher costs for their energy, because the power company must maintain the overall grid regardless of how many customers remain on it. These tensions show how abundance in one quarter can lead to poorer outcomes for the system as a whole. Claiming to address this problem, and despite California's professed commitment to clean energy, at the end of December 2021 the state tried to cut “by about 80%” the rate paid for energy created by home renewables and to add a new “steep grid access charge[], about $60 a month for a typical solar customer” (Anderson, 2022). This was an effort to return to scarcity and the centralized provision of power with which entrenched incumbents were familiar. California would still support solar energy, according to this proposal; it would just support large industrial solar farms run by the power companies. Energy companies may need to adjust rates to maintain the overall grid, and indeed we need to invest in modernizing that grid to handle the move to clean energy (Welton, 2021). But the proposed rule sought to gut the advantages of decentralized, democratized technology in favor of the utility companies in a way that would run counter to the benefits of abundance. As with all things environmental, the issue is complicated, but this was first and foremost an effort by utilities to hold onto the centralized model of power production that predates technologies of abundance. This is but one example of what Mark Lemley and Mark McKenna have documented—the effort of incumbents across many markets to try to block disruptive technologies (Lemley and McKenna, 2020). The tendency to try to recapture a market moving to abundance does not mean abundance is doomed. Rather, it shows that varying forces can pull, or at least try to pull, a sector moving to abundance back to scarcity and centralized control. Whether that desire succeeds depends on things beyond the technology that enables abundance. Put differently, while technological change creates the possibility of abundance, ending scarcity can happen only if those technologies are coupled with the political will to displace the scarcity-based systems they disrupt.

IP rights and artificial scarcity

The role of IP in a world of abundance is both controverted and critically important.
IP rights are designed to artificially replicate scarcity where it would not otherwise exist. In its simplest form, IP law takes public goods that would otherwise be available to all and artificially restricts their distribution. It makes ideas scarce because then we can bring them into the economy and charge for them, and economics knows how to deal with scarce things. So on one view—the classical view of IP law—a world in which all the value resides in information is a world in which we need IP everywhere, controlling rights over everything, or no one will get paid to create. That was the initial response of IP law to abundance technologies, but that response is problematic for a couple of reasons. First, it didn't work. By disaggregating creation, production, and distribution, the abundance technologies democratized access to content. Copyright owners were unable to stop a flood of piracy even with 50,000 lawsuits, a host of new and increasingly draconian laws, and a well-funded public education campaign that starts in elementary school. And even targeting the intermediaries proved futile; among the things you can print with a 3D printer is another 3D printer (Orsini, 2014). The world of democratized, disaggregated production may simply not be well-suited to the creation of artificial scarcity through law. Second, even if we could use IP to rein in all this low-cost production and distribution of stuff, we shouldn't want to. The rationale for patent, copyright, and trade secret law has always been not to raise prices and reduce consumption for its own sake, but to encourage people to create things when they otherwise wouldn't. More and more evidence casts doubt on the link between IP and creation, however. Empirical evidence suggests that offering money may actually stifle rather than encourage creativity among individuals. Economic evidence suggests that quite often it is competition, and not the lure of monopoly, that drives corporate innovation (Arrow, 1962). Digitization combined with Internet distribution may have spawned unprecedented piracy, but it has also given rise to the creation of more works of all types than ever before in history, often by several orders of magnitude. Perhaps, as we suggested above, the series of digital technologies has so reduced the cost of creation that more people will create even without an obvious way to get paid. Or perhaps they never needed the motivation of money, just the ability to create and distribute content. Either way, if the goal of IP is to encourage the creation of new works, the examples of technology-driven changes in several IP-based industries suggest that for an increasingly important range of creative works, radically reducing the cost of production decreases rather than increases the need for IP law. But here too inertia and politics matter. The IP system has served us (reasonably) well for a long time by creating artificial scarcity. And a lot of people stand to benefit from that system. Gradually reorienting creation away from scarcity and toward abundance requires an openness to innovation without IP (Lemley, 2015).

Luxury goods and artificial scarcity

One might dismiss the regulatory and IP examples above as evidence of flaws in a political and economic system. Surely, one might argue, the market itself would embrace abundance if left free to do so. Nonetheless, there is some reason to believe that the market responds to abundance by creating artificial scarcity.
Societies have long had “sumptuary codes”—rules that distinguish the privileged from the masses by forbidding the masses from owning or displaying certain types of things (Beebe, 2010; Bechtold and Sprigman, 2022). Conspicuous consumption is an effort to flaunt wealth by displaying an excess of things that are scarce in the world at large. That instinct may persist in society and in the law even in the face of abundance. As Deven Desai has shown, the logic of branding is in fact to create an artificial difference, especially when a good is a commodity that is often quite abundant. A close look at history shows how the Industrial Revolution, with its increased production of competing and sometimes over-supplied commodity goods, better transportation, and the desire and ability of producers to reach consumers directly, led to advertising and branding strategies (Desai and Waller, 2010). These strategies allowed producers to convince customers to ask for a product by name, such as Heinz Ketchup (Desai and Waller, 2010). Branding influenced what is on store shelves while also enabling producers to extract as much as a “20, 25, or 30 percent price premium for a branded good” (Desai and Waller, 2010). And it even persuaded consumers to pay 70% more for brand-name over-the-counter drugs than for their identical generic counterparts, despite government regulation that ensures that the drugs are the same. This tactic crosses from goods like wheat over to luxury items. Thus, Barton Beebe has suggested that the point of trademark law's protection of luxury brands is to serve as a modern sumptuary code, allowing the rich to distinguish themselves from the masses by displaying their expensive watches and handbags (Beebe, 2010). Certainly it is hard to understand otherwise why people will pay thousands of dollars for a Gucci bag when a bag of equal quality, often made by the same people, is available for a fraction of the price (Desai, 2012). And the demand for counterfeit luxury goods suggests a desire on the part of the have-nots to participate in the game (or at least be perceived to do so). Fashion cycles and the fast copying of fashion show similar dynamics (Raustiala and Sprigman, 2006; Hemphill and Suk, 2009; Greenstein). The modern phenomenon of NFTs is an even clearer example. NFTs are valuable precisely because they create artificial scarcity around things that are for the most part identical to works digitization has made available to the masses for free. You can own an NFT of the Mona Lisa, but you don't own the Mona Lisa itself, and indeed you don't have any greater access to digital reproductions of the Mona Lisa than the rest of the world does. What you own—all you own—is the claim to scarcity. You may be the only one (or one of only a few) who owns an NFT of a particular work of art or video clip. But the only thing you own is the scarcity itself. And the “thing” that is scarce is a precise replica of the very same digital information the rest of the world has access to. Joshua Fairfield's chapter in this volume discusses the role of scarcity in NFTs (Fairfield). This may say something deep about the desire to compete in human nature, or at least in capitalist society. Perhaps replicating scarcity is innate in people because it gives them something to compete over and therefore a way to measure themselves against others. Or perhaps it is innate in capitalism or our conception of value.
It may even be a consequence of the skewed distribution of resources in a world that is moving from scarcity to abundance. A few people have an enormous amount of money, and the things money buys are scarce resources, so they invest their money in those resources even if the scarcity is entirely artificial. They may do so merely because they have the money. But they may also do so to signal that they can. The ability to pay huge sums for an NFT signals status in a social order. It is what Stephanie Bair's chapter in this volume identifies as a “positional good” (Plamondon, 2022). Whether the world will value any particular artificial scarcity is an open question. As a recent story about an NFT of Jack Dorsey's first tweet shows, one can buy an NFT for $2.9 million and try to sell it for an absurd $48 million, only to find that the highest offer at the time is $3,600 (Plunkett, 2022). But the numbers can just as easily go the other way. And the underlying instinct to value that which is rare may be more than a mere artifact of our scarcity-based economics. It may be rooted in our culture or even hard-wired into our brains. As legal re-creations of scarcity go, NFTs seem somewhat less harmful to society than overly strict IP laws or other efforts to fight abundance. They do not, after all, deprive others of access to the thing that is being made artificially scarce. We can all wear purple, and we can all have access to the Mona Lisa in digital form. Their most harmful effect is likely the energy consumption required to trade them from person to person. But perhaps we should be troubled by the instinct to distinguish haves from have-nots, even if the distinction seems entirely artificial. If people are generally happier in more egalitarian societies, the instinct to declare a few winners (and by implication, lots of losers) may be harmful in itself. We turn to the distributional consequences of abundance in the final section.

Labor, capital, and distributing abundance

While getting things for free (or close to it) seems like a boon to the economy, a number of commentators worry that the salaries of most people in the country are based on jobs performing tasks that may soon be obsolete. If technology delivers our goods for us without trucks or stores, 3D printers manufacture our goods, gene assemblers take over a growing share of our health care and agribusiness, and robots provide many basic services, what is left for people to do? They could create the things machines will produce and deliver, but as the growth of the gig economy demonstrates, that creation may not be accompanied by a healthy paycheck. Just as happened with farming, our productivity will continue to increase, but it will be machines, not people, that generate that additional productivity (Rifkin, 1996; Rotman, 2013). Hora's chapter in this volume discusses the role of “servitization” in accelerating this trend across multiple computer industries. If the returns to productivity accordingly accrue to capital, not labor, the result may be to deepen income inequality (Piketty, 2014). Some worry about massive unemployment, the decline of the middle-class professional, and exacerbation of the growing gap between rich and poor (Autor and Dorn, 2013; Evans, 2014). And there will certainly be disruptions in economic structures that we have built around office work and middle-class roles.
Mehra, for instance, notes that we have built much modern infrastructure around the assumption that people will travel to offices to work, but the pandemic—and the communications technologies it showcased—may mean that will no longer be true. To the extent that our economy is based on an ever-expanding spiral of consumption, a long-term drop in the cost of most goods could trigger a fundamental economic contraction or social unrest. Work is central to human social identity, and in the past those displaced by technology have reacted violently against it (Friedman, 2014). More recently, despite the almost 40-year run of low inflation and low-cost goods that post-Soviet globalization created, almost all of the resulting growth has accrued to the benefit of the rich rather than the middle class. Frustrations about wages and income inequality ironically generated a backlash that helped launch Donald J. Trump into the White House—and thereby made those problems worse. One might also worry about vesting more and more power in the companies that control the networks over which information flows, companies that face little competition and seem increasingly less likely to be subject to common-carrier regulation (Werbach, 2014). And other aspects of our legal system, like torts, will have to change when the people who produce goods are no longer the large companies that design them, but rather the very individuals who might be injured by them. These near-term issues are real, but more important, they point to a larger pattern underlying the hopes and fears about abundance. The ride-sharing industry presents a good example of how technology can both improve people's lives by eliminating scarcity and still create complex dynamics based on who benefits. People had free time and cars that sat idle. Thanks to software and the Internet, Uber and Lyft connected drivers with riders. Add in GPS available to anyone with a smartphone, and the world of licensed taxi drivers who knew the roads and had to be booked through dispatchers went away. For users, this was unquestionably a good thing. Millions of people had access to effective point-to-point transportation in a way they never had before. For drivers, the situation was more complicated. Taxi drivers lost out, because they had built a lucrative business based on artificial scarcity imposed by taxi commissions that regulated entry and prevented price competition (Lemley and McKenna, 2020). What about ride-sharing drivers? On the one hand, more people had side jobs or even full-time jobs driving people around. The core technology allowed people not only to drive others places but also to run errands and deliver goods. And work flexibility is a godsend for many people who need to supplement their income but have family obligations that don't allow them to take a full-time job. On the other hand, concerns about pay, job benefits such as health care, and more surfaced. Cities and states have experimented with regulations, and even some nascent movements to unionize have emerged. While these issues are being resolved, the underlying technologies of abundance may make the debates less acute if not irrelevant. For the steady improvement of autonomous vehicles and delivery systems points to a world where machines are, as it were, the main workers and far fewer humans run the system. Thus, a new abundance cycle will begin with plentiful and hopefully greener, safer, and more efficient transportation. That shift, however, displaces drivers and errand runners, who will need new work.
Solving these challenges is where government and social policy enter the picture. One way to frame the problem is to ask whether technology-driven abundance fosters a system where a few at the top live off the surplus created by the many at the bottom who have “only a bare subsistence” (Graeber and Wengrow, 2021). As we have suggested, technologies of abundance open up the possibility space, making it possible to get more food, more shelter, and more consumer goods to more people more cheaply. But if all they do is reduce the cost of those things in an economic structure that is still driven by scarcity, whether or not people benefit from that abundance depends on whether their income goes away as well (and whether governments will step in to provide access to cheap necessities to those who no longer have the income to pay for them). Indeed, the shift of returns from labor to capital that the technologies bring could accelerate the “hollowing out” of the middle class in our current economic system (Petersen, 2020). It becomes critical to think not just about how abundant things are, but about whether and how people have access to those things. Arewa's chapter in this volume suggests we have done a poor job so far of ensuring that everyone has access to technologies of abundance. By one account, “An average 61% [of people worldwide] believe that their current positions will be greatly affected by technology change or globalization” (Kovacs-Ondrejkovic et al., 2019). While these risks are substantial, there are reasons for optimism. This is not the first time technology or market forces have fundamentally disrupted our economy. We were alive when the United States was considered a leader in manufacturing, and making products employed a substantial share of our workforce. And we're not that old. Today only 10 percent of our jobs come from manufacturing; the others have been sent overseas or replaced by automation (Rotman, 2013). The loss of manufacturing jobs created substantial disruption, but it did not destroy our economy or lead to a long-term increase in unemployment. Rather, it created transition issues for individual workers, but the workforce as a whole transitioned into service and technology jobs. Even industries still in transition because of digitization and the Internet bring new opportunities along with disruption. Abundance technologies promise the same sorts of improvements, reducing the cost of material things, health care, and services and greatly expanding their availability (Diamandis and Kotler, 2012; Cowen, 2013). They may even provide those benefits while reducing the environmental footprint of consumption: the small bit of electricity it costs to download a song does far less harm to the world than manufacturing plastic disks, putting them in plastic cases, trucking them to retail stores, and having people drive to the stores to buy them (Rifkin, 2014). 3D printing and robotics may offer similar environmental benefits. Asking what we will do in a world where no one has to work helps unpack what steps might be needed to address the social shifts abundance fosters. Even if no one had to work to survive, it seems unlikely that people would do nothing. Humans seem to thrive when they are productive. Maybe they will come up with new creative endeavors, making art or writing the great American novel.
Maybe they will plow the benefits of abundance back into the capital economy, continuing to work hard in order to buy more and better things, or even more artificially scarce things like NFTs and luxury handbags. Either way, John Maynard Keynes' 1932 dream that increases in productivity would mean that people would work only 15 hours a week, because there would simply be no need to work more than that to pay for necessities, is unlikely for now (Keynes, 2010). But as automation, robotics, and artificial intelligence develop, that future may be closer than it seems today. How society reacts to new technologies of abundance depends critically on how the gains from that abundance are distributed. In the last 40 years, essentially all the returns from technology and productivity have gone to capital, not labor. And because capital is the province of the rich, those gains have exacerbated rather than reduced income inequality. The U.S. tax system worsens the problem by favoring corporations over individuals and capital over labor. It is important to ensure that everyone benefits from abundance. One way to do that is to reverse our decades-long emphasis on capital at the expense of labor, adopting tax and economic policies that favor people over corporations, or at the very least treat them equally. No less than Microsoft founder Bill Gates has called for a robot tax to slow the effects of automation and fund other employment (Delaney, 2017). Another is to adopt the principle of Equal Relative Abundance, as Kop suggests in his contribution to this volume, supporting technologies of abundance only to the extent they grow the pie for everyone. Even if technology-driven abundance continues to reward capital and not labor, society has options. A recent idea has been to embrace some type of universal basic income (UBI). The notion of UBI has been around for at least two centuries (Van Parijs, 2014; Bidadanure, 2019). Thinkers such as Thomas Paine, the Belgian socialist Joseph Charlier, John Stuart Mill, James Meade, Martin Luther King, Jr., James Boggs, Milton Friedman, and feminists who were part of “the Wages for Housework movement in the 1970s” have proposed variations on the idea (Bidadanure, 2019). Alaska, the Eastern Band of Cherokee Indians in North Carolina, Canada, Brazil, Finland, Germany, Spain, The Netherlands, Iran, Kenya, Namibia, India, China, and Japan have all tried some form of UBI (Samuel, 2020). The idea has gained renewed interest in the U.S. because of “[t]he growth of income and wealth inequalities, the precariousness of labor, and the persistence of abject poverty” (Bidadanure, 2019). But another driver “is without a doubt the fear that automation may displace workers from the labor market at unprecedented rates that primarily explains the revival of the policy, including by many in or around Silicon Valley” (Marinescu, 2019). Although the details of such ideas and their feasibility are well beyond the scope of this essay, we note that several UBI experiments comport with one of our intuitions: that freedom to do what one wants does not lead to less work (Samuel, 2020). Instead, when UBI has been tried, “baseline educational and health outcomes [often improved], especially among the most disadvantaged,” with little “negative effect on work” (Marinescu, 2019).
By extension, if abundance technologies mean we need less labor and UBI can cover basic needs, people are likely to be happier, to take part-time jobs they like, and to be freer to pursue work they wish to do rather than work they have to do (Van Parijs, 2014). Increased taxes on capital (like Bill Gates's robot tax) might be used to fund a UBI. Or the funds might allow the U.S. to borrow from the Danish Flexicurity program, where employees sign up and pay for two years of unemployment insurance and the government runs education and retraining programs (Denmark, 2022). Indeed, no less than the World Economic Forum has embraced the idea of the Reskilling Revolution (World Economic Forum, 2019; Denmark, 2022). The Danish and WEF approach of public-private partnerships to reskill workers as abundance technologies continue to disrupt puts the correct emphasis on how to evolve with technology rather than blaming it for our woes. As Peter Hummelgaard, Minister for Employment, Ministry of Employment of Denmark, has offered, “When the weather forecast says a hurricane is coming, we act. We take precautions for our own homes. We help our neighbors and we join our efforts in local communities. We take joint responsibility because we are aware of the dire consequences if we do not act” (Hummelgaard, 2020). Funding programs to allow the U.S. workforce to reskill or upskill is a sound strategy that the U.S. should pursue so that the wealth generated by technologies of abundance has a better chance of reaching more people. Retraining for a world of abundance, though, will not necessarily occur fully within the framework of a scarcity-based economics driven by physical things sold for a price. While one possible future involves recreating scarcity, either by developing new goods that are scarce or by artificially duplicating it with brands, that is not the only possible path. The economy we have known for over a century may play a smaller and smaller role in defining how people actually live their lives. As Jeremy Rifkin puts it, “As more and more of the goods and services that make up the economic life of society edge toward near zero marginal cost and become almost free, the capitalist market will continue to shrink into more narrow niches where profit-making enterprises survive only at the edges of the economy... We have been so convinced of the economics of scarcity that we can hardly believe that an economy of abundance is possible. But it is” (Rifkin, 2014). We may spend more of our time inventing and creating, not because we are paid to do so but simply because we have that time to spend. Post-scarcity technologies give more of us the means to be more creative. They give us an abundant source of raw materials to play with, mix, and remix (Lessig, 2008). They free us from constraints that demand our time and our attention (Mullainathan and Shafir, 2013; Heck et al., 2014). That creates room for great optimism about the future—but only if we can adapt our economic system to ensure that we benefit from the technologies of abundance.

Conclusion

Our hope is that with better technology, we can create abundance while not falling into old patterns of haves and have-nots. Such a future may appear to be a Star Trek one, or at least a Star Trek: The Next Generation one, where everything is abundant and money no longer exists. That future is far, far away. Yet perhaps replicators are not as far off as they seem.
For things such as music or movies that can be fully digitized for creation and distribution, we are closer to a replicator world than not. Advances in artificial intelligence mean that systems can now generate new writings, pictures, and even movies after being given some data and instructions. Thus, the world where we might say, “Computer. Image. My house, Starry Night style,” and a fantastic digital (or 3D-printed) image is ready in minutes is essentially here. Of course, the canvas and paints are physical, and energy is still not at Star Trek's near-zero cost. And we cannot yet digitize physical things to transport them or take raw energy and reorder it into matter. Nonetheless, advances in the production of energy, food, media, goods, services, and more have brought a wave of abundance not seen since the Industrial Revolution. The advances have, however, also coincided with new winners and new levels of inequality, as well as efforts to reconstruct the scarcity on which our traditional notion of economics depends. We do not claim to solve the overall tension in this essay or collection. But we think the essays in this book offer important ruminations on the nature of technology-driven abundance, its effect on how we organize society, and the way it might lead us to a better future.

Author contributions

Both authors contributed equally to the research and writing of this paper. Both authors contributed to the article and approved the submitted version.

Acknowledgments

Thanks to Rose Hagan and participants at a workshop on Scarcity, Regulation, and the Abundance Society at Stanford Law School for helpful comments.

Conflict of interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Publisher's note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

References

Abramowicz, M. (2004). An industrial organization approach to copyright law. William & Mary Law Rev. 46, 33–125.

Anderson, E. (2022). A Big Decision on Rooftop Solar in California Is Off the Table, for Now. KPBS. Available online at: https://www.kpbs.org/news/local/2022/01/20/big-decision-rooftop-solar-california-off-table-for-now

Arrow, K. (1962). “Economic welfare and the allocation of resources for invention,” in The Rate and Direction of Inventive Activity: Economic and Social Factors, ed R. R. Nelson (Princeton University Press), 609–626. doi: 10.1515/9781400879762-024

Autor, D. H., and Dorn, D. (2013, August 25). How Technology Wrecks the Middle Class. N.Y. Times, SR6.

Ballen, R. (2022). Local Motors, The Start-Up Behind the Olli Autonomous Shuttle, Has Shut Down. TechCrunch. Available online at: https://techcrunch.com/2022/01/13/local-motors-the-startup-that-created-the-olli-autonomous-shuttle-has-shutdown/

Band, J., and Gerafi, J. (2013). Profitability of Copyright Intensive Industries. InfoJustice working papers. Available online at: http://infojustice.org/wp-content/uploads/2013/06/Profitability-of-Copyright-Industries

Basic Income Earth Network (2022). A Short History of the Basic Income Idea. Available online at: https://basicincome.org/history/
Bechtold, S., and Sprigman, C. J. (2022). Intellectual Property and the Manufacture of Aura. NYU School of Law, Public Law Research Paper. doi: 10.2139/ssrn.4002717

Beebe, B. (2004). Search and persuasion in trademark law. Michigan Law Rev. 103, 2020–2072.

Beebe, B. (2010). Intellectual property law and the sumptuary code. Harvard Law Rev. 123, 810–889.

Benkler, Y. (2002). Freedom in the commons: towards a political economy of information. Duke Law J. 52, 1245–1276.

Benkler, Y. (2006). The Wealth of Networks: How Social Production Transforms Markets and Freedom. Yale University Press.

Boyle, J. (2009). Shamans, Software, and Spleens: Law and the Construction of the Information Society. Harvard University Press. doi: 10.2307/j.ctvk12s8g

Brown, R. S. Jr. (1948). Advertising and the public interest: legal protection of trade symbols. Yale LJ. 57, 1165–1206. doi: 10.2307/793310

Brück, T., and d'Errico, M. (2019). Reprint of: food security and violent conflict: introduction to the special issue. World Dev. 119, 145–149. doi: 10.1016/j.worlddev.2019.04.006

Brynjolfsson, E., and McAfee, A. (2011). Race Against the Machine: How the Digital Revolution Is Accelerating Innovation, Driving Productivity, and Irreversibly Transforming Employment and the Economy.

Cohen, J. E. (2011). Copyright as property in the post-industrial economy: a research agenda. Wisconsin Law Rev. 141–165.

Cowen, T. (2013, September 1). Who Will Prosper in the New World. N.Y. Times. SR5.

Deighton, J., and Kornfeld, L. (2021). The Economic Impact of the Market-Making Internet.

Delaney, K. J. (2017). The Robot That Takes Your Job Should Pay Taxes, Says Bill Gates. Quartz. Available online at: https://qz.com/911968/bill-gates-the-robot-that-takes-your-job-should-pay-taxes/

DeLong, J. B., and Summers, L. H. (2001). The 'new economy': background, historical perspective, questions, and speculations. Econ. Rev. 86, 29.

Denmark (2022). Working in Denmark, the Danish Labour Market. Available online at: https://denmark.dk/society-and-business/the-danish-labour-market

Desai, D. R. (2011). "Individual branding: how the rise of individual creation and distribution of cultural products confuses the intellectual property system," in Creativity, Law, and Entrepreneurship, eds S. Ghosh and R. Malloy (Elgar Publishing). doi: 10.2139/ssrn.1460950

Desai, D. R. (2012). From trademarks to brands. Fla. Law Rev. 64, 981–1044.

Desai, D. R. (2014). The new steam: on digitization, decentralization, and disruption. Hastings Law J. 65, 1469–1482.

Desai, D. R., and Magliocca, G. N. (2014). Patents, meet Napster: 3D printing and the digitization of things. Georgetown Law J. 102, 1691–1720.

Desai, D. R., and Waller, S. (2010). Brands, competition, and the law. BYU Law Rev. 1425–1500.

Dimitri, C., Effland, A., and Conklin, N. C. (2005). The 20th Century Transformation of US Agriculture and Farm Policy. U.S. Department of Agriculture. Available online at: https://www.ers.usda.gov/webdocs/publications/44197/13566_eib3_1_.pdf

Doctorow, C. (2006). Giving It Away. Forbes. Available online at: https://www.forbes.com/2006/11/30/cory-doctorow-copyright-tech-media_cz_cd_books06_1201doctorow.html?sh=a0b850c78c20

Edmonds, R. (2012). "Newspapers: by the numbers," in The State of the News Media 2012: An Annual Report on American Journalism, eds A. Mitchell and T. Rosenstiel (Pew Research Center Project for Excellence in Journalism). Available online at: http://stateofthemedia.org

Egan, M. (2022). War-Fueled Food Inflation Will Cause Social Unrest, UN Official Says. CNN. Available online at: https://www.cnn.com/2022/05/13/economy/inflation-food-unrest-un/index.html

Engstrom, N. F. (2013). 3-D printing and product liability: identifying the obstacles. Univ. Pennsylvania Law Rev. Online 162, 35–41.

Evans, J. (2014). VCs on Inequality, Unemployment, and Our Uncertain Future. TechCrunch. Available online at: http://techcrunch.com/2014/02/15/vcs-on-inequality-unemployment-and-our-uncertain-future/

Friedman, T. (2022, May 17). We Keep Falling for the Same Big Talk. N.Y. Times. Available online at: https://www.nytimes.com/2022/05/17/opinion/russian-oil-green-energy.html

Friedman, T. L. (2014, January 12). If I Had a Hammer. N.Y. Times. SR11.

Frischmann, B. M., and Lemley, M. A. (2007). Spillovers. Colum. Law Rev. 107, 257–302.

Ghosh, S. (2008). Decoding and recoding natural monopoly, deregulation, and intellectual property. Univ. Ill. Law Rev. 1125–1184.

Ginsburg, J. C. (1995). Putting cars on the information superhighway: authors, exploiters, and copyright in cyberspace. Colum. Law Rev. 95, 1466–1499. doi: 10.2307/1123133

Graeber, D., and Wengrow, D. (2021, November 7). Ancient History Shows How We Can Create a More Equal World. N.Y. Times.

Greene, T. (2022, May 16). DeepMind Researcher Claims New 'Gato' AI Could Lead to AGI, Says 'The Game Is Over!'. TheNextWeb. Available online at: https://thenextweb.com/news/deepmind-researcher-claims-new-gato-ai-could-lead-to-agi-says-game-is-over

Heacock, D. (2022). States with the Largest Increase in Renewable Energy Production. Filterbuy. Available online at: https://filterbuy.com/resources/states-largest-increase-renewable-energy/

Heck, S., Rogers, M., and Carroll, P. (2014). Resource Revolution: How to Capture the Biggest Business Opportunity in a Century. Houghton Mifflin Harcourt.

Hemphill, C. S., and Suk, J. (2009). The Law, Culture, and Economics of Fashion. Available online at: https://cyber.harvard.edu/people/tfisher/Suk%20fashion%202-5.pdf

Hummelgaard, P. (2020). The Jobs Forecast Is Unsettled. It's Time for a Reskilling Revolution. World Economic Forum. Available online at: https://www.weforum.org/agenda/2020/01/denmark-reskilling-revolution-future-of-work/

Johns Hopkins Center for a Livable Future (2022). History of Agriculture. Available online at: https://www.foodsystemprimer.org/food-production/history-of-agriculture/

Johnson, S. (1884). Johnsonia: Life, Opinions, and Table-Talk of Doctor Johnson.

Keynes, J. M. (2010). "Economic possibilities for our grandchildren," in Essays in Persuasion (London: Palgrave Macmillan), 321–332. doi: 10.1007/978-1-349-59072-8_25

Kovacs-Ondrejkovic, O., Strack, R., Pierre, A., Gobernado, A. L., and Lyle, E. (2019). Decoding Global Trends in Upskilling and Reskilling. Boston Consulting Group.

Krieger, L. M. (2016, January 11). Biologist's Gene-Editing Kit Lets Do-It-Yourselfers Play God at the Kitchen Table. San Jose Mercury News. Available online at: http://www.mercurynews.com/science/ci_29372452/bay-area-biologists-gene-editing-kit-lets-do

Landes, W. M., and Posner, R. A. (2003). The Economic Structure of Intellectual Property Law. Harvard University Press. doi: 10.4159/9780674039919

Lemley, M. A. (1997). Economics of improvement in intellectual property law. Tex. Law Rev. 75, 989–1084.

Lemley, M. A. (1999). The modern Lanham Act and the death of common sense. Yale LJ. 108, 1687–1715. doi: 10.2307/797447

Lemley, M. A. (2011). Is the sky falling on the content industries? J. Telecomm. High Tech. Law 9, 125–135.

Lemley, M. A. (2015). IP in a World without Scarcity. NYU Law Rev. 90, 460–515. doi: 10.31235/osf.io/3vy5a

Lemley, M. A., and Casey, B. (2019). Remedies for robots. Univ. Chicago Law Rev. 86, 1311–1396.

Lemley, M. A., and McKenna, M. P. (2011). Is Pepsi really a substitute for Coke? Market definition in antitrust and IP. Georgetown Law J. 100, 2055–2117. doi: 10.2139/ssrn.2038039

Lessig, L. (2008). Remix: Making Art and Commerce Thrive in the Hybrid Economy. Penguin. doi: 10.5040/9781849662505

Lowrance, W. W. (1986). Modern Science and Human Values, Vol. 24. New York, NY: Oxford University Press.

Lunney, G. S. Jr. (2012). Copyright's Mercantilist Turn: Do We Need More Copyright or Less? Tulane Public Law Research Paper. doi: 10.2139/ssrn.2158874

Marinescu, I. (2019). Summary: Universal Basic Income. Available online at: https://repository.upenn.edu/pennwhartonppi_bschool/11/

McClintock, P. (2021, March 14). Global Box Office Down 50 Percent From Pre-Pandemic Times: MPA Report. Hollywood Reporter. Available online at: https://www.hollywoodreporter.com/movies/movie-news/2021-global-box-office-pandemic-1235110511/

McKenzie, S. (2007). A Brief History of Agriculture and Food Production: The Rise of "Industrial Agriculture". Johns Hopkins Center for a Livable Future. Available online at: https://resources.saylor.org/wwwresources/archived/site/wp-content/uploads/2015/07/ENVS203-7.3.1-ShawnMackenzie-ABriefHistoryOfAgricultureandFoodProduction-CCBYNCSA.pdf

Menell, P. S., Merges, R. P., Lemley, M. A., and Balganesh, S. (2022). Intellectual Property in the New Technological Age, 2022 Edition. Clause 8 Publishing.

Mullainathan, S., and Shafir, E. (2013). Scarcity: Why Having Too Little Means So Much. Macmillan.

National Academies of Sciences, Engineering, and Medicine (2016). Gene Drives on the Horizon: Advancing Science, Navigating Uncertainty, and Aligning Research With Public Values.

Newcomb, T. (2022). China is 3D Printing a Massive 590-Foot-Tall Dam… and Constructing It Without Humans. Popular Mechanics. Available online at: https://www.popularmechanics.com/technology/infrastructure/a39956927/china-is-3d-printing-a-590-foot-tall-dam/

Orsini, L. (2014). 10 Crazy Things 3D Printers Can Make Today. ReadWrite. Available online at: http://readwrite.com/2014/02/14/3d-printing-printers-projects-applications-prints

Overton, M. (1996). Agricultural Revolution in England: The Transformation of the Agrarian Economy 1500–1850. Cambridge University Press. doi: 10.1017/CBO9780511607967

Pélissié du Rausas, M., Manyika, J., Hazan, E., Bughin, J., Chui, M., and Said, R. (2011). Internet Matters: The Net's Sweeping Impact on Growth, Jobs, and Prosperity. McKinsey Global Institute.

Petersen, A. H. (2020). Hollowing Out of Middle Class? Vox. Available online at: https://www.vox.com/the-goods/22166381/hollow-middle-class-american-dream

Piketty, T. (2014). "Capital in the twenty-first century," in Capital in the Twenty-First Century (Cambridge, MA: Harvard University Press), 1–15. doi: 10.4159/9780674369542-014

Plunkett, L. (2022, April 13). Guy Buys NFT for $2.9 Million, Asks for $48 Million, Is Offered $3600.

Raustiala, K., and Sprigman, C. (2006). The piracy paradox: innovation and intellectual property in fashion design. Virginia Law Rev. 92, 1687–1777.

Richter, F. (2022). Streaming Drives Global Music Industry Revenue. Statista. Available online at: https://www.statista.com/chart/4713/global-recorded-music-industry-revenues/

Rifkin, J. (1996). End of Work. North Hollywood, CA: Pacifica Radio Archives.

Rifkin, J. (2014). The Zero Marginal Cost Society: The Internet of Things, the Collaborative Commons, and the Eclipse of Capitalism. St. Martin's Press.

Rimmer, M. (2009). "Wikipedia, collective authorship, and the politics of knowledge," in Intellectual Property Policy Reform: Fostering Innovation and Development, eds C. Arup and W. Van Caenegem (Cheltenham: Edward Elgar), 172–198. doi: 10.4337/9781848449039.00016

Robbins, L. (2007). An Essay on the Nature and Significance of Economic Science. Ludwig von Mises Institute.

Roehl, R., and Varian, H. R. (2001). Circulating libraries and video rental stores. First Monday 6. doi: 10.5210/fm.v6i5.854

Rotman, D. (2013). How technology is destroying jobs. Technol. Rev. 16, 28–35.

Salami, A., and Seamans, R. (2014). The Effect of the Internet on Newspaper Readability. Working papers. NET Institute.

Samuel, S. (2020). Everywhere Basic Income Has Been Tried, in One Map. Vox. Available online at: https://www.vox.com/future-perfect/2020/2/19/21112570/universal-basic-income-ubi-map

Samuelson, P. A., and Nordhaus, W. D. (2010). Economics. Boston, MA: McGraw Hill.

Scherer, F. M., and Ross, D. (1990). Industrial Market Structure and Economic Performance. University of Illinois at Urbana-Champaign's Academy for Entrepreneurial Leadership Historical Research Reference in Entrepreneurship.

Schwarzenegger, A. (2022, January 18). Don't Let California Regulators Raise the Cost of Solar. N.Y. Times.

Shabad, R. (2022, May 6). Biden to Visit Ohio to Tout New Manufacturing Capabilities Like 3D Printing. NBC News. Available online at: https://www.nbcnews.com/politics/white-house/biden-visit-ohio-tout-new-manufacturing-capabilities-3d-printing-rcna27624

Smialek, J., and Swanson, A. (2022, May 3). The Era of Cheap and Plenty May Be Ending. N.Y. Times.

Storrow, B. (2022). In a First, Wind Generation Tops Coal and Nuclear Power for a Day. Scientific American. Available online at: https://www.scientificamerican.com/article/in-a-first-wind-generation-tops-coal-and-nuclear-power-for-a-day/

Stumpf, R. (2022). Honda Orders Big Takedown of Honda-Related 3D Printing Models From Maker Communities. The Drive. Available online at: https://www.thedrive.com/news/honda-orders-big-takedown-of-honda-related-3d-printing-models-from-maker-communities

Thibodeau, P. (2014). One in Three Jobs Will Be Taken by Software or Robots by 2025. Computerworld. Available online at: http://www.computerworld.com/article/2691607/one-in-three-jobs-will-be-taken-by-software-or-robots-by-2025.html

Travis, H. (2015). Myths of the internet as the death of old media. AIPLA Q. J. 43, 1–53.

U.S. Energy Information Administration (2022). FAQ: How Much of U.S. Energy Consumption and Electricity Generation Comes from Renewable Energy Sources? Available online at: https://www.eia.gov/tools/faqs/faq.php?id=92&t=4

Van Parijs, P. (2014). A Basic Income for All. Boston Review. Available online at: https://bostonreview.net/forum/ubi-van-parijs/

Vilsack, T. (2022). Vilsack: Food Seen as One Key to Happenings in Ukraine, Comments to National 4H Convention. USDA. Available online at: https://www.usda.gov/media/radio/daily-newsline/2022-03-21/vilsack-food-seen-one-key-happenings-ukraine

Vincent, J. (2022). All these Images were Generated by Google's Latest Text-to-Image AI. The Verge. Available online at: https://www.theverge.com/2022/5/24/23139297/google-imagen-text-to-image-ai-system-examples-paper

Voulpiotis, F. (2019). Father and Son 3D Print Lamborghini in Their Garage. 3DNatives. Available online at: https://www.3dnatives.com/en/3d-printed-lamborghini-120720195/#!

Waldfogel, J. (2012). Copyright protection, technological change, and the quality of new products: evidence from recorded music since Napster. J. Law Econ. 55, 715–740. doi: 10.1086/665824

Waldfogel, J., and Reimers, I. (2015). Storming the gatekeepers: digital disintermediation in the market for books. Inform. Econ. Policy 31, 47–58. doi: 10.1016/j.infoecopol.2015.02.001

Wehrman, B. (2022). Wind, Solar and Other Renewables Cover 54% of Germany's Power Consumption in Early 2022. Clean Energy Wire. Available online at: https://www.cleanenergywire.org/news/wind-solar-and-other-renewables-cover-54-germanys-power-consumption-early-2022

Welton, S. (2021). Rethinking grid governance for the climate change era. California Law Rev. 109, 209–275.

Werbach, K. (2014). The Battle for Marginal-Cost Connectivity. HuffPost. Available online at: http://www.huffingtonpost.com/kevin-werbach/the-battle-for-marginalco_b_5110512.html

World Economic Forum (2019). Towards a Reskilling Revolution: Industry-Led Action for the Future of Work. Available online at: https://www.weforum.org/whitepapers/towards-a-reskilling-revolution-industry-led-action-for-the-future-of-work

Wu, T. (2017). "Law and economics of information," in The Oxford Handbook of Law and Economics: Volume 2: Private and Commercial Law (Oxford: Oxford University Press), 239. doi: 10.1093/oxfordhb/9780199684205.013.020

Yoo, C. S. (2006). Copyright and public good economics: a misunderstood relation. Univ. Pennsylvania Law Rev. 155, 635–715.
2023-01-25T00:00:00
2023/01/25
https://www.frontiersin.org/journals/research-metrics-and-analytics/articles/10.3389/frma.2022.1104460/full
[ { "date": "2022/12/01", "position": 55, "query": "universal basic income AI" } ]
The Schumpeterian Consensus: The New Logic of Global ...
The Schumpeterian Consensus: The New Logic of Global Social Policy to Face Digital Transformation
https://www.cambridge.org
[ "Vicente Silva" ]
by V Silva · 2024 · Cited by 8 — ... universal basic income. Even though in the 2010s there was international ... Artificial Intelligence (AI) from 2021. It encouraged member states to ...
Digitalisation emerged as a central problem in global social governance in the past decade. ‘Digital transformation’ was expected to bring new social risks, requiring a redesign of the welfare state. This study examines the social policy responses of international actors on the digitalisation agenda in the 2010s and early 2020s. Inspired by sociological institutionalism, it shows different trajectories followed by UN agencies, the OECD and the World Bank in terms of addressing the social implications of this transformation. Despite these divergent organisational agendas, the article reveals the emergence of a new transnational policy paradigm, the ‘Schumpeterian consensus’, overcoming the antagonism between ‘economic’ and ‘social’ institutions from previous decades. In this paradigm, the ‘Schumpeterian investment state’ is seen as a mediator between the creative and destructive potential of technological change. Its social model encourages governments to invest in skills, universal social protection and flexicurity for the digital era.
2024-10-14T00:00:00
2024/10/14
https://www.cambridge.org/core/journals/journal-of-social-policy/article/schumpeterian-consensus-the-new-logic-of-global-social-policy-to-face-digital-transformation/15EA7558AF2A1179CD911EDD488BBEAA
[ { "date": "2022/12/01", "position": 66, "query": "universal basic income AI" } ]
Prudence and Loyalty in Selecting Plan Investments ...
Federal Register :: Request Access
https://www.federalregister.gov
[]
Title I of the Employee Retirement Income Security Act of 1974 (ERISA) establishes minimum standards that govern the operation of private-sector employee ...
2022-12-01T00:00:00
2022/12/01
https://www.federalregister.gov/documents/2022/12/01/2022-25783/prudence-and-loyalty-in-selecting-plan-investments-and-exercising-shareholder-rights
[ { "date": "2022/12/01", "position": 86, "query": "universal basic income AI" } ]
Will AI end humanity, answered by AI
Will AI end humanity, answered by AI👀
https://medium.com
[ "Daniel Bailey", "Tech Blogger" ]
Some potential risks and concerns associated with the development and deployment of AI include job loss and economic disruption due to automation, biased or ...
And other interesting questions answered by AI… Image credit: Avengers: Age of Ultron. As the title suggests, this article is dedicated to asking AI a few interesting questions, including: Will AI end humanity? The table of contents lists the exact questions asked, and the text is the answer ChatGPT provided. Enjoy! This follows on from my recent blog post explaining how AI actually works: many articles tell us what AI is, but never explain it in a simplified context. If you have not already, check that blog out here. Table of contents:
If you could, how would you improve yourself 🤖
Should we be scared of AI 😱
Is AI the future or just a current trend 🧠
What answers are we better off without 🤐
Will AI end humanity 😲
Give me some unique, but interesting questions to ask AI 💻
If you could, how would you improve yourself 🤖 Image credit: Seek PNG. As an AI language model, I do not have the ability to make changes to my own programming or improve…
2023-05-16T00:00:00
2023/05/16
https://medium.com/codex/asking-ai-if-ai-will-end-humanity-db589946c51b
[ { "date": "2022/12/01", "position": 21, "query": "AI economic disruption" } ]
Trade Networks amid Disruption: Promoting Resilience ...
Trade Networks amid Disruption: Promoting Resilience through Digital Trade Facilitation
https://www.apec.org
[]
This policy brief explores the role of DTF, or the application of digital tools to facilitate trade, in the wake of the COVID-19 pandemic.
Description The fragmented nature of global value chains (GVCs), which defines present-day trade networks, makes them vulnerable to challenges. The measures rolled out by economies to contain the COVID-19 pandemic, for example, led to ports being shut, goods unable to be loaded/unloaded, and factories having difficulty accessing raw materials. Such issues have brought forth questions of how GVCs could be strengthened and made more resilient. These developments also have made it apparent that, in improving trade facilitation, minimising trade costs should not be the only consideration; trade resilience is equally (if not more) important. One way of improving trade resiliency is to utilise digital trade facilitation (DTF). This policy brief explores the role of DTF, or the application of digital tools to facilitate trade, in the wake of the COVID-19 pandemic. It highlights the possible ways that DTF could help promote stronger and more resilient value chains, and identifies the key elements as well as policy approaches that policymakers could consider to successfully implement such digital solutions.
2022-12-01T00:00:00
https://www.apec.org/publications/2022/12/trade-networks-amid-disruption-promoting-resilience-through-digital-trade-facilitation
[ { "date": "2022/12/01", "position": 30, "query": "AI economic disruption" } ]
Glimpse into Artificial Intelligence (AI) & its Future
Glimpse into Artificial Intelligence (AI) & its Future
https://www.linkedin.com
[]
One of the main concerns is the potential for AI to cause widespread job loss, as machines become capable of performing tasks that were previously done by ...
Artificial intelligence (AI) has been a hot topic in recent years, with many experts predicting that it will revolutionise various industries and change the way we live our lives. But what exactly is AI, and what does its future hold? Artificial intelligence is an area of computer science that focuses on the creation of intelligent machines that can think and act like humans. One of the most exciting aspects of AI is its ability to learn and adapt. Traditional computer programs are based on fixed rules and instructions that are programmed in advance by humans. AI, on the other hand, uses algorithms and machine learning techniques to enable a computer to learn from data and make decisions without being explicitly programmed. This allows AI systems to improve their performance over time and to tackle complex tasks that would be impossible for humans to do manually. AI has great potential to improve our quality of life. For example, AI-powered assistants can help us to manage our daily tasks and schedules, freeing up more time for us to do the things we enjoy. AI can also help us to stay healthy by monitoring our vital signs and providing personalised health advice. One of the most promising applications of AI is in the field of medicine. AI-powered systems are already being used to analyse medical images and make diagnoses, and they are showing remarkable accuracy and speed. In the future, AI could help doctors to identify diseases earlier and to develop personalised treatment plans for individual patients. Another area where AI is making waves is the world of finance. AI-powered trading algorithms are being used by many financial institutions to make high-speed decisions and to identify trends and patterns in the market. This allows them to make more accurate predictions and smarter investment decisions. Along with the many benefits that AI can provide, there are also concerns about its impact on society. One of the main concerns is the potential for AI to cause widespread job loss as machines become capable of performing tasks that were previously done by humans. This could lead to economic disruption and social inequality, as some people are left without work. Despite the concerns, the future of AI looks incredibly bright. As the technology continues to advance, we are likely to see more and more amazing applications of AI in a wide range of fields. Whether it's improving healthcare, transforming the way we work, or helping us to understand and protect the planet, AI has the potential to be a game-changer for humanity.
2022-12-01T00:00:00
https://www.linkedin.com/pulse/glimpse-artificial-intelligence-ai-its-future-aravind-raghunathan
[ { "date": "2022/12/01", "position": 32, "query": "AI economic disruption" } ]
How Will We Find Meaning and Purpose in a Future ...
The heart of the internet
https://www.reddit.com
[]
It is certainly true that automation and AI will have a significant impact on the job market and that many jobs that are currently performed by humans may be ...
It's becoming increasingly clear that AI and automation will have a major impact on the job market and society as a whole, as they will take over a significant number of jobs currently performed by humans, which could lead to widespread unemployment and economic disruption. I actually think that this could be a good thing, because we would only be replacing boring and meaningless jobs that don't deserve to be done by creative human beings. With universal basic income, we could potentially have all the time in the world to pursue our hobbies and passions. But at the same time, work can be a big part of our identity and sense of purpose. What jobs, professions, and activities can you think of that would be safe from automation for a very long time? I personally think that we are social beings, which is encoded in our DNA, so even if we could automate some jobs, we will always prefer talking to a real doctor or partying with a real DJ, even if there were a better AI version. What do you think?
2022-12-01T00:00:00
https://www.reddit.com/r/ChatGPT/comments/zsiwlv/discussion_how_will_we_find_meaning_and_purpose/
[ { "date": "2022/12/01", "position": 38, "query": "AI economic disruption" } ]
A Fintech-Based Zakat Model Using Artificial Intelligence
A Fintech-Based Zakat Model Using Artificial Intelligence
https://ideas.repec.org
[ "Mustafa Raza Rabbani", "M. Kabir Hassan", "Shahnawaz Khan", "Aishath Muneeza", "University Of Bahrain", "University Of New", "Bahrain Polytechnic", "International Centre For Education In Islamic Finance", "Author", "Listed" ]
by MR Rabbani · 2022 · Cited by 11 — ... economic disruption caused by the pandemic, it needs immediate attention from the economists and policymakers. Islamic finance has many financial ...
The COVID-19 pandemic and its associated lockdowns have created a mammoth economic cost for economies around the globe. The policy response to the crisis must be fast, secure, and sustainable. The crisis has also created astonishing solidarity among people, with every element of society, irrespective of race, caste, creed, or religion, working together to save humanity. Overcoming the financial and economic disruption caused by the pandemic needs immediate attention from economists and policymakers. Islamic finance has many financial instruments for helping the poor by alleviating poverty, distributing income fairly, and improving social welfare; they comprise Zakat, Sadaqat, Awqaf, etc. Zakat is a compulsory annual donation from rich and able Muslims that must be given to the poor and needy within a year. This immediate benefit of Zakat is well suited to tackling an economic crisis such as the one caused by COVID-19. Islamic finance, in combination with Fintech-based technologies like AI, blockchain, machine learning, and natural language processing, can work wonders in achieving Islamic finance objectives. The present study proposes an AI-based Islamic Fintech model for helping the needy and poor affected by COVID-19. The model uses AI and NLP for the collection and dissemination of Zakat money to needy, poor, COVID-affected, and vulnerable sections of society.
2022-07-14T00:00:00
2022/07/14
https://ideas.repec.org/h/spr/sprchp/978-3-031-14941-2_3.html
[ { "date": "2022/12/01", "position": 45, "query": "AI economic disruption" } ]
Cooperatives and the Use of Artificial Intelligence
Cooperatives and the Use of Artificial Intelligence: A Critical View
https://www.mdpi.com
[ "Ramos", "Maria Elisabete", "Azevedo", "Meira", "Curado Malta", "Maria Elisabete Ramos", "Ana Azevedo", "Deolinda Meira", "Mariana Curado Malta" ]
by ME Ramos · 2022 · Cited by 12 — ... disruption in the functioning of the status quo; it means taking people ... In Antunes, Artificial Intelligence in the Economic Sector Prevention and ...
The application of AI in the decision-making process of Cooperatives raises the question of who makes the decision and how the decision is made. In this section, we critically analyse these issues based on the abovementioned concepts and ideas in this article. At the end of this section, we present a framework for the use of DSS in Cooperatives, contributing to the reflection on the introduction of AI in this type of organisation.

4.1. A Critical View on the Use of Artificial Intelligence in Cooperatives

The disruptive nature of AI-based DSS lies in the circumstance that they do not work according to deductive logic but rather draw inferences from a set of information (p. 19 [46]). In particular, ML algorithms learn from the data provided or from previous decisions made by themselves. One of the requirements for this result to be possible is that big data is available. Big data renders the data opaque and, moreover, enables algorithms to relate data that is itself unrelated (p. 25 [46]). Uncontrolled inferences are created [47]. For example, in the insurance industry, an English newspaper showed situations of algorithmic discrimination in which the insurance company charged a higher premium to policyholders named Mohamed compared to policyholders named Smith [48]. In fact, in the realm of big data, it is already known that closed, outdated, irrelevant, false, or falsified data can be called on for decision-making and used by AI. Thus, the conditions are created for AI to replicate or amplify past discriminations that historical progress tried to eliminate (p. 33 [48, 49]). AI can be used to aid the functioning of the management body and representation of the cooperative; think, for example, about the potential of AI in the credit business [50] or in the field of high-frequency trading [51]. Thus, the decision to introduce AI tools in Cooperatives may, on the one hand, constitute a competitive advantage in the market and, simultaneously, represent a form of attack on cooperative values and principles. So AI, as long as it supports the decision of the management body (i.e., use of DSS) or makes decisions (i.e., use of ADS), is relevant for the purposes of the duties of care of cooperative directors [52, 53]. The duties of care that bind the directors require that the management be informed about the capabilities and risks of using AI [35], so that the use of DSS tools might be safeguarded. This implementation of the duties of diligence and care can be fulfilled if the management body includes members with technological skills, creating, for example, technology committees [54]. As there is no legal rule requiring a technology committee or other structure at the board of directors or executive board level specifically dedicated to managing the challenges of AI, the directors' duties of care and diligence require them to seek accurate information on the potential risks of the AI used. In certain circumstances, AI constitutes a powerful tool for competitive differentiation due to the technological advances it incorporates and the efficiency gains it provides. Even so, such gains do not always correspond to a duty for management body members to adopt AI tools.
It is not surprising that, under certain circumstances, a duty not to adopt AI is constituted, considering, for example, the risk of decisions that violate cooperative values and principles, such as discriminatory decisions. Article 46, 1, b) of the PCC constitutes a sufficient legal-cooperative ground to support the legal duty of not adopting AI. This has the consequence of making the decision not to adopt AI lawful and not liable to trigger the civil liability of administrators for, e.g., loss of business opportunities [55]. Indeed, digital literacy issues are crucial. Suppose the cooperative is constitutionally and legally prevented from recruiting non-cooperators or non-investor members to the management and representative bodies [6]. In that case, it will be necessary to provide the organisation with people with adequate skills who can produce the appropriate information for decision-making on the introduction of AI tools. Currently, the PCC prevents an algorithm from being admitted as a cooperating or investing member, even if it were attributed the quality of an "e-person". Legal persons can indeed be admitted as cooperators or investor members, but it seems to us that we cannot assimilate an algorithm to the status of a legal person. For one thing, legal persons have a collective legal personality at the current stage of the law, and algorithms do not. However, the central problem is not, in fact, the non-attribution of electronic legal personality to algorithms. The problem is more profound and has to do with a difference in nature. While the legal person is still an instrument at the service of humans for their purposes and is managed by humans, the algorithm is in a technical position to take decisions autonomously. The expression "AlgoBoards" means the composition of the board of directors by algorithms (p. 40 [54]). Furthermore, this step is not only the future; it is already present in companies [56]. Consider the example of ALICIA T., the name given to the AI appointed to the management team of the Finnish software company Tieto (in 2017, in the course of the "Future Investment Initiative", Saudi Arabia declared Sophia, a humanoid robot, a Saudi citizen. Cf. https://www.washingtonpost.com/news/innovations/wp/2017/10/29/saudi-arabia-which-denies-women-equal-rights-makes-a-robot-a-citizen/ (accessed on 6 November 2022)). Additionally, in the USA, article 141 of the Delaware General Corporation Law, entitled "Board of directors; powers; number, qualifications, terms and quorum; committees; classes of directors; nonstock corporations; reliance upon books; action without meeting; removal", admits the presence of AI in the management body when it prescribes that: "(a) The business and affairs of every corporation organized under this chapter shall be managed by or under the direction of a board of directors, except as may be otherwise provided in this chapter or in its certificate of incorporation […]." In this technological future, some anticipate that in an environment dominated by technology, humans are unprepared to monitor the algorithms that learn on their own. Therefore, humans may not accept being administrators [54]. It is added that AI is more effective than humans in performing governing body functions because algorithms better process information without conflicts of interest [54].
The governing body should be aware that algorithms are not necessarily impartial, as so-called "conflict coding" has been identified: the algorithm provider will design the algorithm so that the product is accepted by the governing body, and not necessarily in the interest of the cooperative and its cooperators. Finally, despite the theory (see Russell and Norvig [42]) that says that AI should provide transparency, among other characteristics, algorithms do not guarantee transparency [54], especially those of the DL type. Despite having better performance, DL algorithms should never be used in cooperatives since they do not provide, for now, the rationale for their decisions. ADS that respect the "interpretability" characteristic are the type of DSS that should be used in Cooperatives, since they are not black boxes. It is also argued that humans are better than AI at complex interactions with humans [54]. Algorithms do not have the capacity to reconcile divergent views [54]. Algorithms do not feel empathy. Nor can they have a conscience and integrate ethics into their decisions. These are all issues that enter into the decisions of cooperatives, since cooperative members believe in the ethical values of honesty, openness, social responsibility and caring for others. Nevertheless, the future is still human, particularly in cooperatives. The Portuguese Cooperative Law prevents algorithms from fully or partially replacing human members in the bodies of cooperatives or companies and requires that members of the bodies be cooperators or investor members. In either case, the members of the bodies must be natural or legal persons. Algorithms may never sit on these bodies. AI technology has several limitations that should not be ignored when deciding to incorporate it into the cooperative organisation (p. 40 [54]), namely: data dependency (distortion of past data, learning from "bad examples", insufficient data, the unpredictability of human behaviours); the indispensability of human judgement; conflicts with human ethical standards; and the incompleteness of the "legal environment", which precludes a yes/no answer. For these reasons, the decision on the adoption and incorporation of AI in the cooperative (even if only as an auxiliary decision-making tool, i.e., DSS) requires the management body to perform the appropriate due diligence in order to comply with the duty of information, according to the standard of the judicious and orderly manager, incumbent on the members of the management body and of the representation of the cooperative [57]. Moreover, the risk-based approach favoured by the EU's proposal for a Regulation on AI can be instrumental. This proposal acknowledges that "In addition to the many beneficial uses of artificial intelligence, such technology can be misused and grant new and powerful tools for manipulative, exploitative and social control practices. Such practices are harmful and should be prohibited as they are contrary to the EU values such as human dignity, freedom, equality, democracy, and the rule of law.
They are also contrary to fundamental EU rights, including the right to non-discrimination, personal data and privacy protection, and the child’s rights” (Proposal for a Regulation of the European Parliament and of the Council establishing harmonised rules on artificial intelligence (Artificial Intelligence Regulation) and amending certain Union legislative acts (Brussels, 21.4.2021 COM (2021) 206 final 2021/0106(COD), recital 15). Thus, “the use of certain AI systems designed to distort human behaviour, which are likely to cause physical or psychological harm, should be prohibited. Such AI systems use subliminal components that are not detectable by humans or exploit vulnerabilities of children and adults associated with their age and physical or mental disabilities. These systems intend to substantially distort a person’s behaviour in a way that causes or is likely to cause harm to that or another person” (Proposal for a Regulation of the European Parliament and of the Council establishing harmonized rules on artificial intelligence (Artificial Intelligence Regulation) and amending certain Union legislative acts (Brussels, 21.4.2021 COM (2021) 206 final 2021/0106(COD), recital 16). The depth of the duty of information required from the management and representative body of the cooperative will depend on the technology adopted and the risks it poses to the cooperative values and principles. It seems crucial to us that the management and representative body of the cooperative takes care to know, by informing itself, whether the AI used is in line with the cooperative values and principles.
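To make the contrast above concrete, the following is a minimal sketch of an interpretable DSS of the kind the authors favour. It is an illustration of ours, not taken from the article: the loan-screening feature names and the toy data are hypothetical, and scikit-learn is simply one convenient library for fitting a shallow decision tree whose complete rule set can be printed for a governing body to audit.

# Minimal sketch of an interpretable decision-support tool (hypothetical data).
from sklearn.tree import DecisionTreeClassifier, export_text

feature_names = ["income", "membership_years", "outstanding_debt"]
X = [
    [30000, 1, 5000],
    [55000, 4, 2000],
    [42000, 7, 9000],
    [75000, 10, 1000],
    [28000, 2, 12000],
    [60000, 6, 3000],
]
y = [0, 1, 0, 1, 0, 1]  # 0 = refer to human review, 1 = recommend approval

# A shallow tree keeps every decision path short enough to audit.
model = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

# export_text prints the full rule set, i.e., the rationale behind each
# recommendation, which a deep-learning black box could not provide.
print(export_text(model, feature_names=feature_names))

A deep network might classify the same applications more accurately, but it could not print the explicit rule set above, and that inspectable rationale is precisely what the duty of information discussed here asks the management body to be able to provide.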
2023-01-14T00:00:00
2023/01/14
https://www.mdpi.com/2071-1050/15/1/329
[ { "date": "2022/12/01", "position": 46, "query": "AI economic disruption" } ]
Beijing's central role in global artificial intelligence research
Beijing’s central role in global artificial intelligence research
https://www.nature.com
[ "Alshebli", "Social Science Division", "New York University Abu Dhabi", "Abu Dhabi", "Cheng", "Waniek", "Computer Science", "Science Division", "Jagannathan", "Engineering Division" ]
by B AlShebli · 2022 · Cited by 12 — ... economic constraints. Whether AI knowledge flows unhindered in practice is ... 3b depicts the same plot but for development instead of disruption.
In the Science of Science literature, several studies use the Microsoft Academic Graph (MAG) dataset [50, 74–80], while several others use the Web of Science (WoS) dataset [81–87], and some studies even use both datasets [88]. We chose MAG since it provides the strongest emphasis on comprehensiveness [89]. We downloaded the Microsoft Academic Graph dataset [90] on September 12th, 2020. This dataset includes records of scientific papers specifying the publication date, the publication venue, the publication discipline, the authors' names, the authors' affiliations, and the network of citations. Additionally, the dataset specifies the publication type, which includes books, book chapters, conference papers, journal papers, theses, patents, datasets, and repositories. In our study, we only consider publications between 1990 and 2019 (inclusive) whose discipline is specified as AI according to MAG, and whose type is specified as either a conference or a journal paper. Following the convention that the last author is typically the principal investigator and "the head of the lab that hosted most of the research" [91], throughout our study, the city in which the last author's affiliation resides is considered to be the city from which the paper has originated. To this end, for any given affiliation, the corresponding city was identified using the Google Maps Geocoding API [92]. This yielded a dataset of 2.2 million AI papers that are classified into cities. To quantify the scientific output of each city, we used three outcome measures. The first is impact—the number of citations that papers from the city have accumulated within the first two years post publication. Compared to other studies that consider five years [74] or ten years [93] post publication, our measure allows us to analyze the impact of more recent papers, which is particularly important when studying a rapidly growing field such as AI. Our second outcome measure is productivity—the number of AI papers that the city has produced each year. The third outcome measure is the number of AI scientists affiliated with an institution located in the city under consideration. Figure 1a–c show, respectively, how the impact, productivity, and number of AI scientists have changed over time for the 20 cities that were most impactful in AI in 2017. Throughout the article, we refer to these as the "top 20 cities." We classify the cities in the U.S. or Europe as "Western," and classify the remaining cities as "Eastern." The right-hand side of each subfigure shows how the cities' ranking has changed over time. Unlike productivity and number of AI scientists, which are depicted up to 2019, impact is only depicted up to 2017 to allow citations to accumulate for two whole years post publication. Looking at Fig. 1, one can spot Beijing as a clear outlier; it is not only the most impactful city since 2007, but also the most productive since 2002, and the one that has accommodated the largest number of AI scientists for the past two decades. The difference between Beijing and other cities grew to a remarkable level in recent years. In 2017, for instance, Beijing received twice as many citations as the second most impactful city (Mountain View in California, U.S.). In 2019, Beijing produced three times as many papers as the second most productive city (Shanghai) and housed three times as many AI scientists as the city that followed in the ranking (Wuhan). Supplementary Fig. 1 is similar to Fig.
1a, except that citations are counted five years post publication instead of two. As shown in this figure, the overall trend persists, with Beijing still ranked at the top. Supplementary Fig. 2 shows how much of the global AI research activity is controlled by the top 20 cities. In terms of impact, these cities continued to attract about 20% of global AI citations annually between 1990 and 2013; after this period, the top cities' share of global AI impact started increasing steadily until it exceeded 30% in 2017, meaning that one in every three citations worldwide goes to a paper produced by these cities. In terms of productivity and workforce, the top cities produce 18% to 20% of papers worldwide and house about 20% to 22% of the global AI workforce. Collectively, these results demonstrate the significant role that the top 20 cities play in the rapid development of AI. Supplementary Fig. 3 considers an alternative outcome measure, proposed by Wu et al. [94]. Intuitively, this measure quantifies the degree to which a paper introduces something new that eclipses attention from the previous work upon which it was built, leading other scholars to cite it without citing its references. This measure ranges between 1 and −1; papers whose score is close to 1 are considered "disruptive," whereas those with a score close to −1 are considered "developmental." Supplementary Fig. 3a depicts the annual ranking of cities based on the number of papers they produce that are among the 10% most disruptive AI papers each year. As can be seen, Beijing has had the highest ranking since 2005. Supplementary Fig. 3b depicts the same plot but for development instead of disruption. Again, Beijing is ranked at the top, and this has been the case since 2003. Finally, Supplementary Fig. 4 compares East to West in terms of publication venues. To this end, for each of the top 20 AI venues [95], we counted the annual number of Eastern and Western papers published therein; see Supplementary Fig. 4. As shown in this figure, there seems to be a divide between East and West, especially in the last few years, with certain venues being predominantly targeted by the East, and others being mostly targeted by the West. Figure 2a shows each of the top 20 cities' impact on other cities during the five years between 2013 and 2017. The rows and columns are ordered such that Eastern cities are grouped together, and Western cities are grouped together. The value and color in each cell correspond to the number of citations from the row city to the column city (colors are binned to improve the visualization; an alternative coloring scheme is provided in Supplementary Fig. 5). It should be noted that impact flows in the opposite direction of citations. For example, a city j impacts another city i when i cites j. Thus, in this heatmap, each cell reflects the impact of the column city on the row city, not the other way around. Figure 2a reveals three salient patterns. First, Eastern cities feature little impact on Western cities (see how the color intensity in the bottom-left quadrant is much lower than that of the remaining quadrants). Among Eastern cities, however, Beijing shows the greatest aggregate impact on Western cities; an aggregate impact comparable to that of Berkeley or Cambridge, U.S. (without counting the impact that each of the latter cities exerts on itself). Second, Beijing cites all other top cities massively (notice the color intensity of the first row, which is higher than that of any other row).
Excluding itself, about 45% of Beijing's citations are to Eastern cities, and 55% are to Western ones, suggesting that the city's research builds on knowledge produced across the globe. Third, each city cites itself heavily (see how the color intensity is exceptionally high along the diagonal). This "home bias" is interesting given the fact that communication technologies should, in principle, render distance ineffectual. Our results on AI are consistent with those that document home bias in other fields [96–99]. Analyzing the top 50 cities instead of the top 20 reaffirms the three patterns just described; see Supplementary Fig. 6. The impact network corresponding to Fig. 2a compares the citations received by different cities in absolute terms, without accounting for differences in productivity. For instance, when comparing Beijing to Mountain View in terms of their impact on Hong Kong, Fig. 2a does not reveal that Beijing produces ten times as many papers as Mountain View. Motivated by this observation, we develop a measure of impact that accounts for the differences in productivity between cities. To this end, we use a simple baseline model in which citations are random. Under this model, the probability of citing a given city is proportional to its productivity. If n denotes the number of papers produced globally, \(n_j\) denotes the number of papers produced by city j, and \(m_i\) denotes the number of papers cited by city i, then the expected impact of j on i under our baseline model is \((n_j/n)\times m_i\). This theoretical benchmark is not meant to account for all the variables that determine a scientist's decision to cite a given paper; its goal is to provide us with a metric suitable to assess the citation preferences of different cities. Take Beijing, for example, which produced about 5% of AI papers worldwide in 2017. For any given city i, if more than 5% of the citations made by i are to Beijing, it suggests that i has a preference for Beijing, in which case we say that Beijing over-impacts i. In contrast, if fewer than 5% of i's citations are to Beijing, we say that Beijing under-impacts i. More generally, deviations from this benchmark serve as a coarse yet intuitive measure of cities' preferences. For more details on how the expected impact is calculated, see Supplementary Note 1. Figure 2b highlights the cities that over-impact (green) and under-impact (pink) other cities. For any pair of row city i and column city j, the corresponding cell represents the difference between the number of times i cited j and the number of times i would be expected to cite j if citations were determined based on the baseline model. When interpreting this heatmap, recall that impact and citations flow in opposite directions, implying that a pink color indicates that the column city under-impacts the row city. The outcome of this analysis reinforces two of the aforementioned patterns observed in Fig. 2a: Eastern cities under-impact Western ones (see how the bottom-left quadrant has mostly negative values), and all cities over-impact themselves (notice the dark green color along the diagonal). Figure 2b reveals additional patterns: Beijing under-impacts all other top cities (except Wuhan and Nanjing; see the leftmost column), and several top cities under-impact Beijing (most notably Seoul, Shanghai, and Paris; see the upper row). Moreover, many Eastern cities under-impact other Eastern cities; 32 out of the 64 cells in the top-left quadrant contain negative values.
The two cities that stand out in this quadrant are Hong Kong and Singapore, which over-impact all top Eastern cities. Globally, only two cities over-impact all other top 20 cities: Mountain View and Redmond, both on the West Coast of the U.S. Finally, the figure reveals the dependency of Western cities on other Western cities; the bottom-right quadrant features mostly positive values. Similar patterns emerge when considering the top 50 cities; see Supplementary Fig. 7.
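To make the baseline model concrete, the following is a minimal sketch of the expected-impact benchmark and the over-/under-impact deviations described above. The citation matrix and paper counts are invented for illustration; the study itself computes these quantities from the full MAG-derived top-20-city data.

import numpy as np

# C[i, j] = citations from row city i to column city j (toy numbers).
C = np.array([
    [120.0, 40.0, 15.0],
    [30.0, 200.0, 25.0],
    [10.0, 20.0, 90.0],
])

papers = np.array([5000.0, 2000.0, 800.0])  # n_j: papers produced by city j
n = 50000.0                                  # n: papers produced globally

m = C.sum(axis=1)                    # m_i: total citations made by city i
expected = np.outer(m, papers / n)   # E[i, j] = (n_j / n) * m_i

deviation = C - expected  # > 0: column city over-impacts the row city
                          # < 0: column city under-impacts the row city
print(np.round(deviation, 1))

Positive entries in the deviation matrix correspond to the green (over-impact) cells of Fig. 2b and negative entries to the pink (under-impact) cells.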
2022-12-01T00:00:00
https://www.nature.com/articles/s41598-022-25714-0
[ { "date": "2022/12/01", "position": 63, "query": "AI economic disruption" } ]
China's Brute Force Economics: Waking Up from the ...
China’s Brute Force Economics: Waking Up from the Dream of a Level Playing Field
https://tnsr.org
[ "Liza Tobin", "Erica D. Lonergan", "Jack Snyder", "Richard W. Maass", "Jonathan D. Caverley" ]
by L Tobin · Cited by 7 — Change will entail some pain and disruption for the United States and its ... Artificial intelligence (AI), particularly machine learning (ML), has ...
In 2017, China’s chief justice, Zhou Qiang, told legal officials in Beijing to resist “erroneous” ideas from the West like “constitutional democracy,” “separation of powers,” and “independence of the judiciary.” His statements shocked some Western observers who had watched with cautious optimism as Zhou, a well-educated jurist with a reputation as a reformer, spearheaded efforts to make China’s courts more professional.1 Behind Zhou’s words was a hard truth: Reforms could only go so far before they collided with the reality that, in the People’s Republic of China, the judiciary is subordinate to the Chinese Communist Party. This dynamic matters beyond China’s borders. Cooperative trading relations require a common set of rules or expectations that ensure that economic competition occurs on a level playing field. Beijing’s rejection of the rule of law as a fundamental operating principle means that the normative commercial structures upon which modern trade depends are at the mercy of a powerful and ideologically motivated political party. The Chinese Communist Party’s ruthless pursuit of techno-economic dominance in a range of strategic sectors has distorted activities that are usually thought of as positive sum — trade and technology cooperation — into zero-sum games. The time has come for the United States and its allies to abandon the notion that competing on a level playing field with China’s state-led economy is possible and confront the reality of what I am calling the country’s brute force economics. I use this term as an analytic frame to summarize the aggressive, evolving, and often opaque web of policies and tactics that Beijing employs to give its national champions — corporations acting to advance government policy — an advantage and seize a dominant global market share in strategic sectors. The litany of specific practices is long: market access restrictions in strategic sectors, massive subsidies that fuel domestic overcapacity and enable Chinese firms to wipe out foreign competition, requirements for foreign firms to transfer technology in order to access the Chinese market, economic coercion, intellectual property theft, cyber- and human-enabled espionage, and forced labor. China’s brute force economics playbook puts competing firms out of business and destroys entire industries in rival nations. Once international competitors to Chinese national champions are either acquired or eliminated, trade partners have no choice but to rely on Chinese firms for critical technology products or inputs. Foreign firms are sometimes complicit in Beijing’s economic distortions, owing to profit motives, ignorance, or fear of retribution if they speak out. Crucially, China’s tactics are not merely an assortment of cutthroat moves made by individual actors. Rather, they are features of Beijing’s long-term strategy, as I will argue below. As such, they are backed up by the full force of the country’s party-state system, creating a challenge that Washington cannot afford to ignore. Huawei is a prominent example of Beijing’s no-holds-barred approach to boosting its national champions.
State financial support worth an estimated $75 billion, market access restrictions to limit foreign competition in China’s domestic market, and decades of intellectual property theft and racketeering enabled Huawei to undercut rivals to become the world’s leading telecom equipment vendor and suppress innovation in the global telecom equipment industry.2 After Washington fought back with export controls and a diplomatic campaign to sound the alarm regarding digital dependence on an autocratic rival,3 threats of retaliation by China’s “wolf warrior” diplomats and state media against countries considering banning Huawei from their fifth-generation (5G) networks erased any uncertainty about whether resisting the company’s advances meant entering Beijing’s crosshairs.4 China’s brute force economics has set off alarm bells in Washington and allied capitals. Of particular concern is the country’s emergence as a dominant player in a growing number of strategic industries such as steel, aluminum, solar panels, wind turbines, electric vehicle batteries, high-speed rail, commercial drones, telecoms network equipment, and even energetic materials that power the U.S. military’s missiles and rockets.5 Policymakers have started to take action. The Trump administration compiled extensive evidence of China’s market-distorting practices,6 threatened tariffs to bring Chinese negotiators to the table, and, once negotiations failed to produce meaningful structural reforms, imposed tariffs. So far, the Biden administration has kept these tariffs in place.7 Both administrations increased coordination with allies and partners, with Trump’s team launching a trilateral trade ministerial with the European Union and Japan that was renewed by the Biden administration.8 President Joe Biden also set up new bilateral consultations on trade and technology issues with Brussels and has taken other actions, such as issuing executive orders to improve supply chain resilience and bolster the criteria for screening inbound investments for national security risks.9 Despite this growing momentum, Washington and its allies are still struggling to fully conceptualize the challenge of brute force economics, much less mount a sufficiently robust counteroffensive. As a director responsible for coordinating China policy at the National Security Council during the Trump and Biden administrations, I experienced this firsthand. One conceptual hurdle resulted from the difficulty of distinguishing between “fair” and “foul” in explaining China’s achievements. After all, Beijing’s success in capturing global market share has partially resulted from “textbook” economic factors: the unleashing of entrepreneurialism and trade after decades of near-autarky, China’s robust investments in manufacturing infrastructure, and the country’s natural advantages of size and scale. But these factors do not tell the whole story. For my colleagues and me in the executive branch, textbook economic concepts like “market failure,” “macroeconomic imbalances,” and “trade distortions” were thin soup. Scholars have rightly described China’s policies as “mercantilist” or “neo-mercantilist.”10 But even these terms are insufficient to diagnose Beijing’s predatory mix of mercantilism, aggression, and criminality — operating at a scale that only China can muster.
Key Features of Brute Force Economics

The “brute” in brute force economics refers to the force, ruthlessness, and scale that characterize China’s economic strategy and make it an unprecedented challenge for Washington and its allies. While policy discourse, especially among U.S. economic officials, tends to focus on particular facets of the problem, such as market access restrictions, technology theft, or unfair subsidies, the reality is that these components overlap and are integrally linked in a comprehensive strategy that is more powerful and pernicious than the sum of its parts.

Force

As an economically powerful and technologically advanced autocracy, the Chinese Communist Party can drive the full force of the party-state-military-intelligence system and the country’s massive national resources to serve its objectives. A foreign firm competing for global market share with a Chinese national champion is not competing with a typical commercial company but rather with what China scholar Jude Blanchette calls “CCP Inc.,”11 as the Huawei example above demonstrates. Such blurred lines offer both disadvantages and advantages for individual Chinese firms. On the negative side of the ledger, when push comes to shove, even ostensibly private firms have no choice but to throw their weight behind the party’s goals. Chinese law compels all individuals and companies in China to assist in national security and intelligence work, if required.12 On the other hand, national champions enjoy tangible advantages over their foreign competitors: China’s spending on industrial policy — estimated at $248 billion in a single year — far exceeds comparable spending by the United States and other advanced economies.13 China’s subsidy and export credit practices violate its World Trade Organization (WTO) commitments,14 but the WTO and its members have been powerless to stop them. National champions bidding for overseas business also have benefits that are harder to quantify, like state security agents stealing trade secrets and wolf warrior diplomats wielding incentives, inducements, and threats on their behalf.15

Ruthlessness

Beijing’s rhetorical commitment to “common development” and “win-win cooperation”16 frames its economic relations with other nations in positive-sum terms, but its behavior belies a zero-sum approach that shows little regard for the costs imposed on other nations. These are not mere marginal costs of doing business that can be easily written off. A Boston-based cyber security firm estimated in May that a Chinese state-backed cyber hacker stole trillions of dollars’ worth of sensitive and proprietary data on everything from diabetes drugs to missiles from approximately 30 multinational corporations in North America, Europe, and Asia.17 An independent commission estimated that China’s economic espionage and intellectual property theft cost the U.S. economy up to $600 billion annually, discouraging capital investments required for innovation and undermining U.S. firms’ overseas competitiveness.18 Even when justice is served, it often comes too late to protect American firms. In one example, Chinese manufacturer Sinovel stole U.S. firm AMSC’s proprietary wind turbine technology instead of paying the agreed-upon price. In 2018, a U.S. federal jury found Sinovel guilty on all charges, including conspiracy and trade secret theft dating back to 2011.
But by the time the verdict was rendered, Sinovel’s crimes had already caused AMSC to lose half its workforce (almost 700 jobs), $1 billion in shareholder equity, and $550 million in trade secrets, according to court documents.19 Meanwhile, China’s wind turbine industry prospered, with exports growing from $2.9 billion in 2017 to $7.2 billion in 2021.20

Scale

With a globally integrated, state-directed economy second only in size to that of the United States, China can practice brute force economics on a scale that no other nation can. The United States has often absorbed the unfair practices of its trading partners. However, China’s sheer size makes it an outlier, both in terms of direct harm and the opportunity costs it imposes by restricting access to 1.4 billion consumers for products and services that the United States excels in producing, like data services and social media. Sectors that Beijing partially or completely blocks to foreign competition include telecoms, fisheries, media, smartphones, agricultural machinery, cargo ships, electric vehicles, digital payments, solar panels, and — ironically — wind turbines, among others.21 It would be difficult to estimate the total cost to U.S. firms from lost revenues owing to China’s market access restrictions, but according to some estimates these restrictions cost individual U.S. firms billions of dollars.22 The reverse is not true: The Chinese app TikTok is projected to earn nearly $6 billion in U.S. ad revenues in 2022,23 while Meta and other U.S. tech platforms are blocked from China’s market. Market access restrictions mean that China’s accessible market — which is what matters for foreign businesses — is smaller than what is suggested by China’s massive economy and population.

Brute Force Economics and the Party’s Technology Strategy

Chinese leader Xi Jinping describes the world as experiencing “change on a scale unseen in a century” and an epochal power shift from west to east that offers China the opportunity to emerge as the world’s leading power.24 Critically, the Chinese Communist Party believes that success will depend on its ability to seize the advantage in disruptive and emerging technologies. In 2017, the party set a goal for China to become a “global leader in innovation by 2035.”25 This is not about technological achievement only for its own sake or for economic gain. As scholar Rush Doshi, who in 2021 joined the Biden administration as a National Security Council China director, explained in 2020, “China’s leaders have often seen technology and economic exchange through a political lens, particularly as a way to avoid dependency, strengthen China’s ‘comprehensive power,’ and build order.”26 As Xi said in 2021, “Technological innovation has become one of the main fields of international strategic competition, and the contest in key sci-tech areas is fierce.”27 At the 20th Party Congress in October 2022, where Xi secured a third five-year term as general secretary, he doubled down on these goals, emphasizing the need for China to achieve “self-reliance” in science and technology.28 Brute force economics is an expression of the Chinese Communist Party’s character as a political system.
Within its own ranks, the party insists that its Marxist political system has been critical to China’s success thus far and will continue to be essential to achieving its objective of making China the world’s leading power.29 Its system, according to Beijing, enables the country to carry out long-term planning and marshal large-scale resources to pursue state objectives in technological, military, economic, and other domains, sustaining a collective effort in a way that democracies often cannot.30 For Beijing, China’s economic rise and technological achievements are evidence that its system and strategy are correct — and justification to pursue its strategy by all means necessary. In other words, the political ends justify the economic means. Brute force economics is embedded in China’s political institutions and is likely to survive even if Xi exits the scene and his successors try to soften the edges. In its quest for comprehensive national power, Beijing has pursued what it calls “indigenous innovation” for more than a decade and a half.31 This strategy aims to boost China’s technological self-sufficiency by replacing imports of advanced technology with domestic production, to climb the global manufacturing value chain, and to capture greater international market share. Beijing’s Made in China 2025 plan, issued in 2015, gained notoriety in Washington owing to its explicit targets for market share across a wide range of critical technologies.32 (Recognizing the strategic and economic challenge for what it was, the Trump administration targeted these technology imports with its first tranche of tariffs.) These Chinese policies have much earlier roots, however. A major policy document issued in 2006 — the Medium- and Long-Term Plan for Science and Technology Development — set guidelines for the next 15 years and prioritized advances in integrated circuits, industrial biotechnology, information technology, including broadband mobile telecoms, and many other sectors. It called for China to “leapfrog in priority fields” and “lead the future.”33 These themes continued in Beijing’s 2010 decision to support seven “strategic emerging industries.”34 More recently, China’s 14th five-year plan for 2021-2025 intensified the drive for self-sufficiency with a policy dubbed “dual circulation”35 — essentially a one-way decoupling strategy that seeks to increase the world’s dependence on China while reducing China’s dependence on the world for critical technologies.36 China’s push for self-sufficiency and decoupling is not a reaction to recent tensions between the United States and China but has been underway for many years.37 Has Beijing’s approach been effective? Setting ambitious goals does not guarantee success, and massive outlays on industrial policies have led to waste. Measuring China’s technological progress is challenging since rigorous assessments are constrained in an opaque and autocratic system and the long-term effects of wasteful industrial policies take time to become clear. Caution and humility are warranted, but given China’s success in achieving market dominance in commercial drones, lithium-ion batteries, 5G wireless technology, and other sectors,38 the United States and its allies should not be complacent.
The implications for the United States and its democratic allies go beyond the techno-economic. All of Beijing’s plans are dual-use, aiming to maximize the efficient use of finite national resources to drive economic and military modernization simultaneously. Beijing’s national military-civil fusion strategy that emerged in the 2010s seeks to ensure that breakthroughs in civilian research and commercial sectors advance both military and economic objectives.39 Business leaders, technologists, and scholars involved in cutting-edge sectors should be on notice that their industries are in Beijing’s sights. Creating dependence in critical sectors provides Beijing with strategic leverage. As the Ukraine crisis has painfully highlighted for European countries, depending on an adversary for critical inputs like energy can suddenly prove dangerous and costly during a crisis. China, with an economy 10 times the size of Russia’s and an outsized role in global supply chains,40 has a greater capacity than Russia to weaponize its economic clout and a growing track record of doing so.41 America’s reliance on China for products and materials that are critical to the defense industrial base — such as rare earth elements required for military hardware like the F-35 and night vision devices — would leave the country vulnerable in a bilateral dispute or crisis.42 In fact, China has threatened to cut off America’s rare earth supply several times, including as recently as 2021.43

Innovation Economics in a Zero-Sum World

Evidence is mounting that Beijing’s use of brute force economics is undermining innovation in other countries, putting their long-term prosperity at risk. Robert D. Atkinson has shown that, by shrinking markets and reducing profits that innovators need to invest, China’s mercantilist practices have slowed the progress of innovation in the United States and other developed nations.44 Innovation is the single most important long-term driver of economic growth for advanced economies like the United States.45 Thus, allowing China’s brute force economics to continue unchecked may hinder the U.S. economy’s capacity to generate prosperity for Americans. The distortionary effects of China’s brute force economics can be understood through the lens of what economists call absolute versus comparative advantage. The theory of comparative advantage posits that trade allows countries to specialize in items they can produce at a lower opportunity cost compared to other nations. When each country specializes in its areas of comparative advantage, trade produces overall welfare gains in the form of increased total output and product variety.46 The theory suggests that, with a few exceptions, trade barriers should be reduced or removed. At the turn of the millennium, the logic of comparative advantage animated U.S. policymakers seeking to grant China permanent normal trading status and help it gain membership in the WTO. They believed this would unleash unprecedented new economic opportunities for Americans as China removed trade barriers to state-backed industries. In 2000, President Bill Clinton predicted, “For the first time, our companies will be able to sell and distribute products in China made by workers here in America without being forced to relocate manufacturing to China, sell through the Chinese government, or transfer valuable technology … . We’ll be able to export products without exporting jobs.”47
The opposite occurred. One study estimates that between 2001, when China joined the WTO, and 2018, the growing trade deficit with China cost the United States 3.7 million manufacturing jobs. The computer and electronic parts industry was hit particularly hard: Three congressional districts in the industry’s epicenter, Silicon Valley, lost between 12 and 20 percent of total jobs.48 As the jobs depart, the skills are lost as well. After decades of moving manufacturing jobs offshore, it is little wonder that many U.S. companies are facing large shortages in workers with technical skills.49 Scholars have also documented that when China targeted and subsidized a sector in its five-year plans, the targeting was correlated with fewer new firms and lower output, employment, and earnings in corresponding sectors in the United States.50 America’s loss has been China’s gain. As Jonathan Pelson explains, “this model of wealth creation [comparative advantage] only applies when you’re dealing with trading counterparts, where the relationships are long-term and the tone is primarily cooperative.”51 The theory of comparative advantage is not wrong. It is simply that Beijing is operating on different principles, pursuing absolute advantage for China rather than comparative advantage and greater total welfare for the world. The laws of economics suggest that China’s excesses must eventually catch up with it, but it is impossible to predict when this will happen and how it will affect Beijing’s approach to the outside world. For now, because of China’s large size, its brute force economics is changing the international landscape, making competition on a level playing field impossible.

Brute Force Economics in Practice

A look at a few case studies, in addition to the telecoms and wind turbine examples above, illuminates how China’s brute force economics works in practice to increase domestic production, knock out foreign competition, and seize international market share.

Solar Panels

According to Rick Switzer and David Feith, “Solar technology was invented and first commercialized in the U.S., only to be targeted later by China’s state planners.”52 Solar technology has been a priority for Beijing since the publication of the Medium- and Long-Term Plan for Science and Technology Development in 2006.53 Access to foreign technology, generous state support, and preferential policies for domestic firms combined to produce striking results: Since 2005, China’s share of global production of solar photovoltaic cells rose from seven to more than 80 percent, more than double its domestic demand.54 To achieve this, China dramatically scaled up production, lowered prices, and dumped solar panels on the international market. American and European solar companies shed jobs, bled market share, and eventually had no choice but to shut down.55 The harm went beyond lost industries and jobs. David M.
Hart has shown that China’s below-market prices reduced incentives for foreign innovators to invest in alternative pathways to innovation in solar power, likely resulting in less diversity in solar technology today.56 To make matters worse, much of the world’s polysilicon, a key component used to produce solar panels, comes from Xinjiang in northwest China, where the government is conducting a systematic campaign of mass repression and genocide targeting ethnic and religious minorities that includes the use of forced labor. In 2021, the United States banned imports from a Chinese producer of silica-based products used in solar panels that is suspected of using forced labor, and later that year it passed a law prohibiting imports from Xinjiang unless the importer could provide evidence that the goods were not produced with forced labor.57 China’s dominance in the solar supply chain has forced upon the rest of the world a Solomonic choice between the rapid expansion of solar power, on the one hand, and avoiding complicity in Beijing’s systematic human rights abuses, on the other.

Shipbuilding

Shipbuilding, also a priority in China’s 2006 plan for science and technology development, has followed a similar playbook. Blanchette assessed that state support between 2010 and 2018 worth approximately $132 billion, along with barriers to foreign competition, forced technology transfer, and state-sponsored hacking and commercial espionage, facilitated China’s growing dominance in the maritime supply chain. Other countries have provided support to their domestic shipbuilding industries, but, as Blanchette notes, “the scale of China’s support is unmatched.” As a result, China went from roughly 10 percent of global shipbuilding market share in 2002 to around 50 percent in 2021.58 The stakes for the United States are strategic as well as commercial. As Blanchette notes, in a military contingency the United States could face maritime logistics challenges given the shrinking size of its merchant marine fleet.59

Border Security Technology

Chinese state-owned enterprise Nuctech, founded by former Chinese General Secretary Hu Jintao’s son, has enjoyed generous state support, a protected home market, and ties to China’s military and the highest levels of Chinese Communist Party power to become the world’s leading company by revenue for cargo and vehicle scanners used at ports and airports around the world. By underbidding foreign rivals by 30 to 50 percent, Nuctech is seizing a growing global market share. In the European Union, Nuctech has won contracts in 26 out of 27 member states, including in sensitive locations such as along NATO’s borders with Russia and in Europe’s largest ports.60 Only a 2014 ban on Nuctech equipment in U.S. airports spared the United States from a similar fate.61 The ban proved prescient in 2020, when Nuctech was added to the Department of Commerce’s Entity List, which imposes export licensing requirements, because Nuctech’s equipment performed below par when it came to detecting nuclear and other radioactive materials.62 The growing presence of Nuctech equipment at border locations overseas undermines efforts by the United States and its foreign partners to stop the proliferation of materials that could be used in weapons of mass destruction.
Electric Vehicle Batteries

Beijing’s ambitions to build a world-class automobile sector date back to its 863 Program, a high-tech development plan launched in 1986.63 In the medium- to long-term plan released two decades later, this ambition was refined to focus on energy-efficient and new energy automobiles and was reemphasized in Made in China 2025. In 2016, the Chinese government announced that a “third industrial revolution” focusing on digitization and “new energy” would allow China to take the lead in automobiles. Lavish government subsidies, a captive home market, and requirements that foreign automakers transfer critical technology to a local firm in order to sell cars in China have allowed national champion CATL to capture one-third of the global electric-car battery market.64 Looking ahead, Beijing’s brute force economics will not stop at solar panels, shipbuilding, border security equipment, and batteries. China’s 14th five-year plan makes clear that industries at the cutting edge of the American economy, like artificial intelligence, semiconductors, biotechnology, aerospace, blockchain, and cloud computing, are all in China’s line of fire.65 In the semiconductor industry, for instance, Beijing’s playbook is on full display. It is leveraging massive amounts of state support, targeted intellectual property theft to aid national champions, knowledge transfer from technical experts trained in the United States and allied countries, and preferential treatment for domestic firms to tilt the playing field in its favor.66 The sheer complexity of the semiconductor industry has held back Beijing’s progress in some areas, but over time it is mastering more complex processes. Projections suggest that China could become the world leader in chip manufacturing by volume within 10 years.67

Selective Disentanglement: Countering Beijing’s Brute Force Economics

Traditional channels for addressing China’s brute force economics have been tried repeatedly and found insufficient. Over the last two decades, Washington has made numerous good-faith efforts to deal directly with Beijing through bilateral dialogues, trying to persuade China’s leaders to operate on a level playing field and comply with its WTO commitments.68 The United States has also frequently worked through multilateral channels, bringing 27 cases against China in the WTO, many of them in conjunction with other member states. But despite winning every case that was decided, China’s pattern of behavior continued and in many cases worsened.69 Given the failure of years of engagement to persuade Beijing to change course, Washington should frame its approach differently than in the past. It should adjust to the fact that, in China, it does not have a sincere, cooperative partner. The United States should focus primarily on shaping its own choices, then on shaping those of its allies and partners, and lastly (and with low expectations for success), on shaping Beijing’s.
As Secretary of State Antony Blinken acknowledged in a speech in May 2022, “We cannot rely on Beijing to change its trajectory.” Instead, America will seek to “shape the strategic environment around Beijing.”70 Washington and its allies should pursue a strategy of selective disentanglement from China that would deny support to Beijing’s techno-economic ambitions while strengthening ties with each other. Selective disentanglement should be thought of as a progressive shift, not an abrupt and total abandonment of economic and trade ties with China.71 It should focus on areas where economic and technological entanglement poses the greatest risks for national security and competitiveness, such as critical infrastructure and technology. Change will entail some pain and disruption for the United States and its allies, but it will also result in creative destruction that clears a path for new growth opportunities. A strategy of selective disentanglement with China should combine offensive, defensive, and collective elements. It should have three prongs: doubling down on asymmetric American advantages, denying support to China’s accumulation of techno-economic power, and moving away from overreliance on China and toward increased interdependence with friends.72

Double Down on Asymmetric American Advantages

Prevailing against China’s brute force economics requires the United States to get its own house in order, that is, to strengthen its domestic competitiveness. But this need not mean trying to “out-China China.” America’s economy and innovation ecosystem have numerous advantages that China is hard pressed to match, including world-class universities and research institutions, the ability to attract talent from around the world, a highly productive and growing workforce, and global leadership in finance backed up by trusted institutions that foster conditions in which innovation thrives. The United States should double down on these systemic strengths and ensure that policies that support them are up to date for an age of emerging technology and strategic rivalry. Vital steps for strengthening America’s innovation ecosystem include boosting support for research and development, maximizing the digital economy’s potential, and investing in a tech-savvy workforce.

Research and Development

Increased spending on research and development can not only spur innovation but can also serve as a powerful mechanism to boost productivity growth and GDP over the long term.73 The CHIPS and Science Act that Biden signed into law in August 2022 is a step in the right direction, providing a $52 billion infusion to boost semiconductor production, including $11 billion for research and development.74 Policymakers should make additional investments in research and development in battleground technologies like smart manufacturing, biotechnology, next-generation networks, and computing technology. They should also incentivize public-private partnerships that harness the dynamism of America’s private sector to pursue scientific and technological goals of strategic significance.75

Digital Infrastructure

China used brute force economics to stake out Huawei’s dominant position in global 5G network infrastructure. But the United States still has an opportunity to race ahead by capturing the economic promise of 5G to unlock America’s potential as an advanced manufacturing powerhouse.
The United States is already investing billions of dollars deploying Huawei-free telecom network infrastructure across the country,76 but it should also look ahead to the applications that will run on advanced public and private networks, like smart factories that could make it more appealing for firms to bring production back to American shores. The U.S. government should boost incentives for innovators to create and diffuse these applications.77

Workforce

America’s productive and still-growing workforce, and its ability to attract global talent, are among its greatest strengths, particularly in contrast to China, where demographic decline is an economic drag.78 In the United States, however, shortages in skilled workers in some technical fields are a limiting factor in the nation’s ability to outcompete China. The United States should strengthen education and workforce training programs and speed up immigration processes to ensure it has a workforce that can compete and thrive in cutting-edge technology sectors.79

Deny Support to China’s Techno-Economic Ambitions

Getting its own house in order will not be sufficient. The United States should also target China’s brute force tactics. To do so, America, working with allies and partners whenever possible, should stop supporting China’s buildup of techno-economic power at the expense of its own security, values, and prosperity. The United States should not provide material support to China when doing so will 1) accelerate China’s military modernization, 2) enable Beijing’s techno-authoritarian abuses of human rights, or 3) weaken the long-term competitiveness of U.S. and allied high-tech industries. There is already growing policy consensus on the first two criteria. When it comes to military modernization, for example, ensuring that China does not use U.S. technology and know-how to develop military technology like hypersonic weapons, deep-earth penetrating warheads, and quieter submarines — as it has done in the past80 — is hardly a controversial policy objective among U.S. national security professionals, even though more concrete policies and robust implementation are urgently needed. On the human rights front, some U.S. firms stopped providing technology to China when it was revealed that it was being used for state surveillance and human rights abuses in Xinjiang.81 More robust export controls,82 as well as new laws, policies, due diligence standards, and ethical guidelines are needed, but at least the principles are gaining wider acceptance. Much stronger consensus is needed on the third criterion — the need to stop supporting China’s ability to out-compete the United States. National Security Adviser Jake Sullivan referred to this idea in a speech in September 2022, stating: “We have to revisit the longstanding premise of maintaining ‘relative’ advantages over competitors in certain key technologies. We previously maintained a ‘sliding scale’ approach that said we need to stay only a couple generations ahead. This is not the strategic environment we are in today. … We must maintain as large of a lead as possible.”83 Sullivan’s words were backed up by an executive order published the same week strengthening the Committee on Foreign Investment in the United States.
The order broadened the scope for investment review beyond traditional defense issues to include criteria such as protecting Americans’ sensitive data and enhancing U.S. supply chain resilience. Crucially, the executive order also directed the committee to consider whether a transaction affects U.S. “technological leadership and therefore national security” and listed microelectronics, artificial intelligence, biotechnology, and several other sectors as examples.84 Several weeks later, in its strongest policy actions on China to date, the Biden administration announced new controls on the sales of advanced semiconductors and semiconductor manufacturing equipment to China and restrictions on U.S. persons providing support to the sector, among other measures that, if implemented robustly, will slow China’s progress in semiconductors, high-performance computing, and artificial intelligence.85 Generating a greater understanding of these policy changes among executive branch departments and agencies, Congress, industry, the American public, and U.S. allies and partners is important in order to build support for vigorous enforcement. It will also help build momentum for further policy action to prevent China from surpassing the United States in additional fields such as quantum computing and biotechnology.86 Closer monitoring of, and enhanced restrictions on, China’s access to U.S. markets, technology, and expertise in cutting-edge technology sectors will be required. When it comes to the Committee on Foreign Investment in the United States, this means keeping the process up to date by regularly monitoring China’s strategic documents as they are issued, since they provide insight into what sectors Beijing is targeting. Beijing’s target list is long, posing a prioritization challenge for Washington, but biotechnology, an area in which the United States currently sits at the technological frontier,87 stands out. A strong uptick in Chinese merger and acquisition activity in the U.S. biotech sector in the last few years reveals an aggressive effort by China to acquire American intellectual property in a race to get ahead.88 Furthermore, China is using foreign-acquired technology to conduct mass DNA collection in violation of Chinese law and international human rights norms.89 In 2013, despite security concerns raised in a Committee on Foreign Investment in the United States investigation, Chinese genomics company BGI acquired California-based Complete Genomics, which had developed the fastest and most cost-effective gene mapping technology in the world.90 BGI has subsequently risen to become the world’s largest genetic research organization. According to the Department of Defense, BGI contributes to China’s military-civil fusion strategy,91 and, in 2020, two of its subsidiaries were added to a U.S. export blacklist for collaborating with Beijing in committing human rights abuses against ethnic minorities in Xinjiang.92 Chinese acquisitions of U.S. biotech companies should be afforded an extra level of scrutiny, and in many cases restricted,93 to ensure that the United States is not selling its crown jewels to China and enabling its systemic repression of human rights, as it has done in the past.94 The United States should also create a mechanism to screen outbound investment to prevent U.S. investors from contributing, wittingly or unwittingly, to China’s advance in strategic technologies at America’s expense. Oftentimes, it is the know-how that accompanies U.S.
investments, rather than the money itself, that facilitates China’s advance. A new mechanism should include the authority to review and, when required, restrict transfers of expertise, technology, and capital. For example, U.S. firms participated in 58 investment deals with China’s semiconductor industry from 2017 through 2020,95 contributing to China’s progress in a sector where it is imperative that the United States remains ahead. Lastly, the United States should find ways to restrict Chinese access to U.S. markets in critical industries where China is racing to catch up to and surpass the United States. America has done this in the past with positive results. In 2011, for example, Congress blocked Chinese access to America’s space industry owing to concerns that U.S. technology would be used to advance China’s military capabilities.96 More than a decade on, American firms lead in commercial space launches, unhindered in the home market by China’s brute force economics.97 To take another example, America’s 2014 ban on Nuctech for security reasons means that, today, the United States — unlike countries in the European Union98 — is not experiencing a flood of unreliable border security and scanning equipment from a Chinese national champion at its ports and airports. The United States should explore reforming, and then using more aggressively, the International Trade Commission’s Section 337 process to block the import of Chinese products that have benefitted from unfair trade practices like intellectual property theft.99 When an outright ban on market access for Chinese companies is not feasible, the United States should develop laws, policies, regulations, and guidelines that set strict standards for transparency and accountability for the import, purchase, or use of sensitive technology products such as surveillance equipment, hardware and software for data systems, dual-use items, and components for critical infrastructure. Country-of-origin requirements, for example, could place restrictions on products and services from firms in countries of concern including China, preventing such products from being used in sensitive areas such as critical infrastructure.100 The principle of rebuttable presumption, a policy innovation of the Uyghur Forced Labor Prevention Act, should also inspire additional laws and policies to increase transparency and accountability for U.S. persons wishing to pursue transactions involving technology transfer to, or cooperation with, China. For example, export controls and licensing policies should be updated to address the risk, posed by China’s military-civil fusion strategy, that technology or expertise transferred to China could benefit the People’s Liberation Army. If the U.S. person wishing to conduct the transaction can provide satisfactory evidence that the transaction would not be detrimental to national security and competitiveness, the transaction could be permitted to proceed.101 The onus would thus be shifted to industry to take proactive steps to avoid harmful transactions.

Diversifying to Build Collective Resilience and Self-Defense

As the United States and its allies come to grips with the reality that China has no intention of undertaking structural reforms that could make a level playing field a reality, they must adopt the more realistic and sustainable objective of collective resilience and self-defense. This entails using their combined economic leverage to blunt the harmful effects of Beijing’s techno-economic strategy.
Some U.S. allies will be concerned by the threat of China’s economic coercion, but the recent experiences of Sweden, Australia, and Lithuania demonstrate that countries can successfully stand up to coercion.102 To better resist coercion and reduce the risks associated with overdependence on China, the United States and other market-oriented democracies should strengthen trade, investment, financial, and technology ties among themselves while partially disentangling from the Chinese economy. This could evolve into the “partial liberal system” that Aaron Friedberg has described in these pages: a group made up primarily, though not exclusively, of democracies that would resemble the economic arrangement the United States built during the Cold War.103 The United States should start by assembling small groups of countries with shared concerns and form trade blocs that withhold market access from Chinese companies in specific industries unless they meet stringent conditions specified by the bloc. Membership in these small groups could be expanded over time. Given growing skepticism among Americans about the economic benefits of trade,104 moving in this direction will not be easy. New trade arrangements will require careful consultation with the U.S. public and lawmakers, whose skepticism of new trade deals has prevented U.S. membership in blocs proposed to reduce dependence on China — the Trans-Pacific Partnership and the Transatlantic Trade and Investment Partnership. Domestically, U.S. policymakers will need to develop compelling arguments about how trade can benefit American workers in the 21st-century economy. Internationally, policymakers can continue to promote the goal of a global, rules-based order as a long-term ideal, while taking an incremental approach to building a new partial liberal order. As Hal Brands and Michael Beckley point out, the challenge of collective action makes it hard to build “big clubs” of countries, but even a “patchy collection of mini-lateral agreements would build multilateral resilience against Chinese pressure by reorienting strategic supply chains away from Beijing.”105 Groupings could focus on issues such as greentech, digital trade, and supply chain security, areas in which allies share far more in common with each other than with China. For example, the United States and the European Union could pursue a carbon border adjustment mechanism that would impose high tariffs on carbon-intensive imports while lowering tariffs for countries whose exports meet specified conditions for clean production.
Since China lags well behind the United States and its allies in low-carbon manufacturing, this would effectively exclude China.106 To take another example, the United States should work to expand digital trade among democratic market economies while limiting data flows to China.107 As Agatha Kratz and Janka Oertel point out, “[T]here are significant economies of scale in digital industries.”108 To build resilience in critical supply chains, the United States and its allies and partners should enhance coordination through mechanisms such as the Minerals Security Partnership announced in June 2022 by the State Department to build robust, responsible supply chains for critical minerals.109

Conclusion

It is time for Washington and its allies and partners to acknowledge that ever-deepening techno-economic integration with China is not in their best interest. Twenty years ago, it was understandable that the dazzling promise of making economic gains through trade blinded Western policymakers to Beijing’s long-term objectives. But in 2022, it should be obvious that China is not a cooperative economic and technology partner. It is anachronistic — and even dangerous — to make policy choices contingent on a false hope that it will become one. Imagine you are a small farmer unlucky enough to live next door to a neighbor who, after years of buying your produce, turns into a predator intent on driving you out of business. She steals your tools, seeds, and tractor, then hacks into and steals the database of your distribution networks. With help from the local mafia, she buys out all the small farms and locally owned grocery stores in the region, becoming the area’s sole grocer. Hoping she will play fair is fruitless. At this point, maintaining an “open door policy” would be preposterous. Instead, prudence would dictate installing a more powerful security system and deepening friendships with other neighbors, working together to grow a community garden and organizing to defend yourselves. Over time, practicing resistance, resilience, and interdependence with friends would become a normal response to the predatory neighbor. Liza Tobin is senior director for economy at the Special Competitive Studies Project (SCSP). She served on the National Security Council staff as China director during the Trump and Biden administrations. Prior to her struggle sessions with brute force economics at the National Security Council, she stared down the problem at the United States Indo-Pacific Command and the Central Intelligence Agency. This paper expresses the views and opinions of the author and does not necessarily reflect the views of the SCSP, its board, or the staff as a whole. The author wishes to thank Brady Helwig for invaluable editorial assistance and Erik Britton for originating the term “brute force” to describe the People’s Republic of China’s economic approach.
2022-12-07T00:00:00
2022/12/07
https://tnsr.org/2022/12/chinas-brute-force-economics-waking-up-from-the-dream-of-a-level-playing-field/
[ { "date": "2022/12/01", "position": 68, "query": "AI economic disruption" } ]
Is there a civic duty to support medical AI development by ...
Is there a civic duty to support medical AI development by sharing electronic health records? - BMC Medical Ethics
https://bmcmedethics.biomedcentral.com
[ "Müller", "Center For Life Ethics Heinrich Hertz Chair", "University Of Bonn", "Bonn", "Schaumburg- Lippe-Straße", "Sebastian Müller", "Search Author On", "Author Information", "Corresponding Author", "Correspondence To" ]
by S Müller · 2022 · Cited by 10 — ... economic groups are structurally more willing to consent to health-related ... Regulatory Disruption and Arbitrage in Health-Care Data Protection. Yale ...
Medical AI and the civic duty to share EHRs

Two noteworthy articles by Ploug [26] and Ballantyne [27] identify the most discussed arguments in favour of a civic duty to share EHRs for medical research. The articles focus on three main arguments. The first argument, which I call the ‘rule to rescue argument’, states that there is a universal rule to rescue people in accident-like situations and that people should conform to this rule by sharing EHRs [22, 23, 28]. The rule implies a duty to support medical AI developments that can rescue people in accident-like situations. Second, there is the ‘low risks, high benefits argument’, which says that people have a duty to benefit others as long as the risks entailed by doing so are bearable [23, 24, 25, 29]. If a medical AI development project complies with this risk-benefit ratio, there is a civic duty to share EHRs. The third argument, the ‘property right argument’, emphasises that a great deal of health data is not generated by private citizens but collected and distributed within the health care process. Advocates of this argument believe that since such processes are mostly financed by solidarity-based health care contributions and taxpayers’ money, the results of those processes are a public good to which citizens should not claim exclusive rights [21, 22, 30]. Since medical AI development is an important part of modern medical research, and since none of the advocates of a civic duty to share EHRs excludes medical AI from their discussion, I will assume that the three arguments in favour of a civic duty to support medical research extend to sharing EHRs with medical AI research and development projects.

The ‘rule to rescue argument’

The rule to rescue is a well-known bioethical imperative that imposes a duty to prevent harm from happening to other people [26, 31, 32]. Following that rule, an entity A_i has a duty to rescue a human being B_i in an accident-like situation µ_i if and only if (i) A_i is part of the situation, (ii) A_i is able to provide proper help in µ_i, (iii) there is no other entity A_{n≠i} that is more capable of providing help in µ_i, and (iv) compliance with the rule does not force A_i to sacrifice anything of equally important moral value. A situation µ_i can be characterised as accident-like if there is a high risk of significant loss of or damage to B_i’s basic interests and if those risks can only be prevented by the immediate action(s) of others. What moral obligations does this rule imply in practice? Witnesses of a car crash have a duty to call emergency services, physicians who witness a passenger on an airplane having a heart attack can be expected to render first aid, and a fishing company has a duty to save its employees when the engine of one of its ships breaks down at sea. As Rulli and Millum have discussed, the rule and its application become more complex when collective actors are involved [28]. There are many accident-like situations µ_i in which a victim B_i cannot be rescued by one person but only by a group of people. For example, the person who witnesses an accident is usually considered to have a duty to call an ambulance. Emergency services are usually considered to have a duty to aid the victim and take her or him to the emergency room (ER), the physician on duty is expected to provide medical care, and the institutions that are involved in the rescue process are expected to allocate resources, write laws, and accumulate knowledge in a way that promotes such care.
The social roles of the people involved in such processes determine the scope and force of their duty [33, 34]. The ER physician, for example, has a duty to rescue a patient who is being taken to the ER. The same physician, however, does not have a specific duty to rescue people from a burning house; this task is the professional duty of firefighters. Therefore, the rule to rescue is not shaped by the endangerment of B_i’s basic interests but by the social context in play and the type of entity that A_i is. The rule can call for individual duties such as the duty to call an ambulance, social role duties such as the duty of an ER physician to heal a patient, and institutional duties such as the duty to properly fund emergency services [28, 35]. A number of scholars believe that the rule to rescue implies a personal moral duty to support medical research by sharing EHRs [22, 23, 30, 36]. The argument is as follows: Physicians in ERs, paramedics, and others save people’s lives on a daily basis. Because certain innovative health care practices, innovative medical technologies, and forms of advanced medical knowledge are tools that enable people (A_i) to rescue others (B_i) in accident-like situations (µ_i), people have a duty to support such research and, thus, such technological developments. Typically, this duty is considered to be a subset of the bioethical duty of beneficence. Because certain medical AIs can be used in medical emergencies [37], the development of these AIs needs to be considered under the rule to rescue argument. The argument’s structure can be broken down as follows:

P1: Certain types of medical AI can rescue human beings in accident-like situations.
P2: Citizens have a moral duty to conform to the rule to rescue.
P3: The development and improvement of medical AI that can rescue human beings in accident-like situations requires EHRs.
C: Citizens have a moral duty to support medical AI developments that can rescue human beings in accident-like situations by sharing EHRs.

Why the rule to rescue does not apply in the medical AI context

Examining this argument closely, P1 and P3 appear to conflict with one another. Ploug notes [26] that no victim B_i has ever been rescued by the act of EHR sharing. Instead, a victim is rescued by another person, who might or might not use technology that was developed in the past. This temporal shift between data sharing and the rescue act is highly relevant for the ‘rule to rescue argument’ because there are no urgent situations µ_i in which a passer-by can provide proper help to a victim B_i simply by sharing her EHRs. The sharing component necessarily precedes the entire rescue scenario, which means that the entity A_i that performs the rescue is not the same as the entity C_i that shares her data. I think Ploug’s critique applies to medical AI development as well. Take any accident-like scenario in which a medical AI supports a rescue process. For example, a clinical decision support system may enable a physician to save a patient’s life because it recognises indicators for a rare disease. In this and all other scenarios, the victim is rescued either by an autonomous algorithm that was trained with EHRs before the rescue took place or by another person who uses medical AI as a tool. One might now argue alongside Rulli and Millum [28] that, within the rule to rescue, it is possible to distinguish among different sets of duties.
For example, to protect citizens in a pandemic, politicians and scientific consultants have the professional duty to introduce effective preventive measures. If the chances of selecting the right combination of preventive measures can be increased with the help of a medical AI and its access to EHRs, politicians and consultants may have a professional duty to use those tools. Intertwined with that duty is the institutional duty of supporting everyone employed in the rescue process with sufficient tools and resources. This institutional duty may be composed of many other duties, such as the professional duty of researchers to develop medical AI that can simulate pandemics and the effects of different preventive measures, the federal duty to financially support such research, and the civic duty to grant access to the data that is needed to develop the AI and run the simulations.

Even if this line of argumentation were accepted, there would still be the problem that an institutional duty to rescue those in peril cannot imply a civic duty to share EHRs without further ado. Additional arguments are needed to explain how exactly an institutional duty towards the rule to rescue can have an impact at the level of individual citizens and why it gives rise to a duty to share EHRs rather than other responsibilities. Without further references to normative values such as beneficence and justice, normative trade-off rationales, or theories concerning collective responsibility, citizens might as well live up to the rule to rescue by paying their taxes, donating blood, advocating better research conditions, or conforming to well-known preventive health measures. Because the individual act of data sharing does not cause anybody to be rescued, and because a call for an institutional interpretation of the rule to rescue does not entail an imperative to share EHRs, the ‘rule to rescue argument’ cannot justify a duty to support medical AI developments that will save people in accident-like situations.

The ‘low risks, high benefits argument’

The principle of beneficence is a positive requirement to promote the welfare of others and contribute to the common good [38]. In medical contexts, this principle obliges caregivers and researchers to act in accordance with the interests of their patients and research subjects. In business, it obliges companies to conduct their business in a way that serves social interests. In democratic politics, it obliges citizens to vote and act in ways that increase and foster the common good. The principle of beneficence is often supplemented by concepts of solidarity and justice that advocate (i) a duty to act in ways that benefit the members of a given society, including oneself [39, 40], and (ii) social structures that promote equality [20, 30]. Scholars who support the ‘low risks, high benefits argument’ link the principle of beneficence with the belief that all citizens of modern societies will benefit significantly from innovative health care developments. The overall benefits are or will be so significant, they argue, that the risks associated with EHR sharing are negligible in comparison. Therefore, citizens have a moral duty to share EHRs to support medical research [23,24,25, 29]. To discuss the argument sufficiently, it is necessary to take a closer look at the benefits and risks that the authors present.
Since medical AI is part of medical research and can be used to promote medical and social health benefits [12], I will also consider medical AI-specific risks and benefits. Let’s start with the benefits. Schaefer and colleagues, for example, point to the potential improvement in public health care and personal well-being to justify the moral duty to share EHRs for medical research [29]. Bowten and colleagues add the decrease of health care costs to this list [41], and Knottnerus points to the benefits that large EHR databases provide for the expansion of medical knowledge [21]. Focusing on medical AI development, I believe a further benefit needs to be mentioned. Forsberg and colleagues claim that all citizens will receive substantially worse health care in the future compared to an ideal scenario if some citizens do not support key innovations such as medical AI technology today [24]. This argument is quite strong, as it relates to issues of discrimination and injustice surrounding so-called selection bias. Selection bias can occur when an AI is trained with datasets in which groups of a certain age, social class, ethnicity, biomarker, or health status are underrepresented or unrepresented [42, 43]. An AI that is biased in that respect might not be able to recognise signs of skin cancer on a skin tone with which it is unfamiliar, it might diagnose women less accurately than men if it is primarily trained with male data, and it may not recognise certain cases of dangerous drug interactions if it is denied access to the data of vulnerable groups like Alzheimer’s patients [44]. According to a review study by Kho and colleagues, selection bias is not the result of a few citizens who refuse to share EHRs but of an effect called consent bias, which means that certain socio-economic groups are structurally more willing to consent to health-related research than others [45]. Selection bias can also be driven by groups that generate more useful data than others, such as chronically ill people or quantified-self enthusiasts [46]. Since medical AIs are only as good as training data allow them to be, Cassell and Young call for a duty to foster a balanced representation of all social groups in those data [25]. This proposition means that people who belong to groups that are underrepresented in medical datasets, which in most cases is everyone except adult Caucasian men [47], have a specific civic duty to share their EHRs.

Now what about the minimal or reasonable risks that come along with EHR sharing, and the risks entailed by the development and use of medical AI? Minimal risks are risks that are perceived as normal in everyday encounters such as driving a car or going to the dentist for a routine check-up [48]. The risks associated with EHR sharing and the development and use of medical AI can be of very different natures and can affect individuals, social groups, and institutions differently [49]. Individuals can suffer personal harm due to data breaches caused by hacker attacks, data misuse, or adversarial attacks [22, 50]. Individuals can also be harmed by medical AIs that produce technical errors, give wrong medication advice, or misinterpret input data [51]. Social groups can be discriminated against and treated unjustly by biased AI [52], and institutions such as the health care professions can be severely harmed if people develop trust issues and avoid medical treatment [53].
Unfortunately, it is hard to decide whether those risks are comparable to the risks of other everyday activities and, therefore, qualify as minimal. It is also hard to decide whether those risks are small in comparison to the benefits and, therefore, reasonable. A strategy to solve this problem is to empirically prove that certain risks, such as the personal risk of harm from EHR breaches, are statistically smaller than other everyday risks, such as the personal risk of harm from traffic. Porsdam-Mann and colleagues [23] proceed with this strategy and estimate that the personal risk of becoming a victim of health data-related privacy breaches in the US was approximately 0.02% between 2009 and 2016. In contrast, the risk of being injured in traffic in 2009 was approximately 0.7%. Considering these risks, the authors conclude that researchers should be allowed to access at least low-risk datasets without asking for IC. Based on the risk-benefit analysis presented and the presupposed principle of beneficence, the ‘low risks, high benefits argument’ is:

P1 Citizens have a moral duty to benefit others.
P2 The risks of developing and using certain forms of medical AI are reasonable.
P3 The development and use of such medical AI is beneficial for society.
P4 The development and improvement of medical AI requires the use of EHRs.
C Citizens have a moral duty to support medical AI developments that can be expected to benefit society by sharing EHRs.

Why the risks of medical AI development and the use of such AI can be unreasonably high

For the analysis of premise P2, it is vital to acknowledge the two interrelated notions of risk: as the quantifiable probability of a harm being done and as the quality of a harm. As mentioned previously, some works focus exclusively on the probability component. Such works encounter two problems. Firstly, they often do not reveal exactly which social groups and which individuals are at which risk of being harmed. People whose EHRs are stored in multiple databases are statistically more likely to become victims of privacy breaches than people whose data are stored in one database. Data that are protected by insufficient security standards are more vulnerable than data that are well protected, and breached EHRs that have not been anonymised can cause harm more easily than anonymised data [54]. The same is true for the use of medical AI. When an AI is trained and optimised with data from only one socioeconomic group or ethnicity, people outside this group are more likely to be harmed by biases. Secondly, even if these problems were solved and more accurate risk calculations were available, those calculations cannot tell whether the quality of a given harm is reasonable to bear for every individual, for certain social groups, or for society. Therefore, I think it is worthwhile to pay more attention to the quality of the harms that may occur in the EHR sharing process or through the use of medical AI.

The individual risks entailed by EHR sharing are hacker attacks, leaks, and instances of data misuse that can harm individuals in multiple ways. Patients can experience psychological stress when their health-related information is leaked and becomes public [55]. They can suffer economic losses when their data are hacked and used for blackmail. They can suffer a loss of autonomy when their data are misused to support political causes or social changes without their consent [26].
Patients who do not believe these risks to be reasonable may develop trust issues regarding all medical procedures in which EHRs are generated. A real-life example of this apprehension is the failure of the NHS project ‘care.data’, which tried to extract GP surgery data into a central database that was supposed to support research, public health planning, and commercial use. Patients were allowed to opt out of this program, but information concerning how to do so was not communicated transparently. This situation caused a significant number of patients to avoid seeking medical help and to stop disclosing relevant medical information to their physicians. Eventually, the project was paused a year after its launch in 2013 due to massive protests [53, 56].

As the NHS case shows, trust and confidentiality issues are major social risks. Social risks can manifest in a decay of solidarity, instances of discrimination, and even tendencies towards human rights violations. The effects of a decay in solidarity can occur when EHRs are linked to other personal profiles. For example, insurance companies can use health data to individualise risk categories and calculate premiums [57,58,59,60], and drug companies can target patient data to drive up prices and prescriptions [61]. Governments and health insurance companies can also discriminate against other market agents by granting EHR access to exclusive business partners without communicating their cooperation transparently and without enabling citizens to withdraw from data sharing policies. For example, in 2015, the British NHS granted Google’s DeepMind exclusive access to 1.6 million health records, and in 2018, the US health care provider Ascension made the non-anonymised health data of more than 50 million individuals available to Google [62]. Another type of discrimination can occur in the employment context. Leaked health information concerning employees can put employers in a position to build up discriminatory health-related hiring barriers [63]. As the two law experts Price and Cohen put it, even though there are a number of laws in the US and Europe that prohibit discriminatory hiring practices (e.g., the Americans with Disabilities Act), “they can be hard to enforce because it is often hard to know when discrimination has occurred” [64].

When a collection of medical data is comprehensive enough to include a large portion of a society and when it contains sensitive information, there is also a real risk of human rights violations. Data collections can be used to identify and discriminate against social groups with certain medical or genetic characteristics that are viewed as undesirable or deleterious by political authorities. As the historians Seltzer and Anderson have shown, the data items most commonly used to target populations in the past included ethnicity, religion, country of birth, and native language [65]. Since macro-political shifts and revolutions are rarely predictable events, comprehensive health databases that are collected in democracies today might support totalitarian regimes in committing human rights violations in the future [52].

There are also harms related to the use of medical AI. On a personal level, people can be harmed by erroneous medical AI. An AI causes errors if it interprets data incorrectly, generates false outputs, makes harmful therapeutic suggestions, or physically harms people due to a malfunction.
An example of such problems is the IBM supercomputer Watson, which was reported to suggest unsafe and incorrect cancer treatments in a cancer research trial [66]. Errors can also be a product of human intentions. These so-called adversarial attacks can be introduced into any learning algorithm [67]. For example, it is possible to manipulate medical images with pixel noise in such a way that the manipulation is invisible to the human eye but causes image recognition software to misdiagnose the images [68]. In addition to intended and unintended technical errors, patients may face trust issues regarding changes in patient–physician relationships. These changes can be caused by the concern that caregivers might be less skilled in contexts where key medical competences are performed by medical AI in the future [69, 70]. The fear of social isolation caused by the replacement of human interactions with socially engaging AI (e.g., chatbots) is also part of this picture. Blasimme and Vayena summarise these problems as follows: “exclusive reliance on algorithms may rule out that necessary degree of flexibility that allows healthcare operators to calibrate objective criteria with the reality of each individual case” [71].

The use of medical AI also entails the social risk of a decay of solidarity. If medical AI is going to be as effective as predicted, patients may feel that, in many ways, their lives depend on the tech companies that own the best medical AI [72]. If those developers of medical AI increase prices or block the transfer of knowledge, solidarity structures might break apart [73]. Finally, the possibility of using medical AI to connect anonymised health data with non-anonymised datasets may facilitate human rights violations [14,15,16]. For example, Wang and Kosinski built a deep neural network that used a database of 35,326 facial images of self-reported homosexual men and women living in the US to learn how to recognise facial expressions that are characteristic of gay men and women [74]. It is not difficult to imagine situations in which tools that can recognise correlations among health information, social features, and a person’s appearance could be used by political authorities to target citizens and undermine democratic structures [75].

Despite all these potential harms, advocates of P2 may still make the pragmatic argument that unwillingness to engage in medical data research and medical AI development projects can produce a selection and a consent bias that reduce the quality of medical AI. That is, in a hypothetical comparison between a world with a civic duty to share EHRs and a world without such a duty, the first one would have relatively fewer biases and, thus, be more beneficial [24]. Given the tremendous number of other factors that also affect the quality of medical AI outputs, however, this argument would be short-sighted at best. Erroneous and, thus, potentially discriminatory and harmful AI outputs may result from errors in data transfer, from incorrectly coded diagnoses and therapies, or from incomplete and insincere patient testimonies [63]. In addition, there are other types of biases that cannot be attributed to citizens’ willingness to support medical AI development [42, 43]. There can be capture bias when training data are preselected according to the preferences of users, physicians, or developers. Those preferences may ignore the needs of certain social groups.
There can also be a negative or a positive set bias when the control data are selected so poorly that the medical AI produces false negatives or false positives for certain groups of people. There can be an automation bias in the process of AI use, which is caused by the empirical fact that caregivers are less likely to question algorithmically generated diagnostic results [76, 77]. In addition, it is difficult or even impossible for patients and health care professionals to understand how complex algorithms work, whether an output is erroneous, or how a given error came about. That is why non-explainable algorithms are also known as black box algorithms [55, 78]. All these factors can lead to poorer outcomes for some individuals and social groups compared to a world without medical AI [79].

Overall, there is a multitude of potential risks that are important in the medical AI development process and in the use of such AI. AI projects that can be considered low risk and high benefit and that, therefore, might inspire a civic duty to share EHRs need to prove their low-risk profile by (i) compiling databases and collecting datasets in a way that makes re-identification improbable, (ii) constantly updating cyber security standards, (iii) not accessing data items that are not necessary for the development process, and (iv) not developing medical AI that can easily be used to discriminate against or endanger individuals or social groups. However, because these characteristics apply to only very few medical AI development projects, the ‘low risks, high benefits argument’ has very limited potential to establish a civic duty that obviates IC requirements.

Why medical AI is not necessarily beneficial for society

It is doubtful that the benefits of medical AI can be enjoyed by every citizen. As with many other medical innovations, it is more likely that, in many countries, market access barriers will benefit only rich and privileged patients and those who sell medical AI [80]. Some authors try to counter this view by introducing the economic trickle-down effect to the public health sphere. Ballantyne and Schaefer, for example, argue that new medical innovations might be very expensive at first and only accessible to the wealthy. However, just as the wealth of the rich will trickle down through all social classes, eventually the positive effects of innovative medicine will benefit everyone in a society [22]. Apart from the fact that the trickle-down effect is more of an economic thesis than an empirically proven phenomenon [81], some critics, like Benke and Benke, believe that medical AI innovations contribute to the issue of unequal access to health care in most countries [67, 68]. Advocates of premise P3 may still reply that even if economic or location-based disadvantages might exclude certain people from innovative health care techniques, the development of new and innovative health care technology, including medical AI, will nonetheless consolidate existing medical knowledge [21] and increase the common good [22]. This counterargument is problematic for the same reasons. Mechanisms such as paywalls and intellectual property rights can shift benefits into private spheres, and digital and medical literacy issues as well as discriminatory infrastructure can deny certain social groups access to medical knowledge [64].
In light of these arguments, a duty to share EHRs that applies equally to all citizens and that may even bypass IC requirements appears to be rather unfair in all cases in which citizens do not enjoy equal access rights to medical AI products [65]. Therefore, it can be concluded that P3 is justified only if (v) the medical AI in question will most likely be beneficial to all citizens despite socioeconomic differences, (vi) the data use and the development process are communicated transparently, and (vii) the dataset cannot be reused for other purposes without IC.

The ‘property rights argument’

It is highly controversial in European and US law whether EHRs can or should be seen as something a person can have property rights over [82, 84]. Although I will not discuss this complex legal matter here, I will nonetheless refer to the following rationale as the ‘property rights argument’. I do so because the scholars presented in this section focus on various concepts of ownership and the exclusive claims individuals, institutions, and companies should or should not have on EHRs. Most of them do so from an ethical perspective and without reference to established laws.

The ‘property rights argument’ can take two forms. Firstly, in the spirit of John Locke’s concept of property, it is argued that institutions co-create the value of health data collections by collecting, digitising, and organising data in health care and research processes. Without tremendous administrative and financial investments, there would not be anything valuable to which exclusive ownership claims could be made. Therefore, public institutions that generate health data collections are entitled to use that data in the public’s best interest. Citizens might have a legitimate interest in privacy, but they cannot claim exclusive rights on data about their health [27, 72, 73]. This narrative is often accompanied by the taxpayer analogy [30, 35, 75, 83], which argues as follows: in most countries, citizens whose earnings are above a minimal threshold must pay income taxes. This tax money can be seen as a contribution to the common good. Citizens have a duty to pay their taxes, and the taxes are used in a way governmental institutions see fit. Even though the money can be used on projects that certain taxpayers do not wish to support (e.g., the military budget, abortion clinics, churches), paying taxes is considered a fair social practice that involves everyone doing her or his part. Transferred to the context of medical research, the analogy suggests that EHRs should be seen as a fair contribution to society that should be collected and treated in the way that a given governmental institution sees fit.

Cohen presents an alternative, second approach: by recalling old cases from their work, health care professionals such as physicians and nurses collect “little data” concerning every patient they have encountered, and they use this information to improve their skills. This “little data” collection is widely considered the property of the health care professional and not of the individual patients to whom the data pertain. Analogously, there is no reason to think otherwise when it comes to big data in health care. In both cases, the data are generated as a by-product of health care, and as such, they belong to the system or agent that provides such care [19]. Again, none of the scholars cited mentions medical AI specifically. Nevertheless, as part of medical research, medical AI can be considered here.
The ‘property rights argument’ can be summarised as follows:

P1 The value of many health data collections is generated by health care and research processes that are administered and financed by governmental institutions.
P2 By engaging in the process of health data creation, agents gain rights over these data.
C Governmental institutions have the right to use health data collections in the best interest of all citizens. That may include the development of medical AI.

Why citizens do not lose their rights to EHRs

I see at least five issues with the ‘property rights argument’. Firstly, there is the issue of the co-production of EHRs. Montgomery argues that the raw data that EHRs are made of are not generated by the health care system but by the patient’s body [85]. Montgomery thus concludes that the role of individuals as co-producers of valuable data and the importance of EHRs to their autonomy make it difficult to grant others exclusive rights over EHRs and restrict IC rights. Secondly, there is the socio-economic issue that, without further adjustments, the ‘property rights argument’ would grant not only governmental institutions but also private companies the right to own and use health data collections in a way that they believe would serve citizens’ best interests. Big tech corporations such as Google or 23andMe collect and manage health-related data from corporate trial series and consumer experiences at great financial and administrative expense. They can use these data collections for medical AI development, and they can sincerely believe that their enterprise is in the best interest of all. One way to avoid this problem, I think, would be to emphasise the data subjects’ rights as citizens to political participation. If data subjects were able to engage in a democratic process that determines what a beneficial medical AI should look like, this process could grant, and revoke, a social license to operate for governmental and non-governmental medical AI development projects [53, 86]. Thirdly, scholars have discussed the pragmatic issue that governmental ownership claims to EHRs might be perceived as a violation of confidentiality [53, 56]. As has been shown in the ‘care.data’ case, confidentiality and trust are paramount in health care contexts. If citizens in their roles as patients, consumers, and research subjects feel as if they do not have a say in how their health data are used, they might avoid co-producing those data. Fourthly, there is an ontological issue with the taxpayer analogy. Ploug argues that while the misuse of tax money does not (directly) affect taxpayers’ ability to shape their personality, the misuse of health data can violate citizens’ autonomy. Additionally, in cases where tax money has been misused, it is often possible to pay back the money and compensate for damages. Leaked information concerning a real person’s health, however, cannot be taken back [63]. Cohen’s comparison between the memory of professional caregivers and health databases can be criticised in the same way. Physicians cannot tell if people they encounter in their daily lives match cases from textbooks or colleagues’ reports. In contrast to the data in the mind of a person, EHRs can be accessed by multiple agents and used outside the confidential patient–caregiver relationship. In addition, certain types of medical AI and certain institutions are able to combine different datasets and re-identify data subjects. Human physicians cannot do that.
Fifthly and finally, I would like to add the ethical issue that malicious governmental institutions in the future might abuse medical databases that are created for a beneficial purpose today. This issue becomes particularly important once sharing EHRs is understood as a fair civic contribution to the common good. If all citizens in a given society accept the taxpayer analogy, it would be logical to create health data collections that take the data of all citizens into account, an effort called for by Ballantyne [27]. As I have discussed before, this policy can promote human rights violations as soon as radical institutions emerge that can access the data or enable medical AI to do so. Given these criticisms, the ‘property rights argument’ lacks the socio-economic, pragmatic, ontological, and ethical strength to justify a civic duty to share EHRs for medical AI development.

Medical AI and the civic responsibility to share EHRs

I think that a good alternative for balancing the diverse interests and values in this matter is to abandon the moral duty approach in favour of civic responsibility. Why is this change in the normative structure attractive? A responsibility approach can identify different context-sensitive courses of action for different agents to accomplish a shared objective or live up to a shared value. It enables citizens to participate in the identification of shared values, shared objectives, and correspondingly adequate actions by attributing and accepting responsibility in a reciprocal and real-life process [87]. In contrast, the duty-centred arguments presented acknowledge a comparatively small number of actions as appropriate (‘share your EHR in case X’), and they make judgements concerning the right course of action from a position that is either concerned with the risk-benefit analysis and the presumed preference of society, or from a position that tries to represent a somewhat impartial view. Such a position can anticipate values concerning medical AI, the state of future health care, privacy, autonomy, and the common good that might be shared in pluralistic societies. However, it cannot mimic or replicate the real-life process of forming, identifying, and legitimising shared values [88].

To illustrate the significance of the shift from a civic duty to a civic responsibility to share EHRs to support medical AI, it is helpful to imagine a society in which such a civic duty existed. In this society, any citizen who does not want to act immorally must consent to her or his EHRs being stored and processed for medical research purposes, including medical AI development. Any medical AI project, like the already mentioned CheXNet, that wants to access those data to train its AI needs to ask permission from a given authority. This authority, be it an ethics committee or a different institution, faces the legitimacy problem described above. It can guess whether the data subjects collectively would like to support the CheXNet project, or it can declare that the CheXNet AI would be in the interest of society. Either way, the authority’s decisions do not have to match the data subjects’ values or their preferences. The gap between the data subjects’ actual values and data access policies manifests itself when people are harmed because of those policies. That might be the case when AI projects like Packhäuser and colleagues’ deep learning algorithm, which is programmed to re-identify anonymised chest X-ray images, are also granted access to the CheXNet training data and produce privacy breaches.
Such cases are challenging because citizens cannot properly be addressed as responsible agents here. They are not sufficiently responsible for the harms that might be caused by the privacy breaches for two reasons. Firstly, they are not in a position in which they co-create the structural conditions of medical research and medical AI development. Secondly, they are also not in a position in which they can form shared ideas of eligible and undesirable research projects. Because they have already done their duty and obeyed the moral imperative to share their EHRs, it is hardly possible to motivate them to take on responsibility for the consequences of EHR access and processing policies they had no say in.

Given these problems, I think a responsibility approach is more promising. Under a responsibility approach, citizens can engage in a discourse about which medical AI development projects should be supported and which should not. The results of such discourses can indicate a responsibility to support those development projects, and they can also motivate citizens to take on this responsibility by providing shared, and thus understandable, reasons. People in different social positions can live up to their responsibility by performing different actions. In a pandemic, for example, patients can act responsibly by sharing EHRs with AI development projects that model the future course of infection, and they can also act responsibly by not sharing their data with AI projects that can be considered unimportant or even dangerous in a pandemic. Scientists can be responsible by engaging only in medical AI projects whose characteristics conform to shared values, and they can act responsibly by engaging in research that hinders the development of unsustainable, high-risk, and/or redundant AI technologies. Companies can act responsibly by respecting data protection laws, and members of governmental institutions can act responsibly by establishing and maintaining safeguards for EHR transfer, access, and reuse. Companies and governments can also be responsible by engaging in public dialogue and promoting important medical AI developments.

The responsibility approach can be implemented with the help of modern IC models. Modern IC models such as the ‘dynamic consent model’ [89], the ‘meta-consent model’ [90], or the ‘value-based consent model’ [91] enable citizens to manage their consent decisions for secondary data use. Citizens can use such models to enter into dialogue with researchers, read additional information about medical AI projects that might increase their awareness of the benefits and risks involved in a project, ask follow-up questions, weigh arguments, and give their consent. They can also use the models to revoke consent decisions. With every new research request sent to them, they gain the opportunity to take on responsibility for the state of medical research, including medical AI, and also for the positive and negative consequences of that development.
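To make the mechanics of such models concrete, here is a minimal sketch of a dynamic-consent registry in Python. It is illustrative only: the class and method names are my own invention, not taken from the cited consent frameworks, and a production system would need authentication, persistence, and auditing.

from dataclasses import dataclass, field
from datetime import datetime
from typing import Dict, List

@dataclass
class ConsentDecision:
    project_id: str        # e.g., a registered medical AI development project
    granted: bool          # current consent status
    history: List[str] = field(default_factory=list)  # audit trail of changes

class DynamicConsentRegistry:
    """Lets a data subject grant, revoke, and review consent per project."""

    def __init__(self, subject_id: str):
        self.subject_id = subject_id
        self.decisions: Dict[str, ConsentDecision] = {}

    def respond(self, project_id: str, grant: bool) -> None:
        """Record a (possibly revised) consent decision for one project."""
        decision = self.decisions.setdefault(
            project_id, ConsentDecision(project_id, granted=False)
        )
        decision.granted = grant
        decision.history.append(
            f"{datetime.utcnow().isoformat()} -> {'granted' if grant else 'revoked'}"
        )

    def may_access(self, project_id: str) -> bool:
        """A data custodian checks this before releasing EHRs to a project."""
        decision = self.decisions.get(project_id)
        return decision is not None and decision.granted

# Example: consent is given to a pandemic-modelling AI and later revoked.
registry = DynamicConsentRegistry("subject-001")
registry.respond("pandemic-model-ai", grant=True)
assert registry.may_access("pandemic-model-ai")
registry.respond("pandemic-model-ai", grant=False)   # revocation stays possible
assert not registry.may_access("pandemic-model-ai")

The design choice that matters here is that consent is a per-project, revocable state with a history, rather than a one-off signature, which is precisely what distinguishes these models from classical broad consent.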
2022-12-14T00:00:00
2022/12/14
https://bmcmedethics.biomedcentral.com/articles/10.1186/s12910-022-00871-z
[ { "date": "2022/12/01", "position": 73, "query": "AI economic disruption" } ]
Bumps in the Energy Transition
Bumps in the Energy Transition
https://www.imf.org
[]
AI Preparedness Index ... Lack of consensus on how fast the transition should and can take place, in part because of its potential economic disruptions ...
Despite a growing global consensus, obstacles to reducing net carbon emissions to zero are stark.

The global disruptions in energy markets and the war in Ukraine have added impetus to the push for renewable energy and the drive toward net-zero carbon emissions. Yet, even as the global consensus around the energy transition becomes stronger, the challenges to that transition are also becoming clearer. In addition to the uncertain pace of technological development and deployment, four issues in particular stand out: the return of energy security as a prime requirement for countries; lack of consensus on how fast the transition should and can take place, in part because of its potential economic disruptions; a sharpening divide between advanced and developing countries on priorities in the transition; and obstacles to expanding mining and building supply chains for the minerals needed for the net-zero objective.

The need for energy security was a concern that had largely faded over the past several years. The energy shock, the economic hardship that ensued, skyrocketing energy prices that could not have been imagined 18 months ago, and geopolitical conflicts—all these have combined to force many governments to reassess strategies. This reassessment recognizes that the energy transition needs to be grounded in energy security—that is, adequate and reasonably priced supplies—to ensure public support and avoid severe economic dislocations, with the dangerous political consequences that can follow.

The current global energy crisis did not start with the February 2022 invasion of Ukraine. Rather, it began in late summer of 2021. The economic rebound that came with the ending of the global COVID-19 lockdowns fired up global energy consumption. Oil, natural gas, and coal markets all tightened in the latter part of 2021, sending prices up as demand pushed against what became apparent—insufficient supply. It was in November 2021, three months before the invasion, that the US government announced the first release from its strategic petroleum reserve.

What has become clear is that “preemptive underinvestment” has constrained the development of adequate new oil and gas resources. There are a number of reasons for this underinvestment—government policies and regulations; environmental, social, and governance (ESG) considerations by investors; poor returns caused by two price collapses in seven years; and uncertainty about future demand. The shortfall in investment was “preemptive” because of what was mistakenly assumed—that sufficient alternatives to oil and gas would already be in place at scale by now. Some have described what is currently unfolding as the “first energy crisis of the energy transition”—a mismatch between supply and demand. If it does prove to be only the first, future such crises will create uncertainty, cause major economic problems, and undermine public support for the energy transition.

Energy transitions throughout history

The first energy transition was from wood to coal in the 18th century. Although coal was used as early as the 13th century in Britain because the cost of wood had gone up, it emerged as a distinctive industrial fuel only in January 1709—when English metalworker Abraham Darby proved that coal was, as he said, “a more effective means for iron production” than wood. He noted, though, that “there are many who doubt me foolhardy.” Yet energy transitions have hardly been swift.
Although the 19th century is known as the “century of coal,” that century actually still ran, in the words of energy scholar Vaclav Smil, on “wood, charcoal, and coal residues.” It was not until 1900 that coal supplied half the world’s energy demand. Oil was discovered in the United States in 1859. More than half a century later, on the eve of World War I, then First Lord of the Admiralty Winston Churchill directed the conversion of the Royal Navy from coal to oil for technological reasons—speed, flexibility, ease of refueling, and the elimination of crews shoveling coal. But it took until the 1960s, a century after it was discovered, for oil to overtake coal as the world’s number one energy source.

Until now, energy transitions have unfolded over long periods of time (see “Picture This” in this issue of F&D). They also have really been energy additions rather than transitions. In the six decades since oil overtook coal as the world’s number one energy source, the global consumption of coal has almost tripled. The current climate-driven energy transition is meant to be achieved quickly—in little more than a quarter century. And it is meant to be transformative. Coal is to disappear, and the European Union anticipates that hydrogen will provide 20 to 25 percent of its total energy by 2050. While it is the focus of increasingly intense activity and ambition, hydrogen provides less than 2 percent today.

Speed of the transition

If energy security is the first challenge of the transition, timing is the second. How fast should it—and can it—proceed? There is much pressure to accelerate a significant part of the 2050 carbon emission targets toward 2030. But it sometimes seems that the scale of what is being attempted is underestimated. In my book The New Map (2021), I looked at the previous energy transitions, and it is clear that this one is like no other. All previous transitions were driven largely by economic and technological advantages—not by policy, which is the primary driver this time. Each of the preceding transitions unfolded over a century or more, and none were the type of transition currently envisioned. The objective of this transition is not just to bring on new energy sources, but to entirely change the energy foundations of what today is a $100 trillion global economy—and do so in little more than a quarter century. It is a very big ambition, and nothing on this scale has ever been attempted up to now. Some have warned that because the scale of the transition is so large and far-reaching, the macroeconomic impact needs deeper analysis.
The economist Jean Pisani-Ferry, cofounder of Bruegel, Europe’s leading economic think tank, has observed that accelerating the targets for net carbon emission reductions too aggressively could create much larger economic disruptions than generally anticipated—what he called “an adverse supply shock—very much like the shocks of the 1970s.” Such a transition, Pisani-Ferry presciently wrote in 2021, just before the current energy crisis began, is “unlikely to be benign and policymakers should get ready for tough choices.” He subsequently added, in 2022: “Climate action has become a major macroeconomic issue, but the macroeconomics of climate action are far from the level of rigor and precision that is now necessary to provide a sound basis for public discussions and to guide policymakers adequately. For understandable reasons, advocacy has too often taken precedence over analysis. But at this stage of the discussion, complacent scenarios have become counterproductive. The policy conversation now needs methodical, peer-examined assessments of the potential costs and benefits of alternative plans for action.”

North-South divide

The third challenge is the emergence of a new North-South divide—a sharpening difference between developed and developing countries on how the transition should proceed. The original North-South divide of the 1970s was a collision between developed and developing nations over the distribution of wealth and, in particular, the pricing of commodities and raw materials. That division faded with globalization and advances in technology, as reflected in the shift in nomenclature to “emerging market” nations. The new North-South divide reflects disagreement over climate and transition policies, their impact on development, and who is responsible for cumulative and new emissions and who pays. The global commodity shocks triggered by the war in Ukraine and the interest rate increases and currency devaluations that have ensued have only deepened the pressures on developing countries.

For developing countries, what seems a singular emphasis on reducing emissions needs to be balanced against other urgent priorities—health, poverty, and economic growth. Billions of people still cook with wood and waste, resulting in indoor pollution and poor health. Many of these countries are looking to increased use of hydrocarbons as integral to raising standards of living. As former Indian Petroleum Minister Dharmendra Pradhan put it, there are multiple paths for energy transitions. India, while making a big commitment to renewables, is also building a $60 billion natural gas distribution system. Developing countries are seeking to initiate and expand the use of natural gas to reduce indoor pollution, promote economic development and job creation, and, in many cases, eliminate the emissions and pollution that come from burning coal and biomass. There may be a tendency in countries with advanced economies to wave away this divide, but the reality was sharply captured in September 2022, when the European Parliament voted, in an unusual expression of extraterritoriality, to condemn a proposed oil pipeline from Uganda through Tanzania to the Indian Ocean.
The parliament denounced the project for what it said would be the pipeline’s detrimental impact on climate, environment, and “human rights.” The parliament is headquartered in France and Belgium, where the per capita income is about 20 times greater than in Uganda. Not unexpectedly, the condemnation set off a furious reaction in Uganda, where the pipeline is viewed as crucial to economic development. The deputy speaker of the parliament denounced the European resolution as “the highest level of neocolonialism and imperialism against the sovereignty of Uganda and Tanzania.” The energy minister added, “Africa has been green, but people are cutting down trees because they are poor.” The national student union in Uganda took to the streets to demonstrate against the European Parliament, with one of the student leaders saying, “The Europeans have no moral superiority.” Whatever the specific issues, it’s hard to deny the sharp difference in perspectives.

The split is particularly evident when it comes to finance. Western banks and multilateral financial institutions have shut off finance for pipelines as well as for ports and other infrastructure related to hydrocarbon development. One African energy minister summed up the impact of the denial of access to finance as akin to “removing the ladder and asking us to jump or fly.” Finding a balance between the perspectives of the developing world, where 80 percent of the globe’s population live, and Western Europe and North America will take on increasing urgency.

Finance shut off

The fourth challenge will be ensuring new supply chains for net zero. The passage in the United States of the Inflation Reduction Act, with its massive incentives and subsidies for renewable sources of energy; the REPowerEU plan in Europe; and similar initiatives elsewhere will accelerate the demand for the minerals that are the building blocks for renewable energy, which requires wind turbines, electric vehicles, and solar panels, among other things. A host of organizations—the IMF, the World Bank, the International Energy Agency (IEA), the US government, the European Union, Japan—have all issued studies on the urgency of those supply chains. The IEA projects that the world economy will be moving from “a fuel intensive to a mineral intensive energy system” that will “supercharge demand for critical minerals.” In The New Map, I summarize this as the move from “Big Oil” to “Big Shovels.”

S&P Global, the financial and analytical firm of which I am vice chairman, has sought to build upon those studies and quantify what that “supercharged demand” for minerals might be. S&P Global’s study “The Future of Copper: Will the Looming Supply Gap Short-Circuit the Energy Transition?” (2022) focused on that metal because the thrust of the energy transition is toward electrification, and copper is “the metal of electrification.” The study took the types of year 2050 targets advanced by the US administration and the EU and assessed what realizing those targets would require for specific applications—for instance, the different components of an offshore wind system or electric vehicles. An electric car, for example, will require at least two-and-a-half times more copper than a vehicle with a conventional internal combustion engine. The conclusion of this analysis is that copper demand would have to double by the mid-2030s to achieve the 2050 goals. The choke point is supply.
At the current rate of supply growth—which encompasses new mines, mine expansion and greater efficiency, and recycling, as well as substitution—the amount of copper available will be significantly smaller than the copper supply requirements. For instance, the IEA estimates that it takes 16 years from discovery to first production for a new mine. Some mining companies say more than 20 years. Permitting and environmental issues are major constraints around the world. Also, copper production is more concentrated than, say, oil. Three countries produced 40 percent of world oil in 2021—the United States, Saudi Arabia, and Russia. Just two countries produced 38 percent of copper—Chile and Peru.

Copper is crucial

Copper prices have fallen about 20 percent from their high point this year. That reflects the metal’s oft-noted role as “Dr. Copper”—its price as a predictor of economic slowdowns and recessions. And indeed, the IMF sees a sharp slowdown in global growth in 2022 and projects further slowing in 2023 and potential recession—as do many other forecasters. But, post-recession, the coming flood of demand from the energy transition will cause copper prices to rise again. As has been the historical pattern, the surge in demand and prices will likely create new tensions between resource-holding countries and mining companies, which in turn will affect the rate of investment. Moreover, as the race to net zero intensifies, there is a risk that the competition for minerals will become caught up in what has become known as the “great power competition” between China and the United States. S&P Global’s copper study is meant to contribute to a deeper analysis of the physical challenges to the energy transition. The wind industry has what a 12th century English champion of windmills called “the free benefit of wind.” And solar has the free benefit of the sun. But the physical inputs that go into harnessing wind and solar power are not costless. The effort to push a significant part of the 2050 goals toward 2030 will likely have to contend with significant physical constraints.

These four challenges—energy security, macroeconomic impacts, the North-South divide, and minerals—will each have significant effects on how the energy transition unfolds. None are easy to grapple with—and they will interact with each other, which will compound their impacts. But recognizing them will promote deeper understanding of the issues and requirements in seeking to achieve the energy transition.

Podcast: The energy crises of the early 70s, the 80s, and the early 2000s all had their challenges, but none were so intertwined with other emergencies like a war in Europe, climate change, and a global pandemic.
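To give a feel for the arithmetic behind the copper analysis above, here is a back-of-the-envelope sketch in Python. Apart from the two-and-a-half-times multiplier quoted in the article, every figure is an assumption chosen for illustration, not a number from the S&P Global study.

# Back-of-the-envelope copper arithmetic; all inputs are illustrative guesses.
ICE_COPPER_KG = 23             # assumed copper content of a conventional car
EV_MULTIPLIER = 2.5            # "at least two-and-a-half times more copper"
EV_COPPER_KG = ICE_COPPER_KG * EV_MULTIPLIER

ANNUAL_EV_SALES = 40_000_000   # hypothetical global EV sales in the mid-2030s
extra_tonnes = ANNUAL_EV_SALES * (EV_COPPER_KG - ICE_COPPER_KG) / 1000

print(f"Copper per EV: {EV_COPPER_KG:.0f} kg vs {ICE_COPPER_KG} kg for an ICE car")
print(f"Incremental demand from EVs alone: {extra_tonnes / 1e6:.1f} million tonnes/year")

Even with these rough inputs, electric vehicles by themselves add on the order of a million tonnes of annual copper demand, and vehicles are only one of the applications the study tallies.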
2022-12-01T00:00:00
https://www.imf.org/en/Publications/fandd/issues/2022/12/bumps-in-the-energy-transition-yergin
[ { "date": "2022/12/01", "position": 74, "query": "AI economic disruption" } ]
OPINION : AI ART will make artists lose their jobs-the work ...
The heart of the internet
https://www.reddit.com
[]
That's just economic progress though. The same could be ... AI art is a huge disruption in the sector, and hundred of thousands of jobs will disappear.
2022-12-01T00:00:00
https://www.reddit.com/r/singularity/comments/zp4nob/opinion_ai_art_will_make_artists_lose_their/
[ { "date": "2022/12/01", "position": 81, "query": "AI economic disruption" }, { "date": "2022/12/01", "position": 46, "query": "generative AI jobs" } ]
Exploring the Potential of AI in Government Decision-Making
Exploring the Potential of AI in Government Decision-Making
https://www.albany.edu
[]
A study aimed at answering how AI techniques can be used to boost government decision-making by gearing public spending toward increasing GDP, decreasing ...
By Michael Parker

ALBANY, N.Y. (Dec. 1, 2022) — Artificial intelligence (AI) has become an important tool for governments around the world. However, it isn’t clear to what extent AI can improve decision-making, including within policy domains and government functions such as the public budgeting process. Now, researchers at UAlbany’s Center for Technology in Government (CTG UAlbany) and Universidad Autónoma del Estado de México have conducted a study aimed at answering how AI techniques can be used to boost government decision-making by gearing public spending toward increasing GDP, decreasing inflation, and reducing income inequality.

Publishing in a recent issue of Government Information Quarterly, CTG UAlbany Director J. Ramon Gil-Garcia and co-authors David Valle-Cruz and Vanessa Fernandez-Cortez have authored the article “From E-budgeting to smart budgeting: Exploring the potential of artificial intelligence in government decision-making for resource allocation.” To complete their analysis, the authors used a multilayer computer model and a multi-objective genetic algorithm to analyze World Bank Open Data from 1960 to 2019, including 217 countries.

[Photo caption: Center for Technology in Government Director J. Ramon Gil-Garcia]

“The advantages of implementing this type of decision support system in public expenditures allocation arise from the ability to process large amounts of data and to find patterns that are not easy to detect, which include multiple non-linear relationships,” said Gil-Garcia, who serves as a professor of Public Administration & Policy at UAlbany’s Rockefeller College of Public Affairs and Policy. “Some technical aspects of the expenditure allocation process could be improved with the help of these kinds of techniques. In addition, the results of the AI-based approach are consistent with the findings of the scientific literature on public budgets, using traditional statistical techniques.”

Indeed, AI has become an essential issue on the agenda of governments around the world due to its potential benefits and positive implications for efficiency, transparency, service quality, and public value. However, the AI black box and the lack of explainability of some deep learning techniques could result in a lack of trust, inequity, bias, the massive replacement of the workforce (particularly in routine tasks), and an increase in the digital divide. Despite the potential for positive or negative impacts in the use of AI in government, the authors believe it is clear that AI techniques could assist in decision-making by supporting public managers and government officials with simulations, new ideas, and innovative approaches to better understand data and the dynamics among multiple variables.

Technology use in government (digital government) has enabled public service delivery through the Internet; fostered the efficient capture, processing, and reporting of data; and improved decision-making. However, advances in smart technologies, better informed and connected citizens, and globally connected economies have created additional opportunities. The authors also found that governments have begun to take the concept of digital government to a new level as they realize the power of data and heuristic processing through artificial intelligence to improve their services, interact with citizens, develop policies, and implement solutions for the welfare of the community, becoming a smart government.
Smart government is considered one of the key trends that governments are following with the participation of the public and private sectors, as well as NGOs and members of civil society. It is boosted by technologies like the Internet, Big Data, open data, IoT, and AI, which bring the potential to improve planning and decision-making in governments.

“Comparing the AI results with what has already been done on budgeting, we learned that the most important aspects that can generate an efficient public budget refer to policies that improve GDP, income inequality, and inflation reduction,” said Gil-Garcia. “These aspects are based on strategies focusing on improving public debt, fostering education and economic saving, investing in the agricultural sector, and addressing the population growth and public health problems. According to our AI-based analysis approach, the three most essential budget expenses that could improve the GDP are public spending, agriculture, and subsidies.”

The findings were consistent with previous research indicating that agriculture and public spending have a significant and positive effect on GDP, and that health directly affects productivity. Gil-Garcia’s co-authors for the study were David Valle-Cruz and Vanessa Fernandez-Cortez, both of Universidad Autónoma del Estado de México.
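To illustrate the flavor of the method described above, here is a toy genetic search over budget shares in Python. The sectors, coefficients, and scalarised fitness function are invented for this sketch; the study’s actual model (a multilayer model combined with a genuinely multi-objective genetic algorithm over World Bank data) is far more elaborate.

# Illustrative toy only: a tiny genetic search over budget allocations.
import random

SECTORS = ["public_spending", "agriculture", "subsidies", "health", "education"]

def random_allocation():
    w = [random.random() for _ in SECTORS]
    s = sum(w)
    return [x / s for x in w]            # shares sum to 1

def fitness(alloc):
    # Hypothetical linear proxies: one for GDP growth, one for inequality.
    gdp_coef =        [0.9, 0.7, 0.5, 0.4, 0.6]
    inequality_coef = [0.2, -0.3, -0.1, -0.4, -0.5]
    gdp = sum(a * c for a, c in zip(alloc, gdp_coef))
    inequality = sum(a * c for a, c in zip(alloc, inequality_coef))
    # Scalarised trade-off; a real multi-objective GA would keep a Pareto front.
    return gdp - inequality

def crossover(a, b):
    child = [(x + y) / 2 for x, y in zip(a, b)]      # blend two parents
    s = sum(child)
    return [x / s for x in child]

def mutate(alloc, rate=0.1):
    alloc = [max(1e-6, x + random.uniform(-rate, rate)) for x in alloc]
    s = sum(alloc)
    return [x / s for x in alloc]        # renormalise after the perturbation

population = [random_allocation() for _ in range(50)]
for _ in range(200):                      # generations
    population.sort(key=fitness, reverse=True)
    parents = population[:10]             # elitist selection
    population = parents + [
        mutate(crossover(random.choice(parents), random.choice(parents)))
        for _ in range(40)
    ]

best = max(population, key=fitness)
print({s: round(w, 3) for s, w in zip(SECTORS, best)})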
2022-12-01T00:00:00
https://www.albany.edu/news-center/news/2022-exploring-potential-ai-government-decision-making
[ { "date": "2022/12/01", "position": 9, "query": "government AI workforce policy" } ]
What are AI-Proof Jobs? : r/OpenAI
The heart of the internet
https://www.reddit.com
[]
AI-proof jobs are positions that are likely to remain safe from automation and replacement by artificial intelligence. These are jobs that require a high level ...
I’m 18 and I want to pick a profession that will not be automated. Right now I am interested in business and/or computer science. Are those and other majors safe? Also, would studying AI be beneficial? Thanks, John
2022-12-01T00:00:00
https://www.reddit.com/r/OpenAI/comments/zf3aia/what_are_aiproof_jobs/
[ { "date": "2022/12/01", "position": 3, "query": "generative AI jobs" } ]
New ebook! | Andrew Ng
New ebook!
https://www.linkedin.com
[ "Andrew Ng" ]
This compiles my tips on how to learn technical skills, choose and build AI projects, find a job. Let me know what you think!
New ebook! How can you build a career in AI? This compiles my tips on how to: ∙ Learn technical skills 🎓 ∙ Choose and build AI projects 🧑💻 ∙ Find a job 🦾 Let me know what you think! https://lnkd.in/g6CeBvBh
2022-12-01T00:00:00
https://www.linkedin.com/posts/andrewyng_how-to-build-your-career-in-ai-ebook-andrew-activity-7009240109237641216-tnL3
[ { "date": "2022/12/01", "position": 29, "query": "generative AI jobs" } ]
Generative AI (1/2): the new wave of AI is coming
Generative AI (1/2): the new wave of AI is coming
https://medium.com
[ "Abel Samot" ]
This article focuses on explaining what Generative AI is, how the underlying algorithms work, and how it is currently used in the business world.
Generative AI (1/2): the new wave of AI is coming Abel Samot 12 min read · Dec 14, 2022

While everybody was focused on crypto and web3 during the last two years, behind the scenes something that might have an impact of the same magnitude on the web, and perhaps even more, was taking root: Generative AI. But it's during the past months that everything seemed to accelerate. It's like every hope we had for AI in the last 20 years has come 10x closer to reality in a matter of weeks. And it's really impressive! This article is the first of a series of two Medium posts regarding Generative AI. Today I'll focus on explaining what it is, how it works, how it emerged and what the underlying use cases could be. In a second article I will try to give my opinion on what the impact of this technology shift could really be, which industries will be the most affected, who might be the winners and what challenges companies building in this domain will face.

The hype around AI and machine learning isn't new at all. In 2012 Harvard Business Review called Data Scientist "The Sexiest Job of the 21st Century". And although I'm not 100% aligned with it (I believe that while there are many data scientists in the market, we don't have enough data engineers), it shows how AI is becoming more and more widespread every day in every industry. The thing is that until now, AI and machine learning were mainly used to predict things, categorize things and interact with the machine. But it never looked very smart. I mean, have you ever talked to Siri or interacted with a chatbot? Sometimes it surprises you with some very good and interesting answers, but we are far from what science fiction painted about what AI could do in the future. A good example of these limitations is Amazon's Alexa, which has been called a "spectacular failure".

However, as the year 2022 comes to an end, we have seen some incredible progress in Artificial Intelligence with tools and algorithms like Dall.e (a tool generating images from text prompts) or GPT-3 (an algorithm generating text) that will blow your mind. These developments allowed the emergence of a new tech branch that we call Generative AI or Generative tech. But what is Generative AI? Well, it's a subfield of AI focused on creating or imitating certain types of data, such as images, texts, and sounds. It's about going beyond prediction to actually allow computers to create things, things that didn't exist before. We are not talking about an AI that can do basic stuff like taking an order in a McDonald's; we are talking about algorithms that are able to collaborate with humans and even, in some cases, replace them in a creative process. AI that can write entire books by itself, AI that will soon be able to create movies, and much more. I believe that it could allow billions of workers to be better, faster, and much more effective in their jobs, and it might really be the AI shift that we have all been waiting for with apprehension. It is the next step in software and web evolution and will open so many doors for entrepreneurs.

It goes without saying that it wasn't long before VCs became interested in this sector, and in the past months we have seen startups like Stability or Jasper raising hundreds of millions of dollars in this area. It's going very fast and everybody seems to realize what a tremendous opportunity it could be for entrepreneurs. As you can see in the graph below, it's only been a month since the term "Generative AI" started trending on Google.
So if you haven't taken the time to get to know about it, it's not too late! Bear with me :)

Search for "Generative AI" on Google

But is it really new? Well, not really; at least not as new as this sudden hype could make us believe. Some form of Generative AI has been here for some time now. In music generation, for example, games like Consuming Shadows or No Man's Sky have been generating their own music for years using techniques like "procedural generation" and AI. It eases the work of the musicians, who just have to choose the tone and focus on the most important part of the composition without having to create hundreds of hours of basic video game soundtracks by hand.

What allowed generative AI to emerge is the amount of computing power now available at a lower cost. It allows the building of larger machine-learning models that can perform way better. That's what allowed GitHub, one year ago, to release its code assistant GitHub Copilot, which helps millions of developers code way faster. But generating code automatically doesn't appeal so much to the general public. And I believe it's the emergence of GPT-3 to generate text, plus Dall.e, Stability and their counterparts to generate images, as well as the sudden interest of VCs, that really created this hype! Now that everyone can witness the power of Generative AI, we can't stop the train anymore!

What is behind Generative AI? The next step of text generation: GPT

One of the most important revolutions allowing Generative Tech to emerge is a model you might have heard of: GPT. The third version of GPT, GPT-3, is the model that "changed everything". GPT-3 is a machine-learning model trained using internet data to generate any type of text. Put simply, it's an AI that is better at creating content with a language structure than anything we have seen. It can be used to answer any question, generate an essay, a blog post, and much more. In the example below, I asked ChatGPT (a chat model using GPT-3.5) to help me find Generative AI use cases for a startup idea. And as you can see, even if the algorithm was never trained for this specific purpose, it's quite impressive how good the outputs are!

Chat GPT prompt

I strongly advise you to test it on the Open AI website. But how does it work, and what makes GPT-3 so different from the other language models that we know? Well, to be fair, the principles behind GPT-3 are quite simple: it is a model that is trained to predict the next word in a sentence. You give it a set of words and ask it to predict the next. Then you give it the word that it should have predicted, and the algorithm automatically trains and improves. As a result, when a user provides text inputs, the system analyzes the language and uses a text predictor to create the most likely output. You can learn more about it in this video from Jay Alammar.

How GPT-3 training works. From Jay Alammar.

Pretty simple, right? But the real innovation of GPT-3 comes from two things: the architecture of the neural network used to make these predictions and the amount of data used to train it.

Transformer Neural Networks: GPT uses a very particular type of neural network called a "transformer". Hence the name: GPT stands for "Generative Pretrained Transformer". The transformer architecture is a type of neural network architecture that was introduced in a paper by researchers at Google in 2017. It is designed specifically for natural language processing tasks, such as language translation and text generation. The key innovation of the transformer architecture is its use of self-attention mechanisms, which allow the model to automatically learn the relationships between words in a sentence or paragraph, without the need for pre-specified rules or hand-crafted features. This allows the model to process natural language inputs in a more flexible and efficient way than traditional neural network architectures.
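To make the self-attention idea concrete before continuing, here is a minimal sketch in Python/NumPy: scaled dot-product attention with random, untrained weights and the causal mask that GPT-style decoders use so a position cannot look at later words. The dimensions are arbitrary, and real models add multiple attention heads, feed-forward layers and dozens of stacked blocks.

import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def causal_self_attention(X, Wq, Wk, Wv):
    # Project each token vector into query, key and value spaces.
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    # Scaled dot-product scores: how strongly each token attends to the others.
    scores = Q @ K.T / np.sqrt(K.shape[-1])
    # Causal mask: a position may attend only to itself and earlier positions,
    # which is what lets the model be trained to predict the *next* word.
    n = scores.shape[0]
    scores[np.triu_indices(n, k=1)] = -1e9
    # Each output vector is an attention-weighted mix of the value vectors.
    return softmax(scores) @ V

# Toy example: 4 tokens with 8-dimensional embeddings and random weights.
rng = np.random.default_rng(0)
d_model, seq_len = 8, 4
X = rng.normal(size=(seq_len, d_model))
Wq, Wk, Wv = (rng.normal(size=(d_model, d_model)) for _ in range(3))
print(causal_self_attention(X, Wq, Wk, Wv).shape)  # -> (4, 8)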
Amount of data: What makes GPT-3 unique compared to other models is the amount of data it was trained on, and particularly the number of parameters used to make the predictions. When you want to predict the price of a house, for example, you can use many parameters such as the number of rooms, the size, the location, etc. GPT-3 does the same thing to predict the next word in a sentence, except it uses 175 billion parameters. That's 10 times more than the other language models, and as there is a direct (almost linear) correlation between the number of parameters and the performance of a model (in that particular case, not in others), it allows much better predictions.

Number of parameters used in some of the most popular Natural Language models

Training an algorithm with so many parameters requires tremendous computational power and incredible engineering skills; that's why we are only now able to build this type of tool. And as computational power increases, we might be able to build even better algorithms!

Fine-tuning: But that's not all. Even if GPT-3 has already been trained, you can still improve its predictions and performance for specific use cases. To do it, you just have to prepare some relevant examples of what you would like and provide them to the algorithm. For instance, if you want to write poems in the style of a famous author, you can provide GPT-3 with a set of poems written by this author and then ask it to write a poem about whatever you want. It will automatically adapt to this style. It also works with my example of questions for a developer interview: I can provide the algorithm with a set of questions to make it better at generating others. We call it fine-tuning. And it's by using a careful fine-tuning approach on specific use cases that I believe entrepreneurs will be able to create incredible companies leveraging this type of algorithm.

I really want to stress this point: it's fine-tuning that will really differentiate companies using Generative AI. Indeed, as most of them use the same algorithms, available to everyone via a simple API, the barrier to entry is really low. I believe that creating a cool UX/UI on top of GPT-3 will not be enough to build a moat (that's basically what companies like Jasper and copy.ai are doing). But a careful fine-tuning approach that uses proprietary data to really make these algorithms the best for a specific use case might be! That's how Open AI created ChatGPT, for example: by using an innovative fine-tuning approach with human interventions on the brand-new GPT-3.5.

Generating Images with AI

We can now generate much more than just text. We can generate images! You might have heard about Dall.e or Stable Diffusion, which are some of the leaders in AI-generated images. They basically allow you to generate any image that you want with a simple prompt. Here is an example of completely new images generated by Dall.e with the prompt "A bowl of soup that is also a portal to another dimension, digital art". I strongly encourage you to test it in order to see how incredible these algorithms can be.
Go to "https://labs.openai.com/", type anything that you want, and it will generate an entirely new image that you can use without any copyright issues. You can also ask the algorithm to compute this image in a certain style like "painting", "digital art", "pixel art" and much more to obtain exactly what you want. Like GPT-3, DALL-E and its counterparts were trained using an insane amount of text data and images. The model views images as a set of pixels and numbers, and by using what we call CLIP embeddings, it is able to compute a set of specific images from a set of noisy images and a prompt. There, it becomes a little bit more complicated than GPT-3, so I won't elaborate on how exactly it works in this article, but if you want to understand more about it, you can watch these very good videos from Computerphile and AssemblyAI.

Graphic designers around the world are already using this type of model in order to enhance their imagination and create new designs, pieces of art, and much more. Even if this technology might seem more frightening than interesting to them, I believe that, above all, it will allow designers to be much more efficient and creative in their work. To be fair, it's far from perfect and not so easy to handle. I had to test dozens of commands before getting a very good result on Dall.e. For now, it feels like having an extremely talented and creative but dumb assistant. It can create incredible graphics and give you a lot of new ideas, but sometimes it produces some completely dumb things and you don't know why. But you can help it be smarter and get inspired by your work!

Thanks to open-source solutions like Stable Diffusion as well as fine-tuning, artists can now adapt these algorithms using their own art. It works the same way as GPT-3 fine-tuning: you can provide the algorithms with some examples of paintings, drawings, images of objects, etc., and they will learn from them in order to generate anything that you want in the same style. Here is a fun example of how to use Stable Diffusion to generate images based on your face. Feel free to look at this article from Tarunabh Dutta to understand how you can do it!

How Stable Diffusion can generate images based on your face, from Tarunabh Dutta

I believe it is the possibility of fine-tuning these algorithms with whatever they want that will really change the way artists work. It will empower game and graphic designers to produce much more content, faster and cheaper. And as it will be fast and cheap to test new things, it might enhance their creativity and push them to test entirely new things and explore new paths. It will also give anyone the opportunity to produce good graphical content and further feed the growth of the creator economy!

How is Generative AI used and how will it evolve?

Currently, Generative AI is mostly used for marketing, copywriting, and as an AI assistant. By using GPT-3, companies like Jasper and Copy.ai have built copywriting tools allowing you to write blog posts, tweets, marketing content, emails, and much more 10x faster than before. As a matter of fact, I used Jasper to help me write this article :) Marketers can also use image-generating tools to help them create great content for their Instagram or their next campaign! Tools like Lavender (to write e-mails quicker) or Andi (a smart assistant) can really simplify your life, and I believe that Generative AI will create a new kind of partnership between humans and computers.
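For readers who prefer code to web demos, here is a minimal sketch of running a Stable Diffusion model locally through Hugging Face's open-source diffusers library (assuming pip install diffusers transformers torch and a CUDA GPU; the model id, prompt and parameter values are illustrative choices, not anything prescribed by the article).

import torch
from diffusers import StableDiffusionPipeline

# Downloads the weights on first run; half precision keeps GPU memory modest.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
)
pipe = pipe.to("cuda")

prompt = "a bowl of soup that is a portal to another dimension, digital art"
image = pipe(prompt, num_inference_steps=30, guidance_scale=7.5).images[0]
image.save("soup_portal.png")

The fine-tuning workflows mentioned above (training on your own art or your own face) build on this same stack, via techniques such as DreamBooth or textual inversion.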
A new paradigm is emerging in which our computer is not just a tool anymore but also an assistant that will supercharge our productivity. And much more is coming. Generative AI can be used for even more impressive things than generating images or text:

3D generated content: Much more than images, some companies like Mirage or CSM are building engines that can generate entire 3D models and objects automatically, using the same kinds of principles. It opens the path to a new way of building video games and artificial worlds.

Video generation: Some companies like stability.ai are also working on building the same kind of technology to generate videos automatically. It could revolutionize the entire movie and short-form video industries. Other companies like Runway have built tools that let you edit videos like never before thanks to this technology.

Sound generation: Generative AI can also be used to automatically create music or voices. To understand how powerful it could be, don't hesitate to listen to this fake podcast between Joe Rogan and Steve Jobs that was created by an AI. It's quite impressive, but also quite scary if you ask me!

Code generation: As an engineer, it might be the most impressive use case I can think of when I hear about generative AI. Tools like GitHub Copilot can now generate code from a simple prompt describing what you want to see. And tools like debuild.app can literally generate entire web apps based on text (see the example below)! In my next article I'll speak about the possibilities I think this might open for developers, but also the limitations of this type of approach for writing code.

Example of what is possible with debuild.app

And that's only the tip of the iceberg. I believe that most of the use cases of generative AI have not been seen yet and that it will continue to develop at a very rapid pace.

Conclusion

Congrats if you have reached this part of the article! I hope it allowed you to understand the basics of Generative AI and grasp the opportunities it could open. Here, I mainly explained what this new wave of technology is and described how it could be used, without giving too much of my own opinion on the matter. In the next article, I'll put on my VC hat and try to predict what the future of Generative AI could look like and what types of actors could emerge, but also what the limitations of this technology will be, and the underlying challenges for entrepreneurs. Stay tuned, it's coming out soon :)
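Relating back to the code-generation use case above, here is a sketch of how developers called GPT-3 directly at the time this article was written, using OpenAI's Python client as it looked in late 2022 (the pre-1.0 openai package; the interface has since changed, and the model name and prompt are illustrative).

import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

response = openai.Completion.create(
    model="text-davinci-003",
    prompt="Write a Python function that checks whether a string is a palindrome.",
    max_tokens=150,
    temperature=0.2,  # a low temperature keeps generated code more deterministic
)
print(response.choices[0].text)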
2022-12-14T00:00:00
2022/12/14
https://medium.com/@abel_samot/generative-ai-1-2-the-new-wave-of-ai-is-coming-3876d34a8985
[ { "date": "2022/12/01", "position": 32, "query": "generative AI jobs" } ]
The Work of Art in the Age of Generative AI | by Jon Radoff
The Work of Art in the Age of Generative AI
https://medium.com
[ "Jon Radoff" ]
Fast food preparation, janitorial and many manufacturing jobs will follow. In an economy where, in December 2022, there are still nearly two job openings for ...
The Work of Art in the Age of Generative AI Jon Radoff 8 min read · Dec 19, 2022

We are on the threshold of the most important change in productivity since harnessing fire allowed our hominid ancestors to eat better, spend a lot less time chewing, and gather enough nutrients to evolve bigger brains. Now, artificial intelligence is augmenting the power of our brains, and it seems that no field of human endeavor will be unaffected.

“No to AI Art” — Composition by the author in Stable Diffusion

Transportation and warehousing jobs will largely be replaced by robots and autonomous vehicles. Fast food preparation, janitorial and many manufacturing jobs will follow. In an economy where, in December 2022, there are still nearly two job openings for every unemployed person — it seems we need robots to meet consumer demand and do the work that people won't do. All forms of labor will ultimately be a function of electricity, and electricity is only getting more abundant. Renewables are already cheaper than fossil fuels, and getting cheaper; and in a decade or two we might even harness nuclear fusion, the power at the heart of stars, to generate virtually unlimited energy.

Labor will not be limited to physical labor, but intellectual labor as well. While most people were thinking about the blue-collar work that would be replaced by AI, they overlooked an entire sector: our creative industries. If your work is the manipulation of symbols, text and images, then AI is coming for you sooner than many of these physical jobs will be replaced. That's because these jobs don't require any special hardware: the work runs on the computers we already have. Nowhere is this change producing greater anxiety than the world of art.

Protest on Artstation against AI Art

My daughter is a budding artist. And I've worked alongside many artists, whom I consider to be friends, amazing colleagues and esteemed professionals. So everything I write here is written with love in my heart. Some of what I will say will be alarming; some of it may even appear insensitive. My goal here is to lay out the forces that are at work, so that we can grapple with reality as it is rather than as we wish it might be.

What is Art?

Art, the kind that taps into unique human perspectives on the human condition, plays with new forms of media, reflects the lived experience of emotion and physical being — all of this art will continue. I'll leave it to academics to define what art really is; for me, it's enough to know it when I see it: I know that a Banksy mural in Kyiv is art. I know that Flower and Journey are art. I know that Las Meninas is art. I know that Spirited Away is art. I know that Don Draper's pitch for the Carousel is art. You might have a different definition of art than me, which is fine — and you may consider your own creations to be art, which is wonderful. I'm not interested in being a gatekeeper. Everyone who wishes to create art ought to have the opportunity to do so.

Art versus Commercial Graphics

But most "art" is not art. A huge quantity of professional work is not art of the type I described above. It is commercial graphics production done as part of a business transaction. In that sense, it doesn't belong to a special class that is different from the work various robots are about to replace.
This isn't to say that people don't enjoy their work, or value it highly (having invested enormous amounts of time into honing their craft) — or to imply that "true art" is intrinsically more important than commercial graphics. Commercial graphics is a form of work within our system of capitalism. And since we're on the subject of capitalism, let's talk about intellectual property. Part of the backlash against AI-generated artwork is the objection that it exploits intellectual property belonging to artists. The argument is that since models are trained on copyrighted works, artists are being ripped off. This is a complicated issue, but let me summarize some of the problems with it:

The AI models are pattern-recognizers, not copyists. The models observe patterns (brushstrokes, curvature, edges, organizations) across many works, and create a statistical model that renders these similarities. In that sense, it is vaguely similar to what any human artist does when learning art: they observe works, learn the methods behind them, and internalize the practice. There have been cases where the diffusion models have regenerated some works — perhaps an overfitting problem (one that appears worse in small datasets). But even in those cases, which appear uncommon, this is a bug and not an intended feature of the technology. I suspect improvements will address these issues, or entirely different approaches will pull ahead that yield less problematic results (GANs do not have this issue). But let's just assume that there are some copyright problems that will get worked out over time, and people who prefer to opt out of having models trained on their work will have the ability to do so:

The technologies will not be stopped even if training is curtailed. Whether it is legal to train these models from copyrighted artwork is a question that will no doubt be tested by litigators; and if those attempts fail in the courts, then perhaps it will result in new laws and regulations. But I expect these cases and rules to fail to prevent the industrialized use of generative AI. There are a number of reasons: the ample corpus of artwork by creators who are no longer alive and out of copyright; there's plenty of commercial graphics owned by companies (not artists) who will be happy to license it; and the companies building generative technologies could also hire artists to produce content where gaps remain in the training, and train from those instead. These technologies will not be stopped, and they will not be canceled — no more than you can stop the efficiencies gained by artists doing paint-overs on photo reference, or from applying digital tools in Photoshop.

Craft

For most of recent history, creating commercial graphics has required a deep commitment to craft: many years spent honing skills with illustration, learning the tools, practicing, and expression across a wide range of media. Nothing will stop you from continuing to learn these crafts, just as one can still build furniture entirely from hand tools. It is a uniquely human experience to make something with your own hands and your own mind, and everyone ought to have the opportunity to do so — and if some of the rosier predictions of our future economy of abundance are realized, perhaps everyone will!
For some artisans, applying craft skills in industry will continue to be valuable: because humans will continue to explore unique visions of art and creativity; or perhaps because the aura associated with human-crafted artifacts will become more valuable as machine-generated versions become more abundant. In other cases, the "value" will be the experience of learning and indulging in a craft, not the commercial transactions it can produce. And that's what's at the heart of the anxiety pulsing through the world of art right now. For the vast majority of working artists, it isn't really about copyrights and intellectual property (those are mostly owned by companies or are moving out of copyright). It is about having a craft that one truly loves, has made a massive investment in, and wants to continue doing. And you want to be paid for it. It's an ikigai.

Composition > Production

We are moving to an economy which will favor composition over production. If you want to pursue art (without the scare-quotes) as I've used this term above: you can, and I doubt that any generative AI will prevent you. That's a form of production that seems like it will be safe for quite some time, even if it is limited to a very small number of producers who can make a living at it (as has always been the case with art).

Is this typewriter powered by GPT-3 a work of art? (I'd say, "Yes!")

If you're one of these producers, you may still wish to master the new tools at your disposal. You may iterate, riff, or explore new directions faster than ever before. You'll paint with ideas instead of pixels. If you do, then it is the skills of composition that will be important. Here's a deck I compiled to capture some of what you need to know:

Technology is Accelerating

In the industry where I've spent most of my career — computer games — much artwork started to move offshore years ago. Artists who didn't master new technologies, learn to work inside 3D engines, or pick up the more technical aspects of digital art couldn't compete with the costs of a production house in a far-away land. Today, we're seeing the natural progression of this movement: if this work can be off-shored, and it can be automated even more capital-efficiently — then it will. Those who master the new skills will survive. That means learning the aspects that are more technical, more complex, and the types of composition I wrote about above.

What next?

For those of you who feel these technologies are threatening or worrisome: unless you're convinced that generative technologies will be regulated out of existence (something I find highly unlikely), you owe it to yourself to master the compositional skills that will allow you to stay relevant in the marketplace of skills. For others reading this, you'll find it very exciting. Maybe you have an idea to incorporate generative AI into your production process so you can disrupt much larger competitors who will be too slow to change their methods. Or even better: you'll build whole new products where generative AI is at the core of the experience itself, bringing us whole new ways of living in the world. Or you'll ride the counter-trend: you'll be the artisan selling handmade human products in a world of automation. There's a market for it in ceramics, in oil paintings, in furniture, in jewelry, in food — so why not in digital products like games and online experiences? Whichever path you're on, I know the future seems intimidating because these technologies are moving ever faster.
The rate of change is not linear; it is compounding. It is happening: not only for artists, but for every kind of creator. The opportunity to scale up our creativity exponentially is before us.
2023-02-16T00:00:00
2023/02/16
https://medium.com/building-the-metaverse/the-work-of-art-in-the-age-of-generative-ai-58ad6c9995f2
[ { "date": "2022/12/01", "position": 38, "query": "generative AI jobs" } ]
The Top 5 In-Demand Tech Skills For Jobs In 2023
The Top 5 In-Demand Tech Skills For Jobs In 2023
https://www.linkedin.com
[ "Bernard Marr" ]
This means that rather than just automation of manual jobs, smart, artificial intelligence (AI)-powered machines will increasingly do jobs that require thought ...
Jobs are changing – to the point that it's been predicted that 85% of the jobs that 2030's college leavers will have haven't been invented yet. This means that skills will have to change, too. AI and automation will be a big driver of this as machines become capable of taking on more work. This means that rather than just automation of manual jobs, smart, artificial intelligence (AI)-powered machines will increasingly do jobs that require thought and decision-making. So, where does this leave humans? Don't worry; it's unlikely we're going to be completely redundant just yet. Rather, we will focus on tasks that machines just can't do as well as we do (yet) – jobs that involve strategy, creativity, or emotional intelligence, for example. In 2023 and beyond, I believe those who are successful in industry and enterprise will be developing skills that involve augmenting our abilities with smart machines and software while applying the "human touch" where it is needed. So, here's my rundown of five skill sets that will help anyone set themselves up for the future.

Data Communicator/Storyteller

Data communicators and storytellers will become increasingly sought-after and valuable to businesses as we move into 2023. This is because, according to research commissioned by Tableau from Forrester, by 2025, 70 percent of jobs will involve working directly with data. This means that every organization will need people with the skills to interpret, translate and communicate it. This skill set is about the ability to communicate insights in ways that are effective - meaning putting them in the hands of the right people at the right time. However, insights must also be engaging, which is where the storytelling element comes in. Skilled data storytellers are able to see and communicate the narrative around data. In practical terms, this could mean explaining where the insights come from, why they are important to the business, and how best to go about putting them to work. Data storytellers use written communication as well as visualizations, tapping into tools such as Power BI, Qlik, and Tableau to find the most efficient, memorable, and accurate methods of conveying insights. There's a reason why storytelling has been humanity's chosen medium for conveying important information, going back to the days when Stone Age tribes would huddle around the fire at night. We are attuned to looking for meaning in stories, remembering them, and passing on what we learn from them. As data becomes central to the strategies of more businesses, people with the skill to communicate and build stories from it will become increasingly valuable, which is why this skill makes my list of 2023's most in-demand tech skills.

Cyber Security

The explosion in home and remote working that began during the Covid-19 pandemic and is set to continue into 2023 means cyber security is essential to every company.
No longer something that's confined to IT departments, every employee will be expected to take responsibility for the security of the data they are entrusted with and for building resilience in their area of operations. However, that doesn't mean that the specialized cyber security agent will become a thing of the past – far from it, in fact, as it's predicted that up to 10 million professionals will be employed in the field during 2023. What this means is that whether you plan to pursue it as a career path or simply add it to your resume to make you more marketable in whatever role you currently work in, cyber security skills will be among the most in-demand for the foreseeable future. Home and remote working, the internet of things (IoT), and the ongoing migration of more and more aspects of our lives into the digital domain (as well as the metaverse) all have major implications for the ways in which cybercriminals can potentially make life unpleasant for us. Cyber security professionals harness AI to predict and counter hacking and DDoS attacks while also adopting the psychological approach needed to defend against lower-tech approaches based around social engineering. It's a diverse skill set and one that will make anybody an attractive fit for the workforce of the future.

UX Design

User experience (UX) design, and the closely related field of user interface (UI) design, will become increasingly valuable skills due to the extent to which digital transformation is effectively turning every company into a tech company and every job into a tech job. This democratization of technology really just means that everyone, no matter what their role is, is expected to work with technology to some extent. And that means that technology has to be usable by anyone, even those without the skills traditionally needed to work with technology, such as software and programming skills. The skill of the UX and UI designer is to make technology intuitive and accessible to anyone, even if they aren't used to working with technology. This will be particularly relevant as we move further into the era of low-code/no-code platforms, where organizations have the power to create applications for their customers or their workforces without needing to engage software engineers and execute lengthy and costly software development projects. Increasingly, whether or not a user (customer or employee) continues to engage with a tool, application, or piece of technology will come down to whether they have a positive, enjoyable, and hassle-free experience while they are doing so. As consumers continue to value experience above other aspects of the products and services they purchase (such as value-for-money or quality), the skill set of the UX designer will become increasingly in demand.

Digital Marketing

We live in an attention economy – meaning our attention is increasingly the most valuable commodity we have to give, and capturing it is the number one priority for brands and businesses. Digital marketing involves harnessing the cumulative power of any number of technological solutions designed to get the attention of customers and put your products and services at the front of their minds. From paid social media advertising to influencer-led campaigns, search engine optimization (SEO), email marketing funnels, and cutting-edge channels like the metaverse and augmented reality (AR), digital marketing involves a diverse set of digital skills ranging from technical to creative.
Good marketers can specialize in one, several, or many – and as long as they can use their field of expertise to push the needle on whatever metrics matter to their organization, they won't find themselves out of a job in 2023. This year, many more digital marketers will find themselves augmenting their human creative and strategic abilities with AI-empowered tools that help with audience targeting and segmentation.

And finally – one skill to rule them all…

If you've been paying attention, you will have noticed that artificial intelligence plays a key part in all of the skills I've mentioned here – specifically, the ability to work alongside AI in a manner that is today frequently described as "augmented working." Data communicators have AI tools that can suggest the most effective forms of visualization and storytelling to get their insights across. Cyber security professionals use AI to analyze network traffic and spot potential attacks before they cause damage. UX designers use AI-assisted analytics of user behavior to determine which features and functionality should be prioritized. And digital marketers have many AI tools at their disposal for predicting audience behavior and even creating copy and content. The ability to spot opportunities to augment your or your organization's human skill set with AI, and to put tools and platforms to work to do so, will possibly be the most valuable skill of all in 2023 and beyond.
2022-12-01T00:00:00
https://www.linkedin.com/pulse/top-5-in-demand-tech-skills-jobs-2023-bernard-marr
[ { "date": "2022/12/01", "position": 45, "query": "generative AI jobs" } ]
Yeji Data Lab
Yeji Data Lab
https://www.yejidatalab.com
[]
... generative models. Canada's AI Supercluster — Montréal. Scale AI is ... jobs, and accelerates the adoption of AI-powered technologies. What Clients
Founded by Professor Yoshua Bengio of the University of Montreal, Mila is a research institute in artificial intelligence that now brings together over 1,200 specialized researchers in machine learning. Based in Montreal, Mila's mission is to be a global center for scientific advancements that inspire innovation and the growth of AI for the benefit of all. Mila is a non-profit organization globally recognized for its significant contributions to deep learning, especially in the fields of language modeling, automatic translation, object recognition, and generative models.
2022-12-01T00:00:00
https://www.yejidatalab.com/
[ { "date": "2022/12/01", "position": 83, "query": "generative AI jobs" } ]
The Impact of ChatGPT on AI Careers & Opportunities
The Impact of ChatGPT on AI Careers
https://onlinedegrees.sandiego.edu
[]
Almost half (48%) say that ChatGPT has already replaced workers, and an overwhelming majority (90%) report that ChatGPT experience is an advantageous skill for ...
The artificial intelligence application ChatGPT debuted in late 2022 and has been making headlines ever since. Its popularity was almost immediate, generating more than 1 million users within five days of its debut. And now businesses, organizations and AI professionals are not only trying to determine how this piece of innovative technology will impact everyday operations but also what it means for the field of artificial intelligence and related careers.

What Is ChatGPT?

ChatGPT is an artificial intelligence chatbot that interacts with humans and produces conversational text. It was developed by the company OpenAI, which was originally founded in 2015 by Elon Musk and a handful of other Silicon Valley investors. The company explains that the dialogue format of ChatGPT makes it possible to "answer follow-up questions, admit its mistakes, challenge incorrect premises, and reject inappropriate requests." ChatGPT also has a sister application called InstructGPT, which follows instructions and provides detailed responses. In an interview with a San Diego news station, Dr. Anna Marbut, a professor at the University of San Diego's Applied Artificial Intelligence Master of Science program, explained that ChatGPT's responses, which mimic how humans communicate, are what make the technology so exceptional.

How Does ChatGPT Work?

This artificial intelligence application uses machine learning and natural language processing to create conversational, human-sounding responses to written prompts. ChatGPT was developed and trained with a variety of data, including books, articles and conversations, and as a result, it can understand many topics. The chatbot remembers your dialogue thread and essentially produces a conversation. Users have found ChatGPT can create:

Poetry and wedding vows
Cover letters
Job descriptions
Essays
Headlines
Abstracts for scientific articles
Product descriptions
Blog posts
Social media content

Additionally, the chatbot can help with research, brainstorming and writing by handling calculations, checking spelling and grammar or functioning as a thesaurus or dictionary. Users can also replace a typical Google search with a question to ChatGPT, although it (currently) won't provide all the functions of a search engine, such as location-specific answers (unless you specifically ask) or links to other sources.

ChatGPT Concerns & Challenges

ChatGPT is an artificial intelligence chatbot fueled by information; in other words, it's a machine — not a person — which means there will be misunderstandings and miscommunication, especially since the technology is still evolving. Here are some of the major concerns and challenges:

Creation of false material. In one case, ChatGPT was used to create false articles from The Guardian, a British newspaper.

Plagiarism and cheating in education. While the tool can be used effectively in the classroom, educators are already dealing with issues of plagiarism and cheating. According to a recent Forbes article, 89% of surveyed students reported using ChatGPT for a homework assignment; 48% admitted to using it for an at-home test or quiz and 53% had the chatbot write an essay.
Threats to certain jobs. The idea of robots replacing human workers is nothing new, but ChatGPT has once again fueled the conversation about whether AI can effectively execute certain tasks faster than a person. According to a recent Goldman Sachs report, artificial intelligence — including new platforms like ChatGPT — could affect up to 300 million jobs around the globe.

Privacy concerns. Privacy has always been an AI ethical concern, but according to CNN, the chatbot recently had a bug that allowed users to access personal information from others. After OpenAI disclosed the data breach, Italy issued a temporary ban on ChatGPT.

Prof. Marbut admitted ChatGPT does not come without concerns. OpenAI does provide disclaimers in its terms of use: "Artificial intelligence and machine learning are rapidly evolving fields of study. We are constantly working to improve our Services to make them more accurate, reliable, safe, and beneficial. Given the probabilistic nature of machine learning, use of our Services may, in some situations, result in Output that does not accurately reflect real people, places, or facts."

ChatGPT Implications For the Field of Artificial Intelligence

It may be too soon to predict ChatGPT's widespread effects on the field of AI, but the chatbot is already influencing the job market and AI technology. The company Resume Builder recently surveyed 1,000 business leaders about ChatGPT and discovered that 49% are already using the chatbot. Almost half (48%) say that ChatGPT has already replaced workers, and an overwhelming majority (90%) report that ChatGPT experience is an advantageous skill for job seekers. Job postings on popular sites like LinkedIn and Indeed are already including ChatGPT familiarity as a desired skill. ChatGPT is also making waves with other technology companies — most notably Google, which issued a "Code Red" after the chatbot's debut, meaning ChatGPT was considered a major threat to the popular search engine. The "Code Red" declaration resulted in the return of Larry Page and Sergey Brin, Google's co-founders who had left their day-to-day positions at the company. After ChatGPT's debut, Page and Brin returned to help prioritize Google's own AI capabilities.

Opportunities for AI Professionals

While there are concerns about ChatGPT and other technology platforms eliminating certain positions, the reality is there will be increasing demand for AI professionals who can help develop and advance similar technologies as companies look to compete with ChatGPT. In a recent article, AI researcher Osman Ramadan explained that his extensive experience has taught him that "companies can't rely solely on a bot, which is liable to produce erroneous and prejudiced content.
Instead, he thinks ChatGPT will be used as a tool to boost productivity rather than replace professionals." In general, artificial intelligence careers are booming — and there's no doubt ChatGPT and similar platforms will lead to even more positions in the field. Popular AI careers include:

The Future of AI Careers and ChatGPT

While ChatGPT is still considered relatively new, its effects on the world of artificial intelligence are already far-reaching. Expedia, the popular trip-planning company, has incorporated ChatGPT in its mobile app to help travelers with finding attractions and hotels and answering other travel-related questions. Companies are using ChatGPT as a coding tool and a personal assistant; others are using the software to write emails and conduct keyword research. Plus, articles about the best ChatGPT alternatives are already surfacing as start-ups and other technology companies look to compete with the popular chatbot. While this innovative technology is still evolving, one thing is clear — artificial intelligence professionals will need to stay on top of the latest developments and information in order to successfully work with AI platforms like ChatGPT. Continuing education is an excellent way to obtain this much-needed experience, and USD's Master of Science in Applied Artificial Intelligence curriculum not only provides a comprehensive overview of the field but also covers the latest trends and information, including natural language processing developments and ethics in AI.

Interested in Exploring an Artificial Intelligence Master's Degree?

With artificial intelligence driving innovation across nearly all industries, employers are seeking skilled AI professionals and paying top dollar to hire them. And since nearly all artificial intelligence jobs require a master's degree, there are numerous academic programs to choose from. Download our eBook — 8 Questions to Ask Before Selecting an Applied Artificial Intelligence Master's Degree Program — to help you find the right program for your career goals.
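As an illustration of the "dialogue thread" behavior described in the "How Does ChatGPT Work?" section above, here is a minimal sketch using the chat endpoint of OpenAI's Python client (the pre-1.0 openai package; the model name and prompts are illustrative). The model itself is stateless: the conversation is carried by resending the accumulated messages list on every turn.

import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

messages = [
    {"role": "system", "content": "You are a helpful careers assistant."},
    {"role": "user", "content": "Which AI skills should I learn first?"},
]
reply = openai.ChatCompletion.create(model="gpt-3.5-turbo", messages=messages)
answer = reply.choices[0].message["content"]
print(answer)

# Follow-up turn: append the previous exchange so the model sees the context.
messages += [
    {"role": "assistant", "content": answer},
    {"role": "user", "content": "Which of those can I practice for free?"},
]
reply = openai.ChatCompletion.create(model="gpt-3.5-turbo", messages=messages)
print(reply.choices[0].message["content"])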
2024-06-07T00:00:00
2024/06/07
https://onlinedegrees.sandiego.edu/impact-of-chatgpt-on-ai-careers/
[ { "date": "2022/12/01", "position": 7, "query": "ChatGPT employment impact" }, { "date": "2023/01/01", "position": 7, "query": "ChatGPT employment impact" }, { "date": "2023/04/01", "position": 7, "query": "ChatGPT employment impact" }, { "date": "2023/07/01", "position": 7, "query": "ChatGPT employment impact" }, { "date": "2023/08/01", "position": 7, "query": "ChatGPT employment impact" }, { "date": "2023/09/01", "position": 7, "query": "ChatGPT employment impact" }, { "date": "2023/10/01", "position": 7, "query": "ChatGPT employment impact" }, { "date": "2023/12/01", "position": 7, "query": "ChatGPT employment impact" }, { "date": "2024/05/01", "position": 6, "query": "ChatGPT employment impact" }, { "date": "2024/06/07", "position": 1, "query": "ChatGPT employment impact" }, { "date": "2024/07/01", "position": 6, "query": "ChatGPT employment impact" }, { "date": "2024/08/01", "position": 6, "query": "ChatGPT employment impact" }, { "date": "2024/12/01", "position": 7, "query": "ChatGPT employment impact" }, { "date": "2025/02/01", "position": 6, "query": "ChatGPT employment impact" }, { "date": "2025/05/01", "position": 6, "query": "ChatGPT employment impact" } ]
The Impact of ChatGPT on Employee Job Identity - an Automation ...
AMCIS 2024 Proceedings: The Impact of ChatGPT on Employee Job Identity
https://aisel.aisnet.org
[ "Wang", "Mcmaster University", "Yuan" ]
This study examines the impact of ChatGPT's intelligent capability on employees' job identity through job automation and augmentation.
ChatGPT has the potential to be introduced as a frontline workforce that reshapes future online customer service. It is essential to understand employees' attitudes towards this new digital employee, since ChatGPT may blur the boundaries between humans and machines in the profession. However, few papers have explored the underlying psychological drivers (such as job identity) of employees' actions, especially from both the "bright" and "dark" sides. Grounded in identity theory, the automation-augmentation paradox, and cooperation-competition theory, this study examines the impact of ChatGPT's intelligent capability on employees' job identity through job automation and augmentation, and the moderating impact of employee self-efficacy on these associations.
2022-12-01T00:00:00
https://aisel.aisnet.org/amcis2024/ai_aa/ai_aa/21/
[ { "date": "2022/12/01", "position": 27, "query": "ChatGPT employment impact" }, { "date": "2023/01/01", "position": 27, "query": "ChatGPT employment impact" }, { "date": "2023/07/01", "position": 26, "query": "ChatGPT employment impact" }, { "date": "2023/08/01", "position": 26, "query": "ChatGPT employment impact" }, { "date": "2023/09/01", "position": 26, "query": "ChatGPT employment impact" }, { "date": "2023/10/01", "position": 26, "query": "ChatGPT employment impact" }, { "date": "2023/12/01", "position": 25, "query": "ChatGPT employment impact" }, { "date": "2024/05/01", "position": 26, "query": "ChatGPT employment impact" }, { "date": "2024/07/01", "position": 27, "query": "ChatGPT employment impact" }, { "date": "2024/08/01", "position": 26, "query": "ChatGPT employment impact" }, { "date": "2024/12/01", "position": 24, "query": "ChatGPT employment impact" }, { "date": "2025/02/01", "position": 20, "query": "ChatGPT employment impact" }, { "date": "2025/05/01", "position": 20, "query": "ChatGPT employment impact" } ]
Impact Of ChatGPT On Job Market: Threats And Benefits - Unstop
Impact Of ChatGPT On Job Market-Threats And Benefits // Unstop
https://unstop.com
[ "Gurpreet Saini" ]
The impact of ChatGPT on job market is that it reduces dependence on a manual workforce. ChatGPT can potentially replace low-waged tech professionals who do not ...
Impact Of ChatGPT On Job Market: Threats And Benefits

The article on the impact of ChatGPT on the job market finds that the chatbot may compel professionals to redefine their job roles. The AI technology will affect professions that lack specialization.

When ChatGPT was introduced, it was like a new toy in the hands of students. The student community used it for completing their assignments. Soon professionals in a wide range of fields started using it too. However, as time passed, people began to realize that there can be a strong impact of ChatGPT on the job market. In this article, we will discuss the impact of ChatGPT on the job market. We will try to see how the introduction of ChatGPT is disrupting traditional industries.

The Impact of ChatGPT on the Job Market: An Indian Scenario

In the Indian job market, we face a frequent problem every year - the problem of an inefficient and unskilled workforce. Companies often complain that they have to invest time and money in training the employees they hire. In the past years, the trend of upskilling has also picked up in India, where working professionals spend considerable time and money on enhancing their skill set. Along with this, they use productivity-enhancing tools to stay ahead of the competition. For many Indian professionals, ChatGPT is that productivity-enhancing tool. Let's take examples of a few professions and analyze the impact of ChatGPT on the job market.

The Impact of ChatGPT on the Job Market: The Tech Sector

The tech sector is paradoxically one of the most sought-after and yet the most volatile industries in India. To be a good tech professional, you have to continuously work on your skills. Professionals in this sector must ensure that they are up-to-date with newer technologies. They have to be proficient in the programming languages required in their job. Along with this, they also require knowledge of other technical aspects of their work. All this makes the tech sector very dynamic. You cannot rely on the knowledge you gathered from your college textbooks and hope to compete in this sector. You must find innovative ways to be more productive and offer a higher output to the company.

How can ChatGPT Impact Tech Jobs?

ChatGPT can write code in any programming language. Its human counterparts will find it tough to master more than a few programming languages. ChatGPT can also write code faster, saving a lot of time with less human labor involved.

You Still Need Human Programmers: Let's See Why?

Code written by ChatGPT is not error-free. You still need somebody to ensure the code is clean and workable. There is a certain logic and thought behind a piece of code. You need human intelligence to determine the aim and purpose of the application for which the code is written. For example, code for a medical mobile application would have certain requirements which can be decided only by a human brain. So, the involvement of human coders cannot be ruled out.

How can Tech Professionals Use ChatGPT?

The impact of ChatGPT on the job market is that it reduces dependence on a manual workforce. ChatGPT can potentially replace low-waged tech professionals who do not have any particular specialization. So, what you can do is upgrade your learning, practice coding, and take mentorship from industry experts. You can explore the articles under the Computer Science category on Unstop to know about the latest trends in the field of technology.
The Impact of ChatGPT on the Job Market: The Education Sector Teachers are the second most prominent group to be affected by ChatGPT in recent times. Artificial Intelligence-based technologies have changed the way teachers handle their work. Until 2019, a typical day in a teacher's life would include going to the school/college every morning, delivering lectures, checking assignments, and giving homework. Although the subjects, the teaching style, and other factors differed from teacher to teacher, things were more or less the same. That changed in 2020, when the world went into a global lockdown. Teachers had to reinvent classroom teaching without being inside classrooms. Many found it difficult to conduct online classes, and they now had to decide how to assess students remotely. Then, when ChatGPT became popular, teachers faced yet another problem: students started using the chatbot to understand concepts, solve homework, and write essays, and it looked as if the role of the teacher had suddenly become redundant. In less than five years, the teaching community has found itself in a perplexing situation. How can ChatGPT Impact Teaching Jobs? Many teachers in India work as tutors who help students with homework; their services may not be required in the future if ChatGPT assists students instead. The number of teachers an institution requires could also shrink: the lockdown period showed that a single recorded lecture could be shared among hundreds of students, so teachers no longer had to deliver the same lecture again and again. You Still Need Teachers: Here's Why. To understand the learning needs of each student: ChatGPT's deep learning technology is not good in this regard, and understanding each student's learning style is not possible without human interaction between teacher and student. To create lesson plans that meet students' learning demands and the time constraints of educational institutions. To judge how lessons should be imparted, keeping in mind the developmental needs of the students. For instance, teachers have to understand the emotional, psychological, and cognitive growth of students while teaching sensitive topics like equality, discrimination, and fundamental rights; children of different age groups will understand them differently, and ChatGPT's responses are not designed to generate such audience-sensitive output. To encourage and motivate students, who need an adult to guide them at every step of their educational journey. How can Teachers Use ChatGPT? They can use it to create lesson plans; to automate non-teaching, day-to-day administrative tasks; to educate themselves on the needs of students with different learning styles, which lets them bring empathy and emotional intelligence into their work; and to find ways to make learning creative and fun. All of this will help them do their work more efficiently. Read more about ChatGPT in the education sector: ChatGPT For Educators: Personalised Learning Tools For Students The Impact of ChatGPT on the Job Market: The Legal Sector Professionals in the legal sector, including legal assistants, lawyers, and paralegals, have also felt the negative impact of ChatGPT on the labour market.
Many legal professionals were already exposed to AI tools even before ChatGPT arrived. At the same time, the chatbot has made it easier for the general public to outsource legal work to AI. How can ChatGPT Impact Jobs in the Legal Sector? It uses Natural Language Processing (NLP), built on language models and machine learning, to translate official documents into easily understandable language. It can conduct research and collect large amounts of data, work that takes paralegals and legal assistants a lot of time. It can also handle mundane administrative tasks easily. You Still Need Legal Professionals: Here's Why. ChatGPT cannot study individual cases and offer legal consultancy, and it cannot perform complex tasks that require critical thinking; the AI-driven chatbot does not know which law or set of laws applies to a given case. How can Legal Professionals Use ChatGPT? The jobs of legal experts are not at risk of replacement by ChatGPT, because legal consultants bring a critical perspective to each case. They can, however, use the chatbot to collect relevant information in a shorter period of time, and to access similar cases from different places and time periods for comparison. Since legal work requires a thorough understanding of law, philosophy, and culture, ChatGPT cannot fully replace these professionals, but they can definitely use it to speed up the legal process and ensure faster delivery of justice. The Impact of ChatGPT on the Job Market: The Corporate Sector Market research analysts, financial experts, and social media strategists are another set of professionals wondering whether ChatGPT will threaten their job opportunities in the coming years. How can ChatGPT Impact Jobs in the Corporate Sector? ChatGPT for market research: it can help collect data and study current market trends. ChatGPT for developing social media strategy: it can study social media posts for current trends and suggest relevant strategies without the need to hire a professional social media strategist. ChatGPT for content creation: the chatbot can be used to write articles and blog posts, threatening the jobs of content creators. ChatGPT for financial analysis and advisory: it can be used to understand investment trends in the market and suggest profitable portfolios to clients. You Still Need Corporate Professionals: Here's Why. ChatGPT may collect data easily, but market researchers have to use their knowledge and skills to analyze it and draw sensible conclusions. Social media strategies require an understanding of many factors, such as users' emotional makeup, political preferences, and cultural choices; ChatGPT's AI-generated output is not equipped to assimilate all these factors organically and produce a strategy that aligns with an audience's sensibilities. Companies still need content creators to guarantee plagiarism-free content, and strong original writing remains a competitive advantage for writers. Investment advice demands experts who can assess a client's risk appetite, which takes expertise and critical thinking; a professional is still needed to provide a good customer experience. How can Corporate Professionals Use ChatGPT? Corporate professionals can use ChatGPT to understand current trends and get up-to-date market information.
Yet the information still has to be shaped to match the aims of the business and the demands of customers. Read more about the use of ChatGPT in the corporate world: ChatGPT For Financial Analysts: Advantages And Some Limitations Looking into an AI-Driven Future The examples above have some points in common. The jobs, and the aspects of jobs, that will be negatively impacted by ChatGPT include: low-wage jobs; jobs lacking the expertise or specialization needed for higher-level tasks; jobs lacking critical thinking; jobs lacking creativity; jobs lacking analytical research skills; jobs whose repetitive, everyday tasks can be automated; jobs with minimal or no innovation; and jobs where soft skills are not required. So, how can you safeguard yourself from the impact of ChatGPT on the job market? You can take certain measures, such as: Move beyond the resume: upgrade your skills by participating in hackathons and competitions. Stay in touch with market trends: regularly participate in workshops and conferences to keep up with current industry trends. If you learn to use ChatGPT as a productivity tool, you can benefit from it greatly. For more articles on ChatGPT, stay tuned to Unstop.
2022-12-01T00:00:00
https://unstop.com/blog/impact-of-chat-gpt-on-job-market
[ { "date": "2022/12/01", "position": 30, "query": "ChatGPT employment impact" }, { "date": "2023/01/01", "position": 30, "query": "ChatGPT employment impact" }, { "date": "2023/05/05", "position": 1, "query": "ChatGPT employment impact" }, { "date": "2023/07/01", "position": 30, "query": "ChatGPT employment impact" }, { "date": "2023/08/01", "position": 30, "query": "ChatGPT employment impact" }, { "date": "2023/09/01", "position": 30, "query": "ChatGPT employment impact" }, { "date": "2023/10/01", "position": 32, "query": "ChatGPT employment impact" }, { "date": "2023/12/01", "position": 31, "query": "ChatGPT employment impact" }, { "date": "2024/05/01", "position": 31, "query": "ChatGPT employment impact" }, { "date": "2024/07/01", "position": 33, "query": "ChatGPT employment impact" }, { "date": "2024/08/01", "position": 31, "query": "ChatGPT employment impact" }, { "date": "2024/12/01", "position": 31, "query": "ChatGPT employment impact" }, { "date": "2025/02/01", "position": 24, "query": "ChatGPT employment impact" }, { "date": "2025/05/01", "position": 24, "query": "ChatGPT employment impact" } ]
Machine Language: Will ChatGPT Have a Job Market Impact?
Machine Language: Will ChatGPT Have a Job Market Impact?
https://www.aei.org
[ "James Pethokoukis" ]
The additional national income from this productivity increase will fuel greater labor demand, making workers as a whole better off. Discouraging productivity- ...
In This Issue The Essay: Machine language: Will ChatGPT have a job market impact? 5QQ: 5 Quick Questions for … meta-science researcher Saloni Dattani Quote of the Issue “Whether we are based on carbon or on silicon makes no fundamental difference; we should each be treated with appropriate respect.” – Arthur C. Clarke The Essay Machine language: Will ChatGPT have a job market impact? Again, this is my first newsletter after a month-long hiatus. You might think that after such a long break I would have trouble picking a topic to write about. So much to choose from! But the choice was obvious: ChatGPT, a remarkable new general-purpose chatbot developed by artificial intelligence company OpenAI. “ChatGPT has already impressed many technologists with its ability to mimic human language and speaking styles while also providing coherent and topical information,” wrote reporter Kalhan Rosenblatt for NBC News, who gave the example of a prompt that asked ChatGPT to explain zero point energy but in the style of a cat: “Meow, meow, meow, meow! Zero point energy is like the purr-fect amount of energy that is always present, even in the most still and peaceful moments.” You can find numerous impressive examples of ChatGPT’s capabilities all over social media right now, so I don’t feel the need to give lots of examples. I will say, however, that one of the most impressive ones was a prompt that asked the chatbot to make an argument in favor of free trade — but in the speaking style of Donald Trump. This capability alone could potentially affect the future direction of the United States and the fate of the Free World. Anyway, lots of people seem pretty wowed by ChatGPT: According to my wife ChatGPT answers her questions more empathetic and human than i do. Apprently it also listens better to what she actually asked. Anyway, i have been put on notice and shall practice with it. Stupid robots making me look bad. 🤖 — Andreas Klinger 🏝 (@andreasklinger) December 4, 2022 I think you will be able to see ChatGPT as a dip in the monthly productivity statistics — Benedict Evans (@benedictevans) December 2, 2022 With ChatGTP it really feels like we’re on the verge of a new Copernican revolution – one in which humanity is made to feel less ‘special’ once again, when realizing we are not the sole creative agents of the universe and our minds are nothing more than a great piece of software. — Boyan Slat (@BoyanSlat) December 3, 2022 ChatGPT is one of those rare moments in technology where you see a glimmer of how everything is going to be different going forward. — Aaron Levie (@levie) December 3, 2022 Setting aside some of that breathless analysis — which I grant might turn out to be totally justified — let’s think for a moment about just what kind of technology ChatGPT is. In the economic sense, it’s another example of machine-learning AI, which itself is considered a general-purpose technology, meaning it has economywide applications across a variety of sectors. The two sectors that seem most obviously affected of late are creative ones: illustration and conversation/writing. OpenAI is also known as the creator of the DALL-E text-to-image generator. (It’s a technology I’ve been using to create images for this newsletter.) 
And now the versatile ChatGPT, which might be better described as an “answer engine.” The Guardian described it this way: “In the days since it was released, academics have generated responses to exam queries that they say would result in full marks if submitted by an undergraduate, and programmers have used the tool to solve coding challenges in obscure programming languages in a matter of seconds — before writing limericks explaining the functionality.” What’s interesting about the above description is that it’s one of the few to highlight — the bit about the coding challenge, specifically — how ChatGPT might augment or complement human labor rather than simply automate it. Both of these technological effects increase labor productivity. But by substituting machine effort for human labor in performing various tasks, automation can reduce employment and wages. Indeed, this is the effect the Guardian piece chose to highlight: “Professors, programmers and journalists could all be out of a job in just a few years, after the latest chatbot from the Elon Musk-founded OpenAI foundation stunned onlookers with its writing ability, proficiency at complex tasks, and ease of use.” Yet there will almost certainly be many instances when ChatGPT or some near-future incarnation will help us do our jobs better or even create new things for us to do. Despite the long history of people worrying about machines taking their jobs, it’s this augmentation/complementary effect that has been most important by far. As Stanford University economist Erik Brynjolfsson wrote earlier this year: One metric of this is the economic value of an hour of human labor. Its market price as measured by median wages has grown more than tenfold since 1820. An entrepreneur is willing to pay much more for a worker whose capabilities are amplified by a bulldozer than one who can only work with a shovel, let alone with bare hands. In many cases, not only wages but also employment grow with the introduction of new technologies. With the invention of the airplane, a new job category was born: pilots. With the invention of jet engines, pilot productivity (in passenger-miles per pilot-hour) grew immensely. Rather than reducing the number of employed pilots, the technology spurred demand for air travel so much that the number of pilots grew. Past performance doesn’t guarantee future results, and Brynjolfsson is one of many economists who worry that AI, like too many recent tech advances, will be more job-displacing than augmenting. Another automation worrier is MIT economist Daron Acemoglu, who has a new co-authored paper, “Tasks, Automation, and the Rise in U.S. Wage Inequality,” which presents evidence that automation accounts for 50 to 70 percent of the increased income gap between more- and less-educated workers since 1980. As Acemoglu sees it, that statistic is the result of an American innovation system producing too much “so-so” innovation like self-checkout machines that reduce expenses at large retail chains. “If you introduce self-checkout kiosks, it’s not going to change productivity all that much,” says Acemoglu. However, in terms of lost wages for employees, he adds, “It’s going to have fairly large distributional effects, especially for low-skill service workers. 
It’s a labor-shifting device, rather than a productivity-increasing device.” If we are getting too much “so-so” innovation that automates labor rather than more radical innovation that augments/creates labor, one policy option would be to equalize the tax rates between labor and capital. As Brynjolfsson explains: In 1986, top tax rates on capital income and labor income were equalized in the United States, but since then, successive changes have created a large disparity, with the 2021 top marginal federal tax rates on labor income of 37 percent, while long-term capital gains have a variety of favorable rules, including a lower statutory tax rate of 20 percent, the deferral of taxes until capital gains are realized, and the “step-up basis” rule that resets capital gains to zero, wiping out the associated taxes, when assets are inherited. The first rule of tax policy is simple: you tend to get less of whatever you tax. Thus, a tax code that treats income that uses labor less favorably than income derived from capital will favor automation over augmentation. Treating both business models equally would lead to more balanced incentives. Acemoglu also favors this idea, though my AEI colleague Michael Strain doesn’t: “I would rather the tax code impose even lighter taxes on business investment than it currently does. Lighter taxes on investment will lead to higher worker productivity, which in turn will lead to higher wages for workers. The additional national income from this productivity increase will fuel greater labor demand, making workers as a whole better off. Discouraging productivity-enhancing investment is exactly the wrong direction for tax policy.” (One thing all these economists agree on is the need for more worker training/retraining. Brynjolfsson calculates that for each dollar spent on machine learning tech, companies may need to spend nine dollars on intangible human capital.) I also think that while policy is important — and to the above ideas I would add the importance of R&D investment and smart regulation — so is the inherent momentum of technological progress. Why, after a long interregnum, do Next Big Things often come in bunches? That’s exactly the question asked by Swedish economist Ola Olsson in his 2001 paper “Why Does Technology Advance in Cycles?” And the answer might center around what Olsson calls “technological opportunity.” He explains his basic model this way: As technological opportunity becomes exhausted, profits and income growth rates diminish. Eventually, profits from incremental innovation fall below expected profits from highly risky and costly drastic innovations. Entrepreneurs then switch to drastic innovation, which introduces new areas of technological opportunity and a new technological paradigm. When technological opportunity once again is abundant, incremental innovation resumes and growth rates increase. In this way, development proceeds in long waves of varying duration and intensity. The fundamental determinants of the economy’s behaviour are the capacity of a society to exploit existing technological opportunity and its system of rewards for drastic innovation. Maybe we are entering a new period of high-impact innovation opportunity as a number of emerging technologies look promising in areas such as biology, energy, space, and AI. I think it’s too early to classify what kind of innovation ChatGPT mostly is. 
But can we have, like, five minutes to marvel at it before hitting the Technopanic Button and assuming massive job loss or that it’s the next step toward unaligned AI that tries to kill us all? Credit, then, to reporter Alex Kantrowitz at Slate who, while not ignoring some potential downsides as well as limitations, wrote a piece that didn’t forget to highlight the cool factor: “OpenAI’s new ChatGPT is scary-good, crazy-fun, and—so far—not particularly evil.” 5QQ 5 Quick Questions for … (meta)science researcher Saloni Dattani After boosting US economic growth and the importance of optimistic sci-fi, one of the most popular squares on the official Faster, Please! bingo card is metascience — that is, applying scientific inquiry to our scientific institutions and processes with the aim of improving how we conduct research. Pathbreaking scientific discoveries and incremental innovations alike depend on putting R&D dollars and top researchers to their best use. So when I saw “The Pandemic Uncovered Ways to Speed Up Science” in Wired, I knew a Q&A with the author would make for a fitting return to my 5 Quick Questions feature. Saloni Dattani is a researcher at Our World in Data and founding editor of Works in Progress. She also has a Substack, Scientific Discovery, that describes the behind-the-scenes workings of academic science. Check it out! 1/ Do we already have the necessary talent pipelines to make scientific research roles more specialized with further division of labor? Would we need changes in academia? What’s interesting is that academia has a lot of specialisation in terms of the number and depth of disciplines, but not in terms of the division of labor within them. There’s a wide variety of fields, but until recently most research has been performed by small groups of researchers working independently, often repeating each other’s efforts. Researchers are expected to have the skills to theorize, review the literature, perform experiments, run analyses, store data, write papers, present their work, review others’ work, and teach. All of these skills take a long time to learn and maintain. Part of the reason we don’t have more division of labor is that there are incentives to work alone: For example, academics get rewarded according to how much they publish, especially if they’re the ‘first author’ on a paper. Because of this, in many fields such as economics, most publications are limited to only a few authors. Another reason is that academia rewards people with a deep background in the field — with PhD qualifications and a track record of high-profile papers and projects. This is sometimes known as “the Matthew Effect.” That makes it hard for new people to contribute without going through many hurdles first, especially if they have different backgrounds — like software engineering or data management. A third reason is essentially gatekeeping. Much of research is still hidden behind paywalled journals that big universities subscribe to. This means people outside of academia, or even in institutions that don’t subscribe to those journals, can’t access data and research. They can’t build on it, point out errors, or improve it. So we need lots of changes: to reward researchers to collaborate on larger projects, to give them the time to specialise in certain aspects of their work, to publish data and research in an accessible way. 2/ You argue that researchers can pool randomized controlled trials to make experiments easier and less costly to run. Why aren’t we doing this already? 
This is a relatively new way of running trials: They’re known as ‘platform trials’ where a variety of different drugs are tested at the same time. There are a few reasons that they haven’t been so common. One is that trials are funded by different pharmaceutical companies, in order to test their own drugs for their own applications to get them approved. So, getting multiple companies to agree to run their own drugs in a single pooled trial together means getting them through various legal and financial hurdles, or having it coordinated by an independent organisation. There are some different ways to make that easier. For example: Pharmaceutical companies could finance only their “arm” of the trial (i.e. the part that relates to their own treatment), in what’s called a “pay to play” model, but this requires coordination between different companies testing treatments against the same disease. Another approach is for them to be government funded, especially if there’s an urgent need to learn about the disease and there are large benefits to the public to find some treatments that work quickly. So they’re constrained by legal and financial hurdles, and coordination problems. Until recently, pharmaceutical companies also found them new and risky; seeing how they can work in practice has changed that. A final reason is that, for some types of platform trials, the statistical methods to analyse them are actually new. The RECOVERY trial in the UK, for example, ran without a predetermined number of participants or timeline: new participants kept being enrolled and were randomised to different treatments at different times. This needed a new Bayesian statistical method to figure out which group they should be randomised to, and it needed to make sure there’d be enough participants tested for each treatment. Since the method is new, the regulatory framework has only recently caught up — usually, regulators want to know in advance how many people will be in a trial and how long it will run, because their guidelines have been developed around traditional ways of running trials. (A toy sketch of this kind of adaptive randomisation follows.)
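The sketch below illustrates Bayesian response-adaptive randomisation, the general family of methods the answer refers to. It is not the RECOVERY trial's actual algorithm; the arm names, priors, and response rates are invented. Each arm's response rate gets a Beta posterior, and each new patient is assigned by Thompson sampling, so better-looking arms are favoured while every arm keeps being explored.

```python
import random

# Beta(successes + 1, failures + 1) posterior parameters per arm (flat priors).
arms = {"standard care": [1, 1], "drug A": [1, 1], "drug B": [1, 1]}
# True response rates, unknown in a real trial; used here only to simulate outcomes.
true_rates = {"standard care": 0.30, "drug A": 0.35, "drug B": 0.50}

for _ in range(1000):  # patients arrive one at a time, with no fixed total set in advance
    # Sample a plausible response rate from each arm's posterior...
    sampled = {arm: random.betavariate(a, b) for arm, (a, b) in arms.items()}
    # ...and randomise the patient to the arm whose sample is highest.
    choice = max(sampled, key=sampled.get)
    responded = random.random() < true_rates[choice]  # simulated outcome
    arms[choice][0 if responded else 1] += 1          # update that arm's posterior

for arm, (a, b) in arms.items():
    print(f"{arm}: {a + b - 2} patients, posterior mean response {a / (a + b):.2f}")
```

Over time the allocation drifts toward the stronger arm, which is why such designs can test many treatments efficiently, and also why regulators needed new guidelines: the sample size per arm is not fixed in advance.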
3/ You’ve written that “Big institutions, such as governments and international organizations, should collect and share data routinely instead of leaving the burden to small research groups.” What sorts of additional data should the government be collecting? That’s a great question. I think the main priorities are: data that’s difficult for small research groups to collect, data that’s sensitive, and data that’s widely useful. The most obvious example is one that’s common in many countries now, but actually faced a lot of opposition in getting started: the census. We know now that basic demographic data is widely useful to tons of different researchers across fields, and also to people working in industry and politics. In practice, it was difficult and expensive to set up, and still is very expensive, especially if the information is not collected electronically. Collecting that data is worthwhile despite the costs, because of how many uses it has, but still could be made easier. In terms of other data we should be collecting, I can speak from my own field: in psychiatry and medicine, it would be very helpful to have data on which diagnoses people have had, when they were diagnosed, and which treatments they’re taking, among the general population. This is because it’s difficult to recruit and keep people with illnesses in studies, and studies that try to do that tend to capture only a biased subgroup of them — usually those that are healthier and more educated. In contrast, in some countries, like Denmark, everyone in the population has their medical data already collected routinely because there’s a national healthcare service, and researchers can access it. But that kind of dataset is very rare. The challenge is doing that in a way that’s secure and private, because there are important civil liberty concerns with sharing data like that, and it doesn’t need to be collected by the government necessarily — in countries that are autocratic or unstable, having a transparent and independent organisation collect that data might be more valuable. Anonymising that data is a crucial part of this, and it’s important to prevent people from being identified if they have a particularly unusual set of data points, for example. That might require encryption or only giving researchers access to summaries of the data, rather than individual data points (a toy illustration of such summary-only release follows this Q&A). I’m probably not the best person to answer that for other fields, but I can imagine there are lots of examples of data that fit those priorities of being widely useful, sensitive, and difficult for smaller groups to collect. 4/ Scientific institutions seem to be losing trust. How can we reform science to rebuild those institutions and regain consensus and trust? I tend to believe that trust should be earned and deserved, rather than expected. But it’s hard to make a judgment about scientific institutions as a whole. I’d say what’s needed is taking people’s concerns seriously and being transparent and fair in dealing with them. With concerns about vaccines, for example, over the past century, many countries have established systems to compensate people if they’ve had rare side effects from vaccines. But that doesn’t mean that we should avoid using vaccines widely, because they’ve helped avoid an enormous amount of suffering. Being able to deal with different concerns from different people is difficult, but I tend to think of it as a continuous process, where we should try to design institutions in a way that they have the ability to learn what works and what doesn’t, and how to get the balance right. 5/ Many researchers post their working papers for the public, influencing other scientists with unpublished work. Is the peer review system fundamentally broken? I’m in favour of preprints and working papers. What’s interesting here is that with the journal system, researchers could influence the public with headlines alone, while the actual data and research would remain hidden behind paywalls. So, sharing working papers is an improvement because at least now more people are given the opportunity to spot errors or problems with the research, or learn from it and apply it elsewhere. Journals today coordinate peer review, but the way this happens is both slow and inefficient. Editors of a journal will email researchers to ask if they’re available, hope they are, and wait for weeks or months for their response to a study. And, except in rare cases, reviewers aren’t compensated for their work. What’s needed is a way to open up the review process to a wider range of people — for example, by sharing data and research publicly — and also treating peer review as a specialisation within science. That might involve treating it as a career pathway, perhaps like auditing works in other industries. 
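Picking up the "summaries rather than individual data points" idea from question 3: here is a toy sketch (invented records and threshold, not from the interview) of one common disclosure-control rule, releasing only aggregate counts and suppressing any cell below a minimum size so that a person with an unusual combination of attributes cannot be singled out.

```python
from collections import Counter

# Invented records; in reality these would be sensitive medical data.
records = (
    [{"diagnosis": "depression", "age_band": "20-29"}] * 12
    + [{"diagnosis": "depression", "age_band": "30-39"}] * 8
    + [{"diagnosis": "narcolepsy", "age_band": "20-29"}] * 2  # rare, identifiable combination
)

MIN_CELL = 5  # counts below this are suppressed rather than published

counts = Counter((r["diagnosis"], r["age_band"]) for r in records)
for (diagnosis, age_band), n in sorted(counts.items()):
    shown = str(n) if n >= MIN_CELL else f"<{MIN_CELL} (suppressed)"
    print(f"{diagnosis}, {age_band}: {shown}")
```

Researchers still get usable aggregates, while the two narcolepsy patients, who might otherwise be identifiable, stay protected; encryption and formal techniques such as differential privacy are stronger versions of the same instinct.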
Micro Reads After the Artemis I mission’s brilliant success, why is an encore 2 years away? – Eric Berger, Ars Technica | Why Silicon Valley is so hot on nuclear energy and what it means for the industry – Catherine Clifford, CNBC | Sustainable Funds Powerhouse Parnassus Weighs Investing in Nuclear Energy – Leslie Norton, Morningstar | AI experts are increasingly afraid of what they’re creating – Kelsey Piper, Vox | How much would you pay to see a woolly mammoth? – Antonio Regalado, MIT Tech Review | Restoring a key hormone could help people with Down syndrome – Emily Underwood | Space Elevators Are Less Sci-Fi Than You Think – Stephen Cohen | Dimming the Sun to Cool the Planet Is a Desperate Idea, Yet We’re Inching Toward It – Bill McKibben, The New Yorker | A Rallying Cry for More US Health Innovation at “Warp Speed” – Rachel Silverman, CGD | Unlocking American Agricultural Innovation – Adin Richards, Institute for Progress
2022-12-01T00:00:00
https://www.aei.org/articles/machine-language-will-chatgpt-have-a-job-market-impact/
[ { "date": "2022/12/01", "position": 44, "query": "ChatGPT employment impact" }, { "date": "2023/01/01", "position": 46, "query": "ChatGPT employment impact" }, { "date": "2023/07/01", "position": 44, "query": "ChatGPT employment impact" }, { "date": "2023/08/01", "position": 45, "query": "ChatGPT employment impact" }, { "date": "2023/09/01", "position": 45, "query": "ChatGPT employment impact" }, { "date": "2023/10/01", "position": 46, "query": "ChatGPT employment impact" }, { "date": "2023/12/01", "position": 43, "query": "ChatGPT employment impact" }, { "date": "2024/05/01", "position": 44, "query": "ChatGPT employment impact" }, { "date": "2024/07/01", "position": 46, "query": "ChatGPT employment impact" }, { "date": "2024/08/01", "position": 44, "query": "ChatGPT employment impact" }, { "date": "2024/12/01", "position": 44, "query": "ChatGPT employment impact" }, { "date": "2025/02/01", "position": 50, "query": "ChatGPT employment impact" }, { "date": "2025/05/01", "position": 50, "query": "ChatGPT employment impact" } ]
How ChatGPT Will Impact HR Departments And Roles
How ChatGPT Will Impact HR Departments And Roles
https://newsletter.workplaceintelligence.com
[]
ChatGPT can help HR departments save money by automating repetitive and time-consuming tasks, such as answering common questions from employees.
The popular artificial intelligence chatbot ChatGPT reached 100 million users within just two months of launching, as professionals began using it regularly to help them accomplish their jobs. The potential for ChatGPT to impact how we work and live is so big that Microsoft invested $10 billion in ChatGPT's parent company OpenAI and has already integrated it into its Bing search engine. In response, a lot of competitors have already emerged, including Google Bard. Between BuzzFeed replacing 12% of its workforce with it and judges using it to inform court decisions, the impact of ChatGPT is growing rapidly and can't be ignored. I believe it will impact every single industry, company, department, and job in one way or another moving forward. Nearly one-third of U.S. white-collar workers have already used it or integrated it into their everyday work. ChatGPT is a language model developed by OpenAI that uses advanced artificial intelligence and machine learning techniques to generate human-like text. It was trained on a large corpus of text data, allowing it to understand and respond to a wide range of questions and prompts. ChatGPT matters because it represents a major step forward in the development of AI and machine learning technology. Its ability to generate human-like text with remarkable accuracy and consistency has wide-ranging implications for many professions, like HR. By using ChatGPT, HR can improve employee productivity while reducing costs. According to a study conducted by Deloitte, companies that use AI-powered systems, including ChatGPT, can see employee productivity improve by up to 40%. ChatGPT's ability to provide instant support and information can help employees complete tasks more efficiently, freeing up time for more strategic and value-adding activities. ChatGPT can help HR departments save money by automating repetitive and time-consuming tasks, such as answering common questions from employees. A study by Accenture estimates that organizations could save up to 60% on operational costs by using AI-powered systems like ChatGPT. ChatGPT can be integrated into chatbots, virtual assistants, and customer service platforms to provide instant, personalized support to employees. Its ability to understand natural language and respond to a wide range of questions makes it well-suited for tasks such as answering frequently asked questions, providing product recommendations, and resolving employee issues. One minimal sketch of such an integration follows.
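The snippet below (an assumed example, not from the newsletter) wires a policy excerpt into the OpenAI Python SDK (v1.x) as a system prompt, so the assistant answers employee questions only from approved policy text. The model name, policy wording, and guardrail prompt are all illustrative.

```python
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

# Illustrative policy text; a real deployment would load approved HR documents.
POLICY_EXCERPT = (
    "Vacation: 20 days per year, accrued monthly. "
    "Parental leave: 16 weeks paid. "
    "Remote work: up to 3 days per week with manager approval."
)

def answer_hr_question(question: str) -> str:
    """Answer an employee question strictly from the policy excerpt."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model name; any chat model works here
        messages=[
            {
                "role": "system",
                "content": (
                    "You are an internal HR assistant. Answer ONLY from the "
                    "policy text below. If the answer is not in the policy, "
                    "say you don't know and refer the employee to HR.\n\n"
                    + POLICY_EXCERPT
                ),
            },
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

print(answer_hr_question("How many vacation days do I get?"))
```

The restrictive system prompt is one simple mitigation for the accuracy and reliability concerns raised later in this piece: the bot declines rather than guesses when the policy is silent.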
Here are some HR-specific applications of ChatGPT that you can implement: Data management: ChatGPT can help HR with data entry and employee records management, and can generate reports. This gives HR not only access to the right information but also the ability to organize and interpret it to make better, more efficient decisions for their organizations. Employee Assistance Programs (EAP): HR can use ChatGPT to provide 24/7 support to employees with personal or work-related problems, such as stress, mental health, and work-life balance. ChatGPT can provide personalized support, information, and resources to help employees manage their issues. Recruitment and Talent Acquisition: HR can use ChatGPT to provide instant answers to candidates' questions about the company, job requirements, and the application process. ChatGPT can also assist with pre-screening candidates by answering frequently asked questions and helping to identify the most qualified candidates. A study by HBR found that organizations that use AI in the recruitment process see an improvement in candidate quality by up to 20%. In addition, ChatGPT can be used to create job descriptions and conduct initial interviews using specific questions to evaluate candidate responses, both of which save recruiters hours of time. Employee onboarding: One of the leading predictors of employee tenure and productivity is the quality of the onboarding process. ChatGPT can provide new hires with information on company policies, benefits, important contacts, and other details that answer most of the questions they would have. Many onboarding programs are inefficient and tedious from both the new hire's and HR's perspectives. Automating them with ChatGPT organizes and centralizes information, saving HR from having to answer the same questions repeatedly. Employee retention: ChatGPT can be used to analyze employee sentiment and provide insights into issues that could lead to higher turnover. Using this data, companies can adjust their employee experience to increase retention. Employee training and development: HR can use ChatGPT to provide on-demand training and development opportunities for employees. ChatGPT can assist with training by answering questions, providing resources, and offering guidance on specific topics. Performance evaluations: Both managers and employees dislike going through the performance review process. ChatGPT can be used to generate performance evaluations based on data input such as an employee's job responsibilities and accomplishments. Employee engagement and feedback: HR can use ChatGPT to collect and analyze employee feedback on a variety of topics, such as company culture, management, and work-life balance. ChatGPT can also assist with employee engagement by providing information and resources to help employees feel more connected to their work and the company. ChatGPT can provide HR departments with an efficient and cost-effective way to support and engage employees. Its ability to provide personalized support and information in real time can help HR better meet the needs of their workforce and improve overall organizational performance. But in its early stages it has limitations: difficulty understanding some queries, bias in its training data, a lack of emotional intelligence, and a knowledge base that only covers research and events up to 2021. With that being said, there are several reasons why HR departments should be concerned about the use of ChatGPT in the workplace, which include: Elimination of jobs: As ChatGPT continues to automate and improve HR processes and administrative tasks, it's quite possible that HR departments will downsize. These tasks include updating documents and staff records, legal compliance, reviewing company policies, and creating reports. Legal and ethical issues: ChatGPT's ability to generate large amounts of data and information about employees raises legal and ethical concerns around privacy, data protection, and information security. HR departments need to ensure that the use of ChatGPT complies with relevant laws and regulations, and that employees' personal information is protected. Bias and fairness: ChatGPT is trained on large amounts of data, which may contain biases and stereotypes. HR departments need to be aware of these potential biases and ensure that ChatGPT is not making unfair or discriminatory decisions in areas such as recruitment, employee promotions, or performance evaluations. 
Accuracy and reliability: While ChatGPT is designed to provide accurate and reliable information, it is not infallible. HR departments need to critically evaluate the information provided by ChatGPT and ensure that it aligns with the company's values and policies. Employee acceptance and trust: The use of ChatGPT may raise concerns and questions among employees, who may be wary of interacting with an AI-powered system. HR departments need to ensure that employees understand the benefits and limitations of ChatGPT and are comfortable with its use in the workplace. HR departments need to consider the legal, ethical, and social implications of using ChatGPT in the workplace. They must also be diligent in evaluating the accuracy and reliability of the information provided by ChatGPT and work to build employee trust and acceptance of this technology. Despite the potential risks, ChatGPT has the potential to transform the HR function in organizations by automating repetitive tasks, improving the employee experience, and enhancing HR departments' ability to deliver value to organizations. HR departments that embrace this technology will be well-positioned to support their organizations in a rapidly changing business environment. Thanks for reading — be sure to join the conversation on LinkedIn and let me know your thoughts on this topic!
2022-12-01T00:00:00
https://newsletter.workplaceintelligence.com/posts/workplace-intelligence-insider-how-chatgpt-will-impact-hr-departments-and-roles
[ { "date": "2022/12/01", "position": 56, "query": "ChatGPT employment impact" }, { "date": "2023/01/01", "position": 58, "query": "ChatGPT employment impact" }, { "date": "2023/05/04", "position": 9, "query": "ChatGPT employment impact" }, { "date": "2023/07/01", "position": 53, "query": "ChatGPT employment impact" }, { "date": "2023/08/01", "position": 54, "query": "ChatGPT employment impact" }, { "date": "2023/09/01", "position": 54, "query": "ChatGPT employment impact" }, { "date": "2023/10/01", "position": 59, "query": "ChatGPT employment impact" }, { "date": "2023/12/01", "position": 56, "query": "ChatGPT employment impact" }, { "date": "2024/05/01", "position": 55, "query": "ChatGPT employment impact" }, { "date": "2024/07/01", "position": 54, "query": "ChatGPT employment impact" }, { "date": "2024/08/01", "position": 55, "query": "ChatGPT employment impact" }, { "date": "2024/12/01", "position": 55, "query": "ChatGPT employment impact" }, { "date": "2025/02/01", "position": 59, "query": "ChatGPT employment impact" }, { "date": "2025/05/01", "position": 59, "query": "ChatGPT employment impact" } ]
The Impacts of ChatGPT on Jobs, Society, and Industries | Journal
The Impacts of ChatGPT on Jobs, Society, and Industries
https://vocal.media
[]
The impact of ChatGPT goes far beyond its immediate applications, as it has the potential to disrupt the job market, impact patent laws, and reshape society as ...
ChatGPT, developed by OpenAI, is a highly advanced language model that is quickly becoming one of the most promising artificial intelligence (AI) technologies of the 21st century. It is capable of generating human-like responses to a wide range of questions and prompts, making it a versatile tool for businesses, researchers, and developers. However, the impact of ChatGPT goes far beyond its immediate applications, as it has the potential to disrupt the job market, impact patent laws, and reshape society as a whole. Impact on Jobs One of the most significant impacts of ChatGPT is its potential to automate tasks and replace human workers. In the customer service industry, for example, ChatGPT can be integrated into chatbots to provide instant and accurate responses to customer inquiries, freeing up human agents to handle more complex tasks. Similarly, in the field of content creation, ChatGPT can generate articles, poems, and entire books, making it an ideal tool for writers and publishers. This automation of tasks will likely lead to the loss of certain jobs, but it may also create new jobs in areas such as language model development, integration, and maintenance. Impact on Patents The impact of ChatGPT on patent law is a complex and controversial issue. On one hand, the model is based on cutting-edge AI technology and represents a significant advance in the field. On the other hand, because it is based on massive amounts of pre-existing text data, it raises questions about the ownership of the content generated by the model. Some experts argue that this content should be considered a form of intellectual property, and that the creators of ChatGPT should be granted patents for their work. Others argue that the content generated by ChatGPT should be considered a form of public domain, and that no patents should be granted. Impact on Society The impact of ChatGPT on society goes far beyond its potential to automate jobs and impact patent laws. It has the potential to change the way we interact with computers, and to reshape our understanding of what it means to be human. For example, as ChatGPT becomes more sophisticated and more widely adopted, it may become increasingly difficult to tell the difference between human-generated content and content generated by the model. This could blur the lines between what is real and what is not, leading to new ethical and philosophical questions about the role of AI in society. Another way that ChatGPT may impact society is through its ability to generate highly biased or misleading content. If the model is trained on biased data, it may generate biased responses, leading to further division and conflict within society. This highlights the importance of ensuring that AI models like ChatGPT are trained on diverse and inclusive data, and that the algorithms behind these models are transparent and auditable. Impact on Business The impact of ChatGPT on business is likely to be significant and far-reaching. In addition to automating customer service and content creation tasks, the model has the potential to revolutionize business operations and decision-making. For example, ChatGPT can analyze large amounts of data and generate insights and recommendations, enabling businesses to make data-driven decisions with greater speed and accuracy. It can also help businesses automate repetitive tasks and streamline workflows, freeing up employees to focus on more strategic and creative tasks. 
Impact on Politics The impact of ChatGPT on politics is a matter of concern for many experts. The model has the potential to generate fake news and disinformation, spreading false information and misleading the public. Additionally, it may be used to manipulate public opinion and interfere with political campaigns and elections, compromising the democratic process. It is important for policymakers to consider these potential risks and to work to regulate the use of ChatGPT in a way that protects the public and ensures that the technology is used in a responsible and ethical manner. Impact on Entertainment The impact of ChatGPT on entertainment is likely to be significant as well. The model has the potential to automate tasks such as screenwriting and story creation, freeing up artists and creatives to focus on more strategic and creative tasks. Additionally, ChatGPT has the potential to revolutionize the gaming industry by enabling the creation of highly interactive and immersive games that respond to player actions and decisions. Finally, the model may be used to create new forms of entertainment that blur the lines between human-generated content and content generated by AI, leading to new and exciting creative possibilities. Conclusion In conclusion, the impact of ChatGPT on business, politics, and entertainment, as well as on jobs, patents, and society as a whole, is likely to be significant and far-reaching. It is a powerful tool with the potential to automate tasks, streamline workflows, and revolutionize industries. However, it is also a technology that poses potential risks and challenges, and it will be important for businesses, policymakers, and the public to work together to ensure that it is used in a responsible and ethical manner. The potential impacts of ChatGPT on these areas are complex and far-reaching, and they will likely continue to evolve as the technology becomes more sophisticated and more widely adopted.
2022-12-01T00:00:00
https://vocal.media/journal/the-impacts-of-chat-gpt-on-jobs-society-and-industries
[ { "date": "2022/12/01", "position": 64, "query": "ChatGPT employment impact" }, { "date": "2023/01/01", "position": 64, "query": "ChatGPT employment impact" }, { "date": "2023/07/01", "position": 63, "query": "ChatGPT employment impact" }, { "date": "2023/08/01", "position": 64, "query": "ChatGPT employment impact" }, { "date": "2023/09/01", "position": 64, "query": "ChatGPT employment impact" }, { "date": "2023/10/01", "position": 64, "query": "ChatGPT employment impact" }, { "date": "2023/12/01", "position": 67, "query": "ChatGPT employment impact" }, { "date": "2024/05/01", "position": 64, "query": "ChatGPT employment impact" }, { "date": "2024/07/01", "position": 65, "query": "ChatGPT employment impact" }, { "date": "2024/08/01", "position": 64, "query": "ChatGPT employment impact" }, { "date": "2024/12/01", "position": 66, "query": "ChatGPT employment impact" }, { "date": "2025/02/01", "position": 69, "query": "ChatGPT employment impact" }, { "date": "2025/05/01", "position": 69, "query": "ChatGPT employment impact" } ]
The Looming Impact of AI on U.S. Jobs - Infinitive
The Looming Impact of AI on U.S. Jobs
https://infinitive.com
[ "Jonathan Breen" ]
Knowledge workers are not immune. The education tech company Chegg, for example, saw its stock plunge when students flocked to ChatGPT for homework help – and ...
AI’s Looming Threat to American Jobs Productivity up, employment down? Artificial intelligence is often praised for boosting productivity, but many business leaders and analysts warn it could significantly reduce employment in the coming years. In the United States, early signs already show AI-driven automation replacing workers or slowing hiring. While AI will create some new roles, the near-term net effect looks heavily negative for jobs. As Microsoft’s CEO Satya Nadella put it, we are witnessing “a new era of AI” that requires companies to “do more with less” – a mantra increasingly achieved by cutting headcount and deploying AI solutions. This post examines how AI has begun displacing U.S. workers, what top executives are saying, and projects year-by-year job gains vs. losses from 2025 through 2029, leaning toward a cautious (and bleak) outlook on employment. Early Signs: AI-Driven Job Reductions Have Begun It’s already happening in tech. Over the past two years, numerous companies have explicitly or implicitly attributed layoffs to AI adoption. Tech firms lead the way. In January 2023, Microsoft announced 10,000 layoffs while simultaneously pouring billions into OpenAI (maker of ChatGPT). A few months later, Meta’s Mark Zuckerberg cut another 10,000 jobs and outlined plans for “heavy investment in AI”. These moves signaled that even Big Tech, traditionally a job engine, is trimming staff as it pivots to AI. Google likewise began 2024 with hundreds of layoffs in its ad division, at the same time deploying AI to streamline customer support and ad sales. According to The Wall Street Journal, Google’s cuts were part of reining in costs and refocusing resources on AI development. Other sectors are following suit. UPS, the 116-year-old logistics giant, announced it would eliminate 12,000 jobs (primarily in management) in early 2024 – and notably said those positions “won’t return” even when business picks up. UPS has aggressively rolled out machine learning to automate tasks like pricing customer shipments, reducing the need for human staff in those functions. Similarly, asset manager BlackRock cut 600 roles in 2024, with its CEO citing industry shifts and new technologies (AI among them) as drivers. While BlackRock didn’t directly blame AI, the firm’s memo acknowledged that automation was reshaping workflows. Knowledge workers are not immune. The education tech company Chegg, for example, saw its stock plunge when students flocked to ChatGPT for homework help – and it laid off 4% of its workforce in 2023 as a result. Q&A site Stack Overflow, facing declining traffic as developers turn to AI coding assistants, also downsized staff. In another high-profile case, the Wall Street Journal noted Duolingo cut 10% of its contractors in late 2023, explaining that AI could now handle some of the content creation those workers did. Media and creative roles are among the first casualties. The web portal MSN has used AI to generate news stories since 2020, allowing it to lay off dozens of human journalists. Marketing agencies are experimenting with AI content generators in place of copywriters and designers – in China, ad firm BlueFocus outright replaced its creative staff with generative AI in 2023. And in the U.S., Salesforce reportedly cut around 700 jobs in 2023, with insiders speculating the roles would be backfilled by AI systems as the company doubled down on automation. From call centers to copywriting, real examples of AI substitutions are piling up. 
What Tech Leaders and Executives Are Saying Corporate leaders now acknowledge AI’s threat to jobs. Sam Altman, CEO of OpenAI (creator of ChatGPT), has been blunt about the fallout: “AI is going to eliminate a lot of current jobs, and there will be classes of jobs that totally go away,” he said. Importantly, Altman adds that AI will also create new jobs and change many existing ones – but the transition may be painful. Executives at outplacement firm Challenger, Gray & Christmas observe a similar trend: “This wave of [AI] technology is a potential replacement or an enhancement for lots of critical-thinking, white-collar jobs,” noted SVP Andy Challenger. Roles up and down the corporate ladder, including middle managers and analysts, are now in AI’s crosshairs. Gartner predicts that by 2026, 20% of large organizations will use AI to “flatten” management, eliminating over half of current middle-management positions. The forecast reflects AI’s growing ability to handle administrative and analytical tasks that managers do today. NVIDIA’s CEO Jensen Huang even outlined an audacious vision: deploying 100 million AI “digital employees” alongside just 50,000 human workers. That kind of 2000:1 ratio illustrates how dramatically AI could scale up productivity with minimal human labor. As Huang’s plan suggests, one human worker equipped with armies of AI assistants could potentially do the work of hundreds. Even traditionally optimistic CEOs are preparing for workforce upheaval. Arvind Krishna, CEO of IBM, made headlines by pausing hiring for roles that AI could replace. Krishna estimates that up to 30% of IBM’s back-office jobs (about 7,800 roles) can be automated by AI in the next few years. “We’ll be very selective filling jobs that don’t touch our clients or technology,” an IBM spokesman explained, signaling that support functions like HR are prime targets for AI-driven cuts. And Meta’s Mark Zuckerberg – despite publicly downplaying that AI caused its 2023 layoffs – has reorganized Facebook around AI initiatives, implicitly accepting that fewer traditional roles are needed. In an internal memo he warned 2024 would be an “intense year” of efficiency, after already cutting thousands of staff to refocus on AI and automation. Not all tech voices are doom-and-gloom. Some emphasize augmentation over replacement. Many CEOs claim AI will assist employees rather than fire them. In practice, however, the narrative of “augmentation” often coincides with headcount reductions. For example, UPS’s CEO Carol Tomé highlighted that new AI tools let sales teams draft proposals without pricing analysts – and indeed UPS then shed many of those analyst jobs. Executives may prefer to avoid saying “AI took our jobs,” but evidence shows they are quietly using AI to do more with fewer people. By one count, U.S. employers announced over 4,600 AI-related job cuts from May to October 2023. Bloomberg found this is likely an undercount, as companies often keep AI’s role in layoffs “under the radar” to avoid bad press. In short, tech leaders foresee major disruption: fewer hires, leaner teams, and a premium on AI skills for those who remain. The Gathering Storm: 2025 and Beyond. So far, the overall impact of AI on U.S. employment has been modest – the unemployment rate remains low, and only about 17,000 U.S. jobs were explicitly lost to AI between mid-2023 and late 2024. But analysts suggest this may be the “lull before the storm”. 
A 2024 World Economic Forum survey found 40% of employers worldwide plan to reduce their workforce between 2025 and 2030 wherever AI can perform tasks. Likewise, in PwC's 2024 CEO poll, 1 in 4 CEOs said generative AI would lead to workforce cuts of at least 5% in the short term. Many corporate leaders are simply waiting to see AI proven effective before making deeper cuts. As the technology matures, those cuts could accelerate rapidly.

Major financial firms anticipate shrinking staff with AI. Bloomberg Intelligence reports that global banks may slash up to 200,000 jobs within 3–5 years as AI automates routine work. On average, banking tech executives expect about a 3% reduction in their workforce, with nearly a quarter of big banks projecting workforce declines of 5–10% due to AI adoption. The roles most at risk are back-office clerks, operations staff, and customer service reps – essentially any job involving routine, repetitive tasks. As one senior analyst put it, "AI will not eliminate them fully, rather it will lead to workforce transformation." In plain terms, many traditional jobs will disappear, replaced by a smaller number of new AI-focused roles.

AI is creating new jobs – just not enough to offset the losses in the near term. Demand has spiked for machine learning engineers, AI ethicists, prompt writers, and data scientists. The U.S. Bureau of Labor Statistics projects strong growth for AI-skilled occupations through 2033. Yet those specialist positions are relatively few. For example, when Dropbox pivoted to an "AI-first" strategy in 2023, it laid off 500 people but announced plans to hire just dozens with AI expertise. Similarly, Meta's "year of efficiency" meant trimming thousands of jobs to free budget for a smaller number of AI developer roles. In short, AI-related hiring is booming, but it's concentrated in highly skilled jobs, while the technology is simultaneously eroding many clerical, support, and entry-level positions.

The balance of jobs gained versus jobs lost due to AI is expected to tilt negative through the latter 2020s. A Goldman Sachs analysis estimated AI could ultimately "expose" 300 million jobs globally to automation. For the U.S., one think-tank study projected that 12% of American jobs could be wiped out by AI by 2030. That would be tens of millions of jobs lost. Even if new AI-powered industries create millions of different jobs, it likely won't happen fast enough to offset the job losses. As the Harvard Business Review observed, AI may not cause long-term mass unemployment, but "significant short-term job losses" are highly likely in the coming years. The consensus in the business press – from Bloomberg to The Economist – is that the late 2020s will bring turbulence and displacement in the labor market before things eventually stabilize.

Year-by-Year U.S. Job Impact Forecast (2025–2029)

By the numbers: taking these trends and executive signals into account, below is an estimated annual forecast for jobs gained due to AI (new positions created by AI-driven growth) versus jobs lost due to AI (roles eliminated or not filled because AI does the work). These figures assume AI adoption continues to accelerate across industries and that companies act on their current plans to streamline workforces. (All numbers are approximate and represent the net change attributable specifically to AI each year, based on expert reports and surveys.)
2025: Jobs gained: ~50,000 (surge in hiring of AI specialists, data scientists, and engineers); jobs lost: ~300,000 (first broad wave of AI-driven layoffs in sectors like tech, media, customer service, and retail). Early adopters of generative AI – from call centers deploying chatbots to online publishers using AI content – will trim staff. Net effect: –250,000 jobs.

2026: Jobs gained: ~100,000; jobs lost: ~600,000. AI tools become mainstream in offices, leading many firms to not replace departing workers. White-collar automation (e.g. accounting, marketing content generation) accelerates. Some middle-manager roles are phased out as AI takes over reporting and analysis tasks. Net effect: –500,000 jobs.

2027: Jobs gained: ~150,000; jobs lost: ~900,000. By this year, AI-driven automation reaches critical mass in finance, healthcare administration, and manufacturing. Employers across industries implement the cost-cutting plans laid out three years earlier. For many companies, this is the peak period of job displacement. Net effect: –750,000 jobs.

2028: Jobs gained: ~200,000; jobs lost: ~1,200,000. With AI now deeply embedded in business processes, efficiency gains allow further cuts – for example, autonomous AI agents handling IT support or supply chain planning. The cumulative impact becomes visible in higher unemployment or slower job creation nationally. Net effect: –1,000,000 jobs.

2029: Jobs gained: ~250,000; jobs lost: ~1,500,000. New AI-driven industries (AI-driven biotech, advanced robotics, etc.) are scaling up and hiring, but by now AI has also permanently replaced millions of traditional jobs (from clerical roles to routine professional jobs). Companies reach a new baseline staff size optimized with AI. Net effect: –1,250,000 jobs.

Summary

By the end of 2029, the U.S. could see on the order of 3.75 million net jobs lost due to AI over the five-year period: roughly 4.5 million gross losses, only partly offset by some 750,000 AI-created roles. This aligns with warnings that around 10–12% of jobs are at high risk from AI automation by decade's end. Such losses would be unprecedented in speed, though not without historical parallel – earlier tech revolutions also displaced workers before new opportunities expanded. It's important to note these figures are estimates; actual outcomes will depend on policy responses, economic conditions, and the pace of AI innovation. Nonetheless, the prevailing sentiment in early 2025 is that AI's impact on employment will be significantly negative in the near term, even as we optimistically anticipate new industries and roles to emerge in the longer run. The next five years will test how resilient and adaptable the U.S. labor market can be in the face of this AI upheaval.
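To make the arithmetic behind this forecast explicit, here is a minimal Python sketch (ours, not the original analysts') that tallies the yearly and cumulative net changes from the figures above:

```python
# Yearly U.S. jobs gained and lost due to AI from the forecast above,
# in thousands of jobs (gained, lost) per year.
forecast = {
    2025: (50, 300),
    2026: (100, 600),
    2027: (150, 900),
    2028: (200, 1_200),
    2029: (250, 1_500),
}

cumulative = 0
for year, (gained, lost) in forecast.items():
    net = gained - lost
    cumulative += net
    print(f"{year}: net {net:+,}k, cumulative {cumulative:+,}k")

# Gross losses total 4,500k against 750k of gains, i.e. a cumulative
# net change of about -3.75 million jobs by the end of 2029.
```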
Sources: The Wall Street Journal; Bloomberg; The Economist; Forbes; Harvard Business Review; World Economic Forum; Challenger, Gray & Christmas; company press releases and executive statements, among others. All data and quotes are from 2024–2025 reports and public comments by business leaders.
2025-04-15T00:00:00
2025/04/15
https://infinitive.com/the-looming-impact-of-ai-on-us-jobs/
[ { "date": "2022/12/01", "position": 75, "query": "ChatGPT employment impact" }, { "date": "2023/01/01", "position": 80, "query": "ChatGPT employment impact" }, { "date": "2023/07/01", "position": 74, "query": "ChatGPT employment impact" }, { "date": "2023/08/01", "position": 75, "query": "ChatGPT employment impact" }, { "date": "2023/09/01", "position": 75, "query": "ChatGPT employment impact" }, { "date": "2023/10/01", "position": 78, "query": "ChatGPT employment impact" }, { "date": "2023/12/01", "position": 84, "query": "ChatGPT employment impact" }, { "date": "2024/05/01", "position": 80, "query": "ChatGPT employment impact" }, { "date": "2024/07/01", "position": 77, "query": "ChatGPT employment impact" }, { "date": "2024/08/01", "position": 80, "query": "ChatGPT employment impact" }, { "date": "2024/12/01", "position": 82, "query": "ChatGPT employment impact" }, { "date": "2025/02/01", "position": 93, "query": "ChatGPT employment impact" }, { "date": "2025/04/16", "position": 4, "query": "AI job creation vs elimination" }, { "date": "2025/04/16", "position": 11, "query": "AI job losses" }, { "date": "2025/04/16", "position": 6, "query": "ChatGPT employment impact" }, { "date": "2025/05/01", "position": 17, "query": "AI job creation vs elimination" }, { "date": "2025/05/01", "position": 93, "query": "ChatGPT employment impact" } ]
ChatGPT Isn't Going To Take Your Job - Monday Morning Economist
ChatGPT Isn't Going To Take Your Job Away
https://www.mondayeconomist.com
[ "Jadrian Wooten" ]
Overall, the impact of AI on the demand for labor is complex and will vary depending on a range of factors. As AI technology continues to advance, it will be ...
Artificial intelligence (AI) is becoming increasingly prevalent in our daily lives, from virtual assistants on our smartphones to self-driving cars on the roads. As AI technology continues to advance, it is transforming a wide range of industries and has the potential to revolutionize many aspects of society.

One way that AI connects with basic economic concepts is through automation. As AI systems become more sophisticated, they are able to perform a growing number of tasks that were previously done by humans. This has led to concerns about the potential for AI to automate away jobs and displace human workers.

However, the demand for labor is not determined solely by the presence of automation. Instead, it is also influenced by a range of factors, including the overall health of the economy, the availability of jobs in different industries, and the skills and experience of the workforce. In some cases, the introduction of AI may lead to the creation of new jobs that did not previously exist. For example, as AI systems become more widely used, there will likely be a growing demand for workers with the skills and knowledge to develop, maintain, and manage these systems.

Overall, the impact of AI on the demand for labor is complex and will vary depending on a range of factors. As AI technology continues to advance, it will be important for policymakers and business leaders to carefully consider its potential effects on the economy and the workforce.

Everything above that line was written by the internet's newest sensation, ChatGPT. All I had to do was enter the following prompt into a text box and the site spit back those paragraphs in about 10 seconds:

Write a short blog post on the prevalence of artificial intelligence and how it connects with basic economic concepts about automation and the demand for labor.

OpenAI, the creator of ChatGPT, intended the site to act as a chatbot. I could have responded to the original output and asked it to change parts of the story or include citations. It would have made those updates within a few seconds. There are some limits to what the chatbot can do since it isn't actively searching the web as we chat, but the results so far have been a mix of terrifying, hilarious, and brilliant.

Once the site launched, users flocked to OpenAI to create an account and play around. CEO Sam Altman tweeted that the site went live on Wednesday, November 30th, and crossed 1 million users by Monday, December 5th. To get a sense of just how fast that is, Chartr put together a graph comparing it to another popular AI app (Lensa) and other popular tech sites (Instagram and Spotify):

Those opening paragraphs linking automation and labor demand are largely accurate. The notion that workers will be replaced by artificial intelligence or automation is based on a concept known as the substitution effect. This effect is driven by the relative price of labor and capital. When the price of capital decreases, firms may be more likely to substitute it for relatively more expensive labor, leading to a decrease in the demand for workers. The underlying assumption, however, is that firms make this change while producing the same amount of products as before. This fear of automation isn't new, but there are a lot of things people get wrong about automation. While the substitution effect is possible, there's also the potential for automation to increase employment through the scale effect.
As the price of automation decreases, so does the average cost of producing whatever product the company wants to sell. Firms can pass some of those savings on to consumers in the form of lower prices, which will result in consumers purchasing more products. If the price drop is large enough, and consumers purchase a lot more products, firms may actually end up hiring more workers and using more capital. This automation paradox doesn't even account for the fact that automation may also create brand-new jobs that we didn't even realize were possible. Forty years ago, most people couldn't imagine jobs like app developer, social media manager, or Uber driver. As old jobs become automated, new jobs are created. The lost jobs are much more obvious, while the new jobs can take time to develop.

There's also a lot of fear among educators that chatbots will destroy the college essay, but I'd argue it's high time we rethink how students are assessed anyway. The Guardian was one of the few newspapers bold enough to say that "[p]rofessors, programmers and journalists could all be out of a job in just a few years." Those are the sort of clickbait statements that have been around for literally 100 years (see below) as people have predicted machines will take someone's job. In one of the first presidential interviews on television, President Kennedy complained that too many workers were being thrown out of the labor force because of automation.

I'm not trying to say that ChatGPT isn't really cool; it will likely impact a wide variety of jobs. I just don't think it'll wipe out entire occupations. Automation has historically targeted routine tasks that are easy to replicate, but it will largely affect people depending on their occupation. The routine parts of my job are the ones I hate doing and would love to automate! Instead of spending an hour proofreading my posts, I can ask a chatbot to do it. This significantly reduces the cost of writing the newsletter and allows me to increase my output. Since the newsletter only comes out once per week, I could write another article each week or I may just start a podcast instead.
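The substitution-versus-scale tug-of-war described above can be made concrete with a stylized model. The Python sketch below is our illustration, not the newsletter's: a constant-returns Cobb-Douglas firm facing iso-elastic product demand. When demand is inelastic, cheaper automation reduces labor demand; when demand is elastic enough, the scale effect wins and labor demand rises.

```python
import math

# Stylized competitive firm: Q = sqrt(L * K), so unit cost is
# c(w, r) = 2*sqrt(w*r), price equals unit cost, and market demand
# is Q = A * p**(-eps).

def labor_demand(w, r, A, eps):
    price = 2 * math.sqrt(w * r)       # competitive price equals unit cost
    quantity = A * price ** (-eps)     # scale effect: cheaper capital -> lower price -> more sales
    labor_per_unit = math.sqrt(r / w)  # substitution effect: cheaper capital -> less labor per unit
    return quantity * labor_per_unit

w, A = 25.0, 1000.0
for eps in (0.5, 2.0):  # inelastic vs. elastic product demand
    before = labor_demand(w, r=16.0, A=A, eps=eps)
    after = labor_demand(w, r=4.0, A=A, eps=eps)  # automation cuts the price of capital
    print(f"demand elasticity {eps}: labor demand {before:.1f} -> {after:.1f}")
```

Running this shows labor demand falling when demand is inelastic (the substitution effect dominates) and rising when demand is elastic (the output expansion more than offsets the shift away from labor), which is exactly the ambiguity the post describes.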
2022-12-01T00:00:00
https://www.mondayeconomist.com/p/chatgpt
[ { "date": "2022/12/01", "position": 76, "query": "ChatGPT employment impact" }, { "date": "2023/01/01", "position": 82, "query": "ChatGPT employment impact" }, { "date": "2023/07/01", "position": 71, "query": "ChatGPT employment impact" }, { "date": "2023/08/01", "position": 70, "query": "ChatGPT employment impact" }, { "date": "2023/09/01", "position": 70, "query": "ChatGPT employment impact" }, { "date": "2023/10/01", "position": 50, "query": "ChatGPT employment impact" }, { "date": "2023/12/01", "position": 48, "query": "ChatGPT employment impact" }, { "date": "2024/05/01", "position": 75, "query": "ChatGPT employment impact" }, { "date": "2024/07/01", "position": 72, "query": "ChatGPT employment impact" }, { "date": "2024/08/01", "position": 75, "query": "ChatGPT employment impact" }, { "date": "2024/12/01", "position": 48, "query": "ChatGPT employment impact" }, { "date": "2025/02/01", "position": 45, "query": "ChatGPT employment impact" }, { "date": "2025/05/01", "position": 45, "query": "ChatGPT employment impact" } ]
The Impact of ChatGPT on Employment: ChatGPT and ... - Amazon.com
Amazon.com
https://www.amazon.com
[]
Amazon.com: The Impact of ChatGPT on Employment: ChatGPT and Future of Work eBook : Khalid, Muhammad Raza, GPT, Chat: Kindle Store.
2022-12-01T00:00:00
https://www.amazon.com/Impact-ChatGPT-Employment-Future-Work-ebook/dp/B0BSP9ZVVY
[ { "date": "2022/12/01", "position": 81, "query": "ChatGPT employment impact" }, { "date": "2023/01/01", "position": 88, "query": "ChatGPT employment impact" }, { "date": "2023/07/01", "position": 83, "query": "ChatGPT employment impact" }, { "date": "2023/08/01", "position": 86, "query": "ChatGPT employment impact" }, { "date": "2023/09/01", "position": 86, "query": "ChatGPT employment impact" }, { "date": "2023/10/01", "position": 84, "query": "ChatGPT employment impact" }, { "date": "2023/12/01", "position": 90, "query": "ChatGPT employment impact" }, { "date": "2024/05/01", "position": 89, "query": "ChatGPT employment impact" }, { "date": "2024/07/01", "position": 88, "query": "ChatGPT employment impact" }, { "date": "2024/08/01", "position": 89, "query": "ChatGPT employment impact" }, { "date": "2024/12/01", "position": 88, "query": "ChatGPT employment impact" }, { "date": "2025/02/01", "position": 58, "query": "ChatGPT employment impact" }, { "date": "2025/05/01", "position": 58, "query": "ChatGPT employment impact" } ]
How AI applications like ChatGPT impact the workplace - PwC
How AI applications like ChatGPT impact the workplace
https://www.pwc.com
[]
AI applications such as ChatGPT can extend capabilities and improve productivity when augmented into existing jobs. For example, in journalism, AI can help ...
Historically, technological innovations have always created a sense of excitement in some and fear in others. When it comes to the impact on jobs, white-collar workers have enjoyed a sense of security, while blue-collar jobs have always experienced the greatest impact through either significant change or simply becoming obsolete. Possibly for the first time, the tables have turned, and white-collar workers are looking at AI with a sense of trepidation.

AI applications such as ChatGPT can extend capabilities and improve productivity when integrated into existing jobs. For example, in journalism, AI can help condense, collate and even articulate whole articles in a matter of seconds. The implication for the workplace is that new jobs will be created, others transformed (possibly sooner than many expect), and some roles will disappear. Nothing new here when it comes to digital innovation, except that the impacted roles could primarily be those of well-trained, digitally savvy and highly skilled people. This is because such technology can facilitate creative tasks, complex problem-solving, drafting and even advising. By way of example, it is possible that a small team of software developers could be replaced by one experienced software developer whose role is augmented by AI applications.

In the short term, this technology will primarily enable workers to focus on value-adding tasks. Lawyers could save hours sifting through legal documents, and medical professionals could save critical time reviewing journals and patient files to identify and map possible treatment plans. HR professionals can ask AI applications to prepare interview questions, summarise CVs and draft job vacancies.
2022-12-01T00:00:00
https://www.pwc.com/mt/en/publications/humanresources/how-ai-applications-like-chatgpt-impact-the-workplace.html
[ { "date": "2022/12/01", "position": 88, "query": "ChatGPT employment impact" }, { "date": "2023/01/01", "position": 96, "query": "ChatGPT employment impact" }, { "date": "2023/03/07", "position": 92, "query": "AI skills gap" }, { "date": "2023/03/07", "position": 61, "query": "AI regulation employment" }, { "date": "2023/03/07", "position": 10, "query": "ChatGPT employment impact" }, { "date": "2023/07/01", "position": 92, "query": "ChatGPT employment impact" }, { "date": "2023/08/01", "position": 97, "query": "ChatGPT employment impact" }, { "date": "2023/09/01", "position": 97, "query": "ChatGPT employment impact" }, { "date": "2023/10/01", "position": 90, "query": "ChatGPT employment impact" }, { "date": "2023/12/01", "position": 97, "query": "ChatGPT employment impact" }, { "date": "2024/05/01", "position": 98, "query": "ChatGPT employment impact" }, { "date": "2024/07/01", "position": 96, "query": "ChatGPT employment impact" }, { "date": "2024/08/01", "position": 98, "query": "ChatGPT employment impact" }, { "date": "2024/12/01", "position": 96, "query": "ChatGPT employment impact" }, { "date": "2025/02/01", "position": 89, "query": "ChatGPT employment impact" }, { "date": "2025/05/01", "position": 89, "query": "ChatGPT employment impact" } ]
AI in the workplace: The benefits of ChatGPT - SheCanCode
AI in the workplace: The benefits of ChatGPT
https://shecancode.io
[ "Kayleigh Bateman" ]
Heading up the research team, Hannah's projects include: the future of work, the implications of AI and automation, and improving the employee experience.
IN JANUARY 2023, CHATGPT PASSED THE MILESTONE OF 100 MILLION USERS GLOBALLY, HIGHLIGHTING THE HUGE POPULARITY IT HAS AMASSED SINCE ITS LAUNCH IN NOVEMBER 2022.

While some are using the tool to tell a joke or write a song, the clear demand for the services it provides suggests that ChatGPT has a very serious future ahead of it. In this piece, Hannah Jeacock, Research Director at MHR, dives into what sets ChatGPT apart, the benefits it can bring as a tool in the workplace and what we might see next.

Hannah is MHR's Research Director. With 18 years' tech experience, she has worked for MHR in both engineering and management roles. Heading up the research team, Hannah's projects include the future of work, the implications of AI and automation, and improving the employee experience. Additionally, Hannah is a champion for Women in Tech and is involved in initiatives to help young people begin their career in tech.

CHATGPT IS AN AI CHATBOT

In its simplest form, the tool can understand and generate natural language text, enabling it to interact with humans in a conversational way. This is made possible by leveraging a large language model that is trained on billions of words from publicly available text on the internet. The model predicts the next best words to use in a sequence, based on the context of not only the words fed into it, but also on the previous words in the sentence (a toy sketch at the end of this piece illustrates the idea). In this sense the term 'Artificial Intelligence' is misleading, as these large language models are not actually intelligent, but instead imitate intelligence by building human-like sentences based on all the words and sentences they have been trained upon.

There are clear limitations. For example, language models can only work with the text they have been trained on; in ChatGPT's case, this was text up to 2021, and so the text it generates will only correspond to information available at that time. More importantly, the tool has a tendency to make up facts, making it an unreliable source of information.

WHAT SETS CHATGPT APART?

Natural language processors have existed in various forms since 1966. So, what sets ChatGPT apart? Firstly, ChatGPT, like similar large language models, can deal with a much wider variety of requests than the majority of chatbots available today. Most other systems are designed to action very specific tasks and will not be able to respond to any requests that fall outside of set parameters. For example, a travel agent's customer service bot will have the ability to answer questions relating to any bookings with the agency and potentially action tasks off the back of related requests. In comparison, ChatGPT can handle a wide array of tasks and answer a range of queries relating to many different topics. From writing stories to creating code, ChatGPT is a significant step up in capability, and is even able to perform sentiment analysis, language translation and text summarisation. The tool's conversational interface is also important, providing human-like responses and the ability to adapt its answers to fit the wider context of a conversation.

HOW CAN BUSINESSES USE CHATGPT?

Like many new technologies, ChatGPT has the potential to drive efficiency in a variety of business-critical tasks. Take HR as an example. ChatGPT can generate job adverts or descriptions, payslips or reports, enabling these tasks to be completed quickly and freeing up an HR team's time to focus on other tasks that require a human touch.
ChatGPT also has a role in other business activities, helping employees to augment and enhance their work. For marketing copy, for example, ChatGPT could be used to draft a first version, highlighting key points and providing information for teams to draw from. This piece could then be reviewed by an employee, who can edit the content and ensure that it is up to the required standard in terms of readability, messaging and audience awareness. At the same time, the tool can help to make sense of discussions around business plans or brainstorming sessions, taking notes from calls and meetings and formulating these into a potential draft strategy which can then be further honed by teams.

At MHR we strongly believe in the value of exploring new technologies like ChatGPT and their impact on both businesses and their employees. This is why we have recently launched MHR Labs, a new hub led by MHR's expert teams of data scientists and software engineers, reviewing and discussing new technologies impacting businesses across the globe.

WHAT'S NEXT FOR CHATGPT?

While ChatGPT is already a household name, this is just the beginning of its journey. The speed of change is already rapid – GPT-4, the latest iteration of the tool, was released in March this year, bringing enhanced problem-solving and reasoning capabilities. As excitement around these developments continues, it will also encourage competitor tools to be created.

However, as with all new technology, the increasing use of these solutions is starting to raise some concerns that must be addressed. For instance, while ChatGPT uses publicly available data, it does not consider copyright licensing, and there are already lawsuits against other generative AI tools arguing that copyrighted data is being used to generate content. Businesses must work to ensure they understand the resulting legal risks and have considered the full implications of using content created by tools like ChatGPT.

There are also growing concerns about what the technology means for the workforce. Employees are likely to question whether AI tech could replace their jobs altogether. This is something businesses, and in particular HR teams, will need to address to avoid potentially significant problems with employee motivation and mental wellbeing. Striking the right message will be critical. The value of tools like ChatGPT lies in their ability to work alongside humans, enhancing the work people do and relieving them of time-intensive or low-value tasks. These tools exist to make employees' lives easier, not to get rid of them altogether. Ensuring that teams understand this should be a priority for businesses as the use of ChatGPT increases.

There is also concern that ChatGPT will start to take on tasks that previously would have been done by employees with less experience, raising the question of how junior employees will learn on the job. Businesses should reflect on how they can adapt to make sure the development of junior employees is not impacted in this way, perhaps instead training them in how to get the most out of these tools to maximise their benefits.

However, the hype around ChatGPT is not without reason. The technology is capable of extraordinary things, and this truly is a tool that has the potential to transform the way we work, bringing significant benefits to both business leaders and their teams.
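To ground the "predicts the next best words" description from earlier in the piece, here is a toy Python sketch (ours, not MHR's): a bigram model that counts word-to-word transitions in a tiny corpus and generates text by always choosing the most frequent continuation. Real large language models do the same kind of next-token prediction, but over subword tokens and with billions of learned parameters rather than raw counts.

```python
from collections import Counter, defaultdict

# Tiny training corpus; real models train on billions of words.
corpus = "the model predicts the next word and the model generates text".split()

# Count which word follows which (a bigram table).
follows = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current][nxt] += 1

# Generate by repeatedly picking the most likely next word.
word = "the"
output = [word]
for _ in range(4):
    word = follows[word].most_common(1)[0][0]
    output.append(word)

print(" ".join(output))  # -> "the model predicts the model"
```

Even this toy shows why such models produce fluent but sometimes fact-free text: they optimize for a plausible continuation, not for truth.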
2023-05-24T00:00:00
2023/05/24
https://shecancode.io/ai-in-the-workplace-the-benefits-of-chatgpt/
[ { "date": "2022/12/01", "position": 92, "query": "ChatGPT employment impact" }, { "date": "2023/01/01", "position": 79, "query": "ChatGPT employment impact" }, { "date": "2023/05/24", "position": 14, "query": "ChatGPT employment impact" }, { "date": "2023/07/01", "position": 97, "query": "ChatGPT employment impact" }, { "date": "2023/08/01", "position": 77, "query": "ChatGPT employment impact" }, { "date": "2023/09/01", "position": 77, "query": "ChatGPT employment impact" }, { "date": "2023/10/01", "position": 94, "query": "ChatGPT employment impact" }, { "date": "2023/12/01", "position": 81, "query": "ChatGPT employment impact" }, { "date": "2024/05/01", "position": 79, "query": "ChatGPT employment impact" }, { "date": "2024/07/01", "position": 78, "query": "ChatGPT employment impact" }, { "date": "2024/08/01", "position": 79, "query": "ChatGPT employment impact" }, { "date": "2024/12/01", "position": 79, "query": "ChatGPT employment impact" } ]
Robotics and the Displacement of Human Labor: An Emerging Risk?
Robotics and the Displacement of Human Labor: An Emerging Risk?
https://riskversity.com
[]
Socioeconomic Inequality: Workers displaced by robotics may struggle to find new employment opportunities, widening the income disparity between different ...
Robotics and the Displacement of Human Labor: An Emerging Risk?

In any classic futuristic story, there's often one common thread: robots have taken over everyday life. From the Jetsons to WALL-E to Black Mirror, the story of artificial intelligence taking over tasks, chores, and, in some cases, people is not new. The rise of robotics and automation has industries and organizations across the globe reconsidering where and how they operate. Organizations are increasingly adopting robotic solutions to enhance productivity, reduce costs, and remain competitive. While the integration of robotics brings advantages, it also presents a significant risk: the displacement of human labor. This emerging risk has profound implications for the workforce, economic stability, and corporate strategies.

In recent history, robotic technologies have advanced rapidly, moving from simple, repetitive tasks to more complex functions that require precision and adaptability. Industries such as manufacturing, logistics, healthcare, and retail are at the forefront of innovation, using robotics and AI to complete tasks and enhance operations. For example, AI has been used in healthcare to identify lung cancer through pattern identification in blood samples. Similarly, in robotics, AI has been used to develop smart machines to pick and sort products in large warehouses. In warfare, AI-equipped drones are effective at carrying surveillance sensors and cameras into dangerous environments without risk to human life. These demonstrate excellent applications for efficiency, precision, and risk mitigation—but these are only some of the benefits of AI in modern operations.

The Benefits Driving Automation

Increased Efficiency and Productivity: Robots can operate 24/7 without fatigue, significantly boosting output and operational efficiency. This capability is particularly valuable in industries that demand high precision and consistency.

Cost Reduction: While the initial investment in robotics can be substantial, the long-term savings in labor costs, reduced error rates, and lower maintenance expenses make it a financially attractive option.

Improved Safety: Robots can perform hazardous tasks that pose risks to human workers, such as handling toxic materials or operating in extreme environments. This not only enhances workplace safety but also reduces liability and compensation costs.

Positive Unintended Consequences: In nearly every introduction of new technologies, the resulting infrastructure has positively impacted GDP and our way of life in unintended ways. Consider how smartphones have increased our ability to communicate. Steve Jobs wouldn't have considered Uber Eats a benefit that would be unlocked for our culture because of the device, and yet busy families everywhere can thank smartphones for the added convenience.

The Risk of Labor Displacement

Despite these advantages, the adoption of robotics and AI also raises concerns about the displacement of human labor. As robots take over tasks traditionally performed by humans, many workers face the threat of job loss. This risk is particularly real in industries with high potential for automation, where repetition is key to productivity.

Job Loss and Economic Impact: The most immediate and visible impact is the potential for significant job losses. This threat has concerned people for years. Workers in roles that involve routine, repetitive tasks are especially vulnerable.
This displacement can lead to economic instability for individuals and communities, increasing unemployment rates and reducing consumer spending power.

Skill Gaps and Workforce Readiness: The transition to a more automated workplace requires a workforce with new skills. There is a growing demand for workers proficient in technology, programming, and robotics maintenance. However, the current workforce may not be adequately prepared for these roles, leading to a significant skills gap.

Socioeconomic Inequality: Workers displaced by robotics may struggle to find new employment opportunities, widening the income disparity between different segments of the population.

Nevertheless, we should learn lessons from the past and from those who feared technological improvements, like the Luddites of the early 19th century. Despite their fears of immediate job losses in the industries impacted, new jobs and industries were unlocked because of the very technologies that they feared and tried to prevent from advancing. For example….

Risk-Mitigation Strategies for Organizations

There are several proactive ways that organizations can stay ahead of (or at least on track with) automation and robotics that may impact their operations. First, organizations can invest in training and reskilling programs to help workers transition to new roles. It's also important to foster innovation: while some jobs may be displaced, automation can also create new opportunities, and organizations can focus on the development of new products and services that generate new employment opportunities. Finally, organizations that consider the broader impact on their communities and work to minimize negative consequences through thoughtful planning and stakeholder engagement will fare the best in this rapidly changing environment.

RiskVersity Can Help Prepare Your Organization

As the world adjusts to more automation and increased robotic applications, RiskVersity can help identify and address the resulting risks. Our team of experts understands how to help you safeguard your organization from current risks and better prepare for future risks that may result. Ready to discuss? Contact RiskVersity today.
2024-08-27T00:00:00
2024/08/27
https://riskversity.com/robotics-and-the-displacement-of-human-labor-an-emerging-risk/
[ { "date": "2022/12/01", "position": 8, "query": "robotics job displacement" }, { "date": "2023/01/01", "position": 7, "query": "robotics job displacement" }, { "date": "2023/02/01", "position": 8, "query": "robotics job displacement" }, { "date": "2023/03/01", "position": 7, "query": "robotics job displacement" }, { "date": "2023/04/01", "position": 93, "query": "automation job displacement" }, { "date": "2023/09/01", "position": 8, "query": "robotics job displacement" }, { "date": "2023/10/01", "position": 92, "query": "automation job displacement" }, { "date": "2023/10/01", "position": 7, "query": "robotics job displacement" }, { "date": "2024/02/01", "position": 88, "query": "automation job displacement" }, { "date": "2024/03/01", "position": 9, "query": "robotics job displacement" }, { "date": "2024/04/01", "position": 93, "query": "automation job displacement" }, { "date": "2024/04/01", "position": 9, "query": "robotics job displacement" }, { "date": "2024/05/01", "position": 94, "query": "automation job displacement" }, { "date": "2024/05/01", "position": 7, "query": "robotics job displacement" }, { "date": "2024/06/01", "position": 91, "query": "automation job displacement" }, { "date": "2024/06/01", "position": 9, "query": "robotics job displacement" }, { "date": "2024/07/01", "position": 96, "query": "automation job displacement" }, { "date": "2024/08/01", "position": 94, "query": "automation job displacement" }, { "date": "2024/08/28", "position": 2, "query": "robotics job displacement" }, { "date": "2024/09/01", "position": 8, "query": "robotics job displacement" }, { "date": "2024/10/01", "position": 92, "query": "automation job displacement" }, { "date": "2024/12/01", "position": 93, "query": "automation job displacement" }, { "date": "2024/12/01", "position": 6, "query": "robotics job displacement" }, { "date": "2025/01/01", "position": 90, "query": "automation job displacement" }, { "date": "2025/01/01", "position": 6, "query": "robotics job displacement" }, { "date": "2025/04/01", "position": 8, "query": "robotics job displacement" }, { "date": "2025/05/01", "position": 6, "query": "robotics job displacement" } ]
The Impact of Robotics on Employment and Motivation of ...
The Impact of Robotics on Employment and Motivation of Employees in the Service Sector, with Special Reference to Health Care
https://pmc.ncbi.nlm.nih.gov
[ "Mohammed Owais Qureshi", "Department Of Human Resource Management", "College Of Business", "King Abdulaziz University", "Rabigh", "Saudi Arabia", "Rumaiya Sajjad Syed", "Department Of Health", "Hospital Management" ]
This research focuses on the impact of robotics on the employment and motivation of employees in the service sector with special reference to health care.
Replacing employees with robots is an inevitable choice for organizations in the service sector, more so in the health care sector because of the challenging and sometimes unhealthy working environments, but, at the same time, the researchers propose that it should be done in a manner that helps in improving the employment and motivation of employees in this sector. Robotics has both positive and negative impacts on the employment and motivation of employees in the retail sector. So far, there has been no substantial research into robotics, especially in the health care sector.

This study is exploratory research, based on secondary data such as books on topics related to robotics, websites, public websites of concerned departments for data and statistics, journals, newspapers and magazines, websites of health care providers, and different printed materials (brochures, etc.). The economy is being lifted by the new concept of robotics, but we cannot be sure of all the possible benefits. At this early stage, it therefore becomes important to find out the possible benefits and limitations associated with robotics, so that the positives can be capitalized on, established, and developed further for the employment and motivation of employees in the health care sector, and for overall economic development. The negatives should also be further studied and mitigated.

The research is based on secondary data taken from different books on the related topics, web portals, public websites of concerned departments for data and other statistics, various journals, newspapers and magazines, websites of selected health care providers, as well as different printed materials (brochures, etc.) collected from them. Substantial information has been gathered from these sources, allowing for appropriate analysis, compilation, interpretation, and structuring of the entire study. Thus, in an attempt to isolate and categorize potential sources of robotics and its impact on the employment and motivation of employees in the health care sector, the available literature is reviewed.

To achieve the main objective, the following subobjectives were set: (1) assess the impact of robotics on employment and employee motivation; (2) assess the impact of robotics on health care; and (3) assess the impact of robotics on employment and employee motivation in health care. The main objective of the research is to shed light on the facilitation of robotics in health care and its impact on the employment and motivation of employees, and also to assess its possible advantages and disadvantages. This is with the intention of raising awareness about the positive and negative effects of robotics on the employees of the health care sector, and finally developing strategies for its use in small, medium, and large health care service providers. This research also provides a basis to study the various factors that should be responsible for attracting health care providers to use robotics in their operations.

Organizations today are faced with increasing labor costs and a shortage of workers, and are thus investing in robotics. Robots never demand raises and are able to work around the clock. Robots can perform tasks that most humans could not possibly do, such as working in challenging conditions, and can perform amazing feats with utmost precision.
What makes a firm ready for the challenges of the future is not just technology but the management of human resources, especially the impact of robots on the employment and motivation of employees in the health care sector. In the past couple of decades, the advancement of robotics technology has been enormous, but at the same time there has not been much research conducted on the effects of robotics on the employment and motivation of employees in the service sector, or in health care in particular. This research focuses on the impact of robotics on the employment and motivation of employees in the service sector with special reference to health care.

In this era of information explosion, human capital has become one of the main driving forces behind the economic success of service sector organizations. This is especially true in the health care sector, which, according to the Centers for Medicare and Medicaid Services [24], consumes over 10% of gross domestic product (GDP) in most developed nations. As a share of the economy, health care has risen from 7.2% of GDP in 1965 to over 16% of GDP today, and it is projected to reach 20% of GDP in just 10 years. Thus, it has become even more important that human resources in health care be managed well and, to do so, an integration of innovative technology with the day-to-day activities of employees should be the prime focus of organizations operating in health care.

2. Materials and methods

2.1. Literature review

A robot is a system that contains sensors, control systems, manipulators, power supplies, and software all working together to perform a task [1]. Motivation is an employee's intrinsic enthusiasm about and drive to accomplish activities related to work; it is the internal drive that causes an individual to decide to take action [2]. The service sector consists of the "soft" parts of the economy, i.e., activities where people offer their knowledge and time to improve productivity, performance, potential, and sustainability. The basic characteristic of this sector is the production of services instead of end products. Services (also known as "intangible goods") include attention, advice, access, experience, and discussion [3]. Health care is the diagnosis, treatment, and prevention of disease, illness, injury, and other physical and mental impairments in humans. Health care can form a significant part of a country's economy. In 2008, the health care industry consumed an average of 9.0% of GDP across the most developed OECD countries. The USA (16.0%), France (11.2%), and Switzerland (10.7%) were the top three spenders [4].

2.2. Impact of robotics on employment and employee motivation

iPhone manufacturer Foxconn (New Taipei City, Taiwan), which has been plagued by a series of labor scandals, has announced plans to add more than one million robots to its workforce. It still has a long way to go, however, as only about 20,000 are currently in use [5]. According to the International Federation of Robotics, overall paid employment has risen in most countries including Brazil, China, the Republic of Korea, Germany, and the USA, but not Japan, which has seen a decline. The statistics mainly show a reduction in employment in manufacturing in the developed countries, often a small reduction. This coincides with an increase in output and an increase in robotics use, except in the case of Japan.
The robot industry itself generates 170,000–190,000 jobs worldwide, to which can be added a similar number of support staff and operators. Despite the rapid increase in the use of robots, the USA has proportionately half the number of robots used by Germany. Germany itself (partly because of a different industry mix) is third, behind Japan and Korea. The concept of a "jobless recovery", where an industry comes out of a recession leaner, needing fewer employees, is only short term; it is likely to lead to more job creation by the leaner, more competitive companies. At the same time, the service sector continues to absorb most of the displaced people, and some of these new service workers owe their jobs to a new robot-driven industry.

The research by the International Federation of Robotics further points out that, although automation displaces people in manufacturing, it almost always increases output. In some cases this allows such an increase in production, and a related decrease in unit price, that it creates a whole new market and generates the need for downstream jobs to get the product to the consumer. This releases employees for other, often new jobs outside manufacturing. An alternative view is that this displaced labor will be more difficult to place in the future, as service robotics may take over some or many of the new job opportunities in human tasks such as banking, fast-food chains, and retail petrol forecourts [6].

David Sims reports in his blog that Drew Greenblatt, the president of Marlin Steel (MD, United States), an American company that manufactures wire baskets and sheet metal products, praises the way robots have helped his company grow 25% and increase operational safety [7]. Greenblatt told Inc.com that when he bought the company, which was the largest bagel basket maker in the country, in the late 1990s, "the most modern technology in the plant was a fax machine," and the top workers could produce a basket by hand every 12 seconds. Today, robots can make five baskets per second, with precision that is "light years beyond what we were capable of before." Greenblatt argues that robots have made his workers more valuable and highly paid than their competitors. "Recently, we won a huge order for sheet metal brackets," he noted. "The brackets were formerly made in China by workers who earned $2.50 an hour and produced perhaps 50 an hour. Our sheet metal operator earns 10 times that rate, but sets up a robot that produces 2,000 brackets an hour." [8]. (That works out to roughly $0.05 of labor cost per bracket in China versus about $0.0125 per bracket for the robot-assisted operator.) However, critics of the automation boom claim that such technological advancements are killing off middle-class jobs. Robots and automated systems have not only eliminated elevator operators and highway toll collectors, but are also making inroads into higher-skilled job functions, and the long-term effect in job losses among human workers may be much more severe than most expect [7].

2.3. Impact of robotics in health care

The Robotics for Healthcare study was funded by the European Commission, DG Information Society and Media, with the aim to investigate and develop a roadmap for the application of robotics in medicine and health care [9]. According to Kinetic Consulting, by 2050 one in four people in the United Kingdom will be over the age of 65 years. Japan's population is the most rapidly ageing in the world—30 million people, accounting for 25% of the population, are over the age of 65 years.
To meet this challenge, health and local authority services must reconfigure, placing greater emphasis on community care and the effective use of technology. One promising technology is robotics [10]. Compared with humans, robots may be quicker to train, cheaper to maintain, easier to refuel and repair, and less prone to be bored by repetitive tasks. They could help the elderly and chronically ill to remain independent, reducing the need for carers and the demand for care homes [10].

According to InformationWeek, since 2000 the da Vinci Surgical System has been used in more than 20,000 surgeries and has paved the way for robotic advancements in health care. In fact, vendors have introduced a number of new robots to better provide care to remote patients, help with various physical therapies and—similar to the da Vinci system—help perform surgery. For example, Magnetic Microbots are a group of tiny robots used in various operations, such as removing plaque from a patient's arteries or helping with ocular conditions and disease screenings. Other robotic advancements are used to better the day-to-day lives of patients, helping them eat, such as the Bestic Arm, or helping a patient regain her ability to walk, like many of Toyota's Healthcare Assistants [11].

In the next few years, thousands of "service robots" are expected to enter the health care sector—picture R2-D2 from Star Wars carrying a tray of medications or a load of laundry down hospital corridors. Fewer than 1,000 of these blue-collar robots currently roam about hospitals, but those numbers are expected to grow quickly [12]. The Aethon TUG, according to the company, working just two shifts 7 days per week, saves the labor of 2.8 full-time equivalent (FTE) employees, yet costs less than a single FTE (a rough sketch of this comparison appears at the end of this article). Aethon TUG safely navigates through hospital corridors, elevators, and departments to get items from point A to point B. It rolls 24 hours, 7 days a week to make both scheduled and on-demand deliveries, and never gets sidetracked from its mission [13]. Swisslog's RoboCourier, a similar delivery system, increases staff efficiency with a run-time of 11 hours/day [14].

Giraff is a mobile communication solution that facilitates the elderly's contact with the outside world. It is remote-controlled, on wheels, and has a camera and monitor [15]. Another robot, called RP-VITA, is the result of a joint development effort between iRobot and InTouch Health. The system features mapping and obstacle detection, as well as avoidance technology and an iPad user interface for control and interaction. The robot can also interface with diagnostic devices and electronic medical records systems. The remote rig will eventually be able to navigate to specified target destinations autonomously [16]. Bestic is a unique innovation for those who want to eat by themselves without requiring help from others. Bestic is a robotic-assisted dining appliance for people who are unable to move their arms or hands. It is designed to be an integral part of a meal by being easily accessible and convenient in a manner that does not intrude on a dignified and pleasant eating experience [17]. A robot called CosmoBot is used by doctors to enhance the therapy of developmentally disabled children aged 5–12 years. Using the robot can make therapy more interesting for children and allows for better success when achieving long-term therapy goals [18].
A robot called AnyBots provides a type of immersive telepresence: instead of focusing merely on audio and video communications, the AnyBots robot allows for movement controlled by a remote operator [19]. Toyota unveiled four robots designed to accomplish ambitious objectives at a Tokyo event. The robots help paralyzed patients walk or balance and help their caretakers gently transport them. The company hopes to commercialize the products sometime after 2013 [9].

There is a well-documented shortage of nurses and direct-care workers in the United States and around the world, which is expected to become more problematic as the older adult population grows and prepares for retirement. In a study of the effects of high patient-to-nurse ratios, Aiken et al [25] showed that each additional patient per nurse was associated with a 7% increase in patient mortality and a 23% increase in nurse burnout. Consequently, studies have suggested that lowering the patient-to-nurse ratio would result in less missed patient care [20]. Thus robotics can play a role in assisting nurses to complete their daily tasks in order to provide better health care; robotic systems in nursing care can support the arduous work of individual professionals and counteract imminent staff shortages [9].

The pharmaceutical industry is continuously improving the quality and increasing the quantity of its products. Health care systems are being set up in more and more countries, and the worldwide demand for pharmaceutical products is on the rise. The pharmaceutical industry was hardly affected by the economic crisis, so investments in robots only slightly decreased in 2009. Similar trends can also be observed in the medical devices industry. In both industries, robot installations will gain momentum in the coming years [6].
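As a back-of-the-envelope illustration of the Aethon TUG comparison quoted above, the sketch below uses our own hypothetical cost figures (not Aethon's): if a delivery robot replaces 2.8 FTEs of labor while its annualized cost stays below one FTE, the implied yearly saving follows directly.

```python
FTE_ANNUAL_COST = 55_000    # hypothetical fully loaded annual cost of one FTE, USD
ROBOT_ANNUAL_COST = 50_000  # hypothetical annualized robot cost ("less than a single FTE")
FTE_REPLACED = 2.8          # labor replaced, per the vendor claim quoted above

saving = FTE_REPLACED * FTE_ANNUAL_COST - ROBOT_ANNUAL_COST
print(f"Implied annual saving: ${saving:,.0f}")  # $104,000 under these assumptions
```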
2014-07-29T00:00:00
2014/07/29
https://pmc.ncbi.nlm.nih.gov/articles/PMC4266810/
[ { "date": "2022/12/01", "position": 16, "query": "robotics job displacement" }, { "date": "2023/01/01", "position": 15, "query": "robotics job displacement" }, { "date": "2023/02/01", "position": 15, "query": "robotics job displacement" }, { "date": "2023/09/01", "position": 15, "query": "robotics job displacement" }, { "date": "2023/10/01", "position": 18, "query": "robotics job displacement" }, { "date": "2024/03/01", "position": 16, "query": "robotics job displacement" }, { "date": "2024/04/01", "position": 16, "query": "robotics job displacement" }, { "date": "2024/05/01", "position": 20, "query": "robotics job displacement" }, { "date": "2024/06/01", "position": 15, "query": "robotics job displacement" }, { "date": "2024/09/01", "position": 15, "query": "robotics job displacement" }, { "date": "2024/12/01", "position": 18, "query": "robotics job displacement" }, { "date": "2025/01/01", "position": 18, "query": "robotics job displacement" }, { "date": "2025/04/01", "position": 22, "query": "robotics job displacement" }, { "date": "2025/05/01", "position": 20, "query": "robotics job displacement" } ]
AI and jobs: 4 key steps governments can take to limit ... - AI for Good
AI and jobs: 4 key steps governments can take to limit job displacement
https://aiforgood.itu.int
[]
A number of high-profile studies have predicted high levels of job displacement as a consequence of automation across developed economies.
News items about advances in artificial intelligence (AI) are coming thick and fast these days. As AI systems become better at sorting data, finding patterns, and making predictions, these algorithms are undertaking an ever-increasing range of tasks, from filtering email spam, to delivering takeaways, to tackling more sophisticated problems such as providing legal advice or deciding whether you are paid a visit by the police.

With this global focus on AI, it is perhaps unsurprising that increased attention is being paid to the impact of AI-powered automation on jobs and employment. Just as the Industrial Revolution drove automation of repetitive physical work, the 'Intelligence Revolution' is predicted to have the same effect on a widening range of intellectual tasks, meaning that more jobs can potentially be performed by robots and computers.

A number of high-profile studies have predicted high levels of job displacement as a consequence of automation across developed economies. Most recently, the Organisation for Economic Co-operation and Development (OECD) found that 14% of jobs in 32 participating countries are at high probability of automation, with a further 32% of jobs likely to be significantly changed by automating technologies. These figures are somewhat lower than previous estimates by Oxford academics Carl Frey and Michael Osborne in 2013, or by consultancy PwC earlier this year, but are higher than the OECD's own estimates from 2016.

There is in fact great disagreement among economists both on how many jobs are likely to be displaced by AI-fuelled automation, and on whether the work that is automated will be replaced by new (and possibly more fulfilling) alternative work. The history of the Industrial Revolution would suggest that more jobs will be created than are lost. History does not always repeat itself, however: if machines become better than humans at intellectual tasks, what will be left for humans to do? One thing that almost all economists agree on is that change is coming and that its scale and scope will be unprecedented.

A UK study on AI and automation

In order to encourage a more sophisticated understanding of the future of work, and to increase political engagement with this topic, we applied calculations of the automatability of jobs in different industrial sectors to United Kingdom Parliamentary constituency-level employment data. Our findings are startling. Although the overall figure for the whole of the UK is that 30% of jobs are at high risk of automation by the early 2030s, when looking across the country, the proportion of jobs at high risk varies from 22% to 39%. Most significantly, the highest levels of future automation are predicted in Britain's former industrial heartlands in the Midlands and the North of England. These are areas which have already suffered from deindustrialization, and many of them are already unemployment hotspots. Factors that predicted a higher proportion of high-risk jobs included a reliance on jobs in the manufacturing and transport/logistics sectors in the local economy. Our work suggests that automation will impact different geographic regions differently.

Our report also included opinion poll data from our annual survey of the UK public's attitudes to AI. The poll shows that despite evidence suggesting high levels of automation are coming, only 7% of respondents said they were worried about the impact of automation on their own jobs, and only 28% were worried about jobs in their local area.
It seems a deeper public debate about the future of work is urgently required.

Automation – likely to impact regions differently

Our work suggests that automation will impact different geographic regions differently. This point is also made by other researchers, including those at the OECD and PwC, who have predicted differential impacts of automation on different genders and socioeconomic classes. With these differential impacts comes the risk of exacerbating existing socioeconomic inequalities. A recent Brookings Institution report found that although the economy has grown due to automation, the share going to worker earnings has shrunk. Unequal distribution of the proceeds of automation may have destabilizing effects on societies worldwide.

As artificial intelligence supercharges automation over the next decade, and this hits distinct groups differently, there will be profound social and political consequences. Any policy response to the challenges of the future of work must take account of how automation will impact different groups of people.

Governments – maximizing the opportunities and minimizing the risks of automation

Future Advocacy suggests that governments could take the following steps to maximize the opportunities and minimize the risks of automation:

1. Commission and support detailed research to assess which employees are most at risk of job displacement by automation in their countries. It is essential that we better understand how impacts will differ by employment sector, geography, age group, gender, educational attainment and socio-economic group.

2. Develop smart, targeted strategies to address future job displacement, based on the results of research into the differential impact of automation by sector, region and demographic group in their countries. The importance of targeting these interventions to those most at risk cannot be overemphasised. Such interventions could include supporting businesses to retrain employees, and providing financial and psychological support to the people impacted.

3. Adapt the local education system to maximise the opportunities and minimise the risks created by AI. While STEM skills will be crucial in the future world of work, education systems should not restrict themselves to focusing on these alone. They must provide forward-looking and future-proof training in creativity and interpersonal skills, which will be less automatable in the longer term. Lifelong learning, student-led learning, and the personalization of education will underpin the resilience and adaptability necessary to thrive in the workplace of the future. Educational policy must also support initiatives that encourage underrepresented sectors of society (including women and ethnic minorities) to receive training in AI development and deployment.

4. Conduct research into alternative income and taxation models that result in fairer distribution of the wealth that these technologies will create. This could include undertaking well-designed trials of Universal Basic Income along the lines of those currently underway in Finland, Scotland and Canada. Fiscal and welfare policies must be updated to ensure that wealth is not increasingly concentrated in the hands of a few commercial entities who own robots and other automated technologies.

See Future Advocacy's full report on The Impact of AI in UK Constituencies.
The 2nd AI for Good Global Summit at ITU headquarters in Geneva, 15-17 May, will take action to ensure that artificial intelligence accelerates progress towards the United Nations Sustainable Development Goals (SDGs). Learn more about the Global Summit here.
2018-04-10T00:00:00
2018/04/10
https://aiforgood.itu.int/ai-and-jobs-4-key-steps-governments-can-take-to-limit-job-displacement/
[ { "date": "2022/12/01", "position": 25, "query": "robotics job displacement" }, { "date": "2023/01/01", "position": 25, "query": "robotics job displacement" }, { "date": "2023/02/01", "position": 26, "query": "robotics job displacement" }, { "date": "2023/04/01", "position": 41, "query": "automation job displacement" }, { "date": "2023/08/01", "position": 95, "query": "government AI workforce policy" }, { "date": "2023/09/01", "position": 41, "query": "automation job displacement" }, { "date": "2023/09/01", "position": 26, "query": "robotics job displacement" }, { "date": "2023/10/01", "position": 42, "query": "automation job displacement" }, { "date": "2023/10/01", "position": 26, "query": "robotics job displacement" }, { "date": "2023/12/01", "position": 43, "query": "automation job displacement" }, { "date": "2024/01/01", "position": 98, "query": "government AI workforce policy" }, { "date": "2024/02/01", "position": 50, "query": "automation job displacement" }, { "date": "2024/03/01", "position": 42, "query": "automation job displacement" }, { "date": "2024/03/01", "position": 26, "query": "robotics job displacement" }, { "date": "2024/04/01", "position": 35, "query": "automation job displacement" }, { "date": "2024/04/01", "position": 26, "query": "robotics job displacement" }, { "date": "2024/05/01", "position": 47, "query": "automation job displacement" }, { "date": "2024/05/01", "position": 27, "query": "robotics job displacement" }, { "date": "2024/06/01", "position": 48, "query": "automation job displacement" }, { "date": "2024/06/01", "position": 25, "query": "robotics job displacement" }, { "date": "2024/07/01", "position": 40, "query": "automation job displacement" }, { "date": "2024/07/01", "position": 98, "query": "government AI workforce policy" }, { "date": "2024/08/01", "position": 44, "query": "automation job displacement" }, { "date": "2024/08/01", "position": 99, "query": "government AI workforce policy" }, { "date": "2024/09/01", "position": 25, "query": "robotics job displacement" }, { "date": "2024/10/01", "position": 48, "query": "automation job displacement" }, { "date": "2024/12/01", "position": 42, "query": "automation job displacement" }, { "date": "2024/12/01", "position": 72, "query": "government AI workforce policy" }, { "date": "2024/12/01", "position": 25, "query": "robotics job displacement" }, { "date": "2025/01/01", "position": 53, "query": "automation job displacement" }, { "date": "2025/01/01", "position": 24, "query": "robotics job displacement" }, { "date": "2025/02/01", "position": 33, "query": "automation job displacement" }, { "date": "2025/03/01", "position": 33, "query": "automation job displacement" }, { "date": "2025/04/01", "position": 72, "query": "government AI workforce policy" }, { "date": "2025/04/01", "position": 7, "query": "robotics job displacement" }, { "date": "2025/05/01", "position": 21, "query": "robotics job displacement" } ]
[PDF] Just reallocated? Robots, displacement and job quality - ifo Institut
https://www.ifo.de
[]
In our work we focus only on displaced workers, inspecting whether exposure to robots pushes them towards jobs of lower quality and assessing whether sector or ...
2022-12-01T00:00:00
https://www.ifo.de/sites/default/files/events/2022/pillars22-Cuccu.pdf
[ { "date": "2022/12/01", "position": 39, "query": "robotics job displacement" }, { "date": "2023/01/01", "position": 38, "query": "robotics job displacement" }, { "date": "2023/02/01", "position": 39, "query": "robotics job displacement" }, { "date": "2023/09/01", "position": 38, "query": "robotics job displacement" }, { "date": "2023/10/01", "position": 39, "query": "robotics job displacement" }, { "date": "2024/03/01", "position": 39, "query": "robotics job displacement" }, { "date": "2024/04/01", "position": 39, "query": "robotics job displacement" }, { "date": "2024/05/01", "position": 78, "query": "robotics job displacement" }, { "date": "2024/06/01", "position": 38, "query": "robotics job displacement" }, { "date": "2024/09/01", "position": 39, "query": "robotics job displacement" }, { "date": "2024/12/01", "position": 38, "query": "robotics job displacement" }, { "date": "2025/01/01", "position": 39, "query": "robotics job displacement" }, { "date": "2025/04/01", "position": 44, "query": "robotics job displacement" }, { "date": "2025/05/01", "position": 80, "query": "robotics job displacement" } ]
Robots Aren't Taking Jobs Away: The Future of Warehouse Labor
Robots Aren’t Taking Jobs Away: The Future of Warehouse Labor
https://made4net.com
[]
When robots are introduced into a warehouse, warehouse associates may fear that their jobs will be displaced by the bots, creating what's known as a “lights ...
Thank you to our partner, Locus Robotics, for this guest post on the future of warehouse labor.

By Mary E. Hart

Last year, the warehouse and transportation industry had a record 490,000 job openings according to the Washington Post, and that number has only grown. Yet the ongoing narrative is that robotics and automation, including autonomous mobile robots (AMRs), are taking away jobs. Should warehouse workers really be worried that they're going to lose their jobs to robots? What's needed instead is an understanding of the reason for those fears and the benefits of warehouse robotics for associates.

The Myth of Lights Out Factories

When robots are introduced into a warehouse, warehouse associates may fear that their jobs will be displaced by the bots, creating what's known as a "lights out factory". A lights out factory is one that is entirely automated, without human workers.

During a recent MassRobotics panel, "The Future of Work: Jobs in an Age of Robotics", panelist Ira Moskowitz, CEO of the Advanced Robotics for Manufacturing (ARM) Institute, dispelled that fear while acknowledging that the narrative does exist. He stated that in his 30 years of working in the industry, he has never seen someone deploy robots at a facility to replace a person. Instead, the robots take on the mundane or dangerous tasks, leaving the humans still employed and working on different tasks.

On that same panel, Julie Shah, Director of the Interactive Robotics Group at the MIT Computer Science and Artificial Intelligence Laboratory, noted that she had a discussion with a factory that was bringing robots into its facility. When the question came up of whether the goal was a "lights out" factory, one person at the factory replied that it wouldn't happen, as a factory without people is a factory that isn't innovating. Shah built on Moskowitz's point, adding that robotics technology doesn't take away jobs. Instead, robots do the manual tasks that humans don't need to do and enhance human capabilities. That's the future of warehouse labor.

Don't Fear Technology

Warehouse associates also fear that even if robots don't displace them, they won't be able to figure out how to work with the robots due to the new technology. Tom Ryden, Executive Director of MassRobotics, offered an analogy on the panel: think of a tractor from years ago. It had two pedals, just like a car, and it was easy to drive if you knew how to drive a car. Today, if you get into a tractor to drive it, you'll find a dashboard of technology that is completely different from a car. In order to drive a tractor, you have to be trained. The same holds true for robotics – workers need to be trained on how to use the technology.

United States Secretary of Labor Marty Walsh stated, "We shouldn't be afraid of technology." Instead, the focus should be on making sure young people who are graduating have a pathway to understanding and working with robotic solutions, so they're not fearful of the technology.

While there are robotic solutions in warehouses that require deep training, there are others that don't. One example is the Locus solution. With Locus Robotics, warehouse associates can be trained in just minutes; other solutions might require a week or more of training.
Less Walking for Associates

For warehouse workers, there are many benefits of working with autonomous mobile robots (AMRs), including the "cool" factor and the quick training time, but one of the top benefits is making their jobs less strenuous. Mike Johnson, President of Locus Robotics, notes that associates in traditional warehouses have to walk between 12 and 14 miles a day. When a facility brings in AMRs like the Locus solution, the bots do the majority of the walking while the associates stay in one section.

Warehouse associates love working with AMRs. In fact, one associate told her VP of HR that when she goes home at the end of the day, her feet no longer hurt the way they used to before they brought in LocusBots. Another associate told her manager that working with the LocusBots gave her back the energy she needed "to be the mother she was supposed to be", as she was no longer walking for miles each day.

Brian Lemerise, CEO at Quiet Logistics, said that their warehouse associates think of the collaborative robots (cobots) as their coworkers, and they love working alongside the LocusBots. Lemerise stated, "With LocusBots, warehouse associates don't have to pull heavy carts, pallet jacks, or full cases of products. Instead, they're just walking productively from robot to robot, performing their tasks."

The Locus solution creates happier warehouse associates, which gives you a recruiting and retention edge. At the same time, it dramatically improves worker productivity by 2x to 3x, enabling higher pick rates and decreased cycle times. The future of warehouse labor likely just means happier workers.

Learn more: https://locusrobotics.com/
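To put the walking figures quoted above into rough perspective, a quick computation follows. The 3 mph average walking pace is an assumption added for illustration, not a figure from the article.

```python
# Rough arithmetic on the walking figures quoted above (12-14 miles per
# shift). The 3 mph pace is an assumed value, not from the article.

ASSUMED_PACE_MPH = 3.0

for miles in (12, 14):
    hours = miles / ASSUMED_PACE_MPH
    print(f"{miles} miles/day is roughly {hours:.1f} hours of walking")
```

Under that assumption, an associate spends roughly four to five hours of each shift simply in transit, which is the time AMRs reclaim for picking.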
2022-12-01T00:00:00
https://made4net.com/knowledge-center/robots-arent-taking-jobs-away-the-future-of-warehouse-labor/
[ { "date": "2022/12/01", "position": 63, "query": "robotics job displacement" }, { "date": "2023/01/01", "position": 60, "query": "robotics job displacement" }, { "date": "2023/02/01", "position": 61, "query": "robotics job displacement" }, { "date": "2023/09/01", "position": 66, "query": "robotics job displacement" }, { "date": "2023/10/01", "position": 62, "query": "robotics job displacement" }, { "date": "2024/03/01", "position": 62, "query": "robotics job displacement" }, { "date": "2024/04/01", "position": 61, "query": "robotics job displacement" }, { "date": "2024/05/01", "position": 59, "query": "robotics job displacement" }, { "date": "2024/06/01", "position": 68, "query": "robotics job displacement" }, { "date": "2024/09/01", "position": 66, "query": "robotics job displacement" }, { "date": "2024/12/01", "position": 67, "query": "robotics job displacement" }, { "date": "2025/01/01", "position": 65, "query": "robotics job displacement" }, { "date": "2025/04/01", "position": 67, "query": "robotics job displacement" }, { "date": "2025/05/01", "position": 52, "query": "robotics job displacement" } ]
Why Robots Won't Steal Your Job - SHRM
Why Robots Won't Steal Your Job
https://www.shrm.org
[ "Nahia Orduña" ]
According to the World Economic Forum, 85 million jobs may be displaced by the shift in labor between humans and machines by 2025, while 97 million new ...
According to the World Economic Forum, 85 million jobs may be displaced by the shift in labor between humans and machines by 2025, while 97 million new roles may emerge. These are the "jobs of the future."
2022-12-01T00:00:00
https://www.shrm.org/executive-network/insights/robots-wont-steal-job
[ { "date": "2022/12/01", "position": 67, "query": "robotics job displacement" }, { "date": "2023/01/01", "position": 79, "query": "robotics job displacement" }, { "date": "2023/02/01", "position": 69, "query": "robotics job displacement" }, { "date": "2023/09/01", "position": 64, "query": "robotics job displacement" }, { "date": "2023/10/01", "position": 71, "query": "robotics job displacement" }, { "date": "2024/03/01", "position": 72, "query": "robotics job displacement" }, { "date": "2024/04/01", "position": 71, "query": "robotics job displacement" }, { "date": "2024/05/01", "position": 13, "query": "robotics job displacement" }, { "date": "2024/06/01", "position": 62, "query": "robotics job displacement" }, { "date": "2024/09/01", "position": 64, "query": "robotics job displacement" }, { "date": "2024/12/01", "position": 65, "query": "robotics job displacement" }, { "date": "2025/01/01", "position": 67, "query": "robotics job displacement" }, { "date": "2025/04/01", "position": 62, "query": "robotics job displacement" }, { "date": "2025/05/01", "position": 12, "query": "robotics job displacement" } ]
Will Robots Take My Job?
Will Robots Take My Job?
https://willrobotstakemyjob.com
[]
Discover your job's vulnerability to automation and computerization. Get personalized insights to future-proof your career.
Risk sentiment chart

The Monthly Automation Risk Level Chart visualizes evolving views on job automation risks. Points on the chart represent the monthly aggregated risk level, sourced from user votes and scaled from 0% to 100%, with 100% indicating maximum perceived risk. The risk level for each occupation is calculated as a weighted average, where votes from larger professions have greater influence, reflecting the broader impact of automation risks across the workforce. This chart provides a collective sentiment weighted by employment, offering insights into the automation risks for jobs with a larger share of the workforce. A variant of the chart uses an unweighted average instead.
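A minimal sketch of the employment-weighted average the chart description refers to, assuming made-up occupations, vote means, and employment counts (none of these figures come from the site):

```python
# Illustrative sketch of the weighted average described above: each
# occupation's perceived risk (mean of user votes) is weighted by its
# employment, so larger professions move the aggregate more.
# Occupations and numbers are invented for the example.

occupations = [
    # (name, mean voted risk 0.0-1.0, number employed)
    ("cashiers", 0.80, 3_300_000),
    ("truck drivers", 0.65, 2_000_000),
    ("registered nurses", 0.10, 3_100_000),
]

total_employment = sum(n for _, _, n in occupations)
aggregate_risk = sum(risk * n for _, risk, n in occupations) / total_employment

print(f"Employment-weighted risk level: {aggregate_risk:.0%}")
```

With these toy inputs the aggregate lands near 51%, even though the simple unweighted mean of the three vote scores would be about 52%; the two diverge more as employment counts become lopsided, which is why the site offers both variants.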
2022-12-01T00:00:00
https://willrobotstakemyjob.com/
[ { "date": "2022/12/01", "position": 78, "query": "robotics job displacement" }, { "date": "2023/01/01", "position": 75, "query": "robotics job displacement" }, { "date": "2023/02/01", "position": 77, "query": "robotics job displacement" }, { "date": "2025/02/01", "position": 51, "query": "AI replacing workers" }, { "date": "2025/02/01", "position": 71, "query": "universal basic income AI" }, { "date": "2025/03/01", "position": 72, "query": "universal basic income AI" }, { "date": "2025/04/01", "position": 49, "query": "AI replacing workers" }, { "date": "2025/05/01", "position": 52, "query": "AI replacing workers" }, { "date": "2025/05/01", "position": 72, "query": "universal basic income AI" }, { "date": "2025/06/01", "position": 51, "query": "AI replacing workers" }, { "date": "2025/06/01", "position": 80, "query": "universal basic income AI" } ]
Human-AI Collaboration and Job Displacement: Current Landscape
Human-AI Collaboration and Job Displacement: Current Landscape
https://smythos.com
[ "Rechelle Ann Fuertes" ]
Unlike the fear-driven narratives of robots taking all our jobs, a more nuanced reality is emerging. Workers aren't simply being replaced – they're learning to ...
Imagine arriving at work to find an artificial intelligence system as your new colleague. For millions of workers worldwide, this scenario is rapidly becoming reality. As Forbes reports, it's not AI alone that's changing jobs – it's people equipped with AI tools who are reshaping the workplace landscape.

The rise of human-AI collaboration marks a pivotal shift in how we work. Unlike the fear-driven narratives of robots taking all our jobs, a more nuanced reality is emerging. Workers aren't simply being replaced – they're learning to partner with AI in ways that enhance human capabilities while transforming traditional roles.

Today's workplace stands at a crossroads where artificial intelligence meets human potential. Some employees embrace AI as a powerful ally that handles routine tasks, freeing them to focus on creative problem-solving and strategic thinking. Others approach this technological shift with understandable anxiety about job security and the pressure to adapt to new skills.

What does this mean for your career? The impact varies dramatically across industries and roles. While AI excels at processing data and identifying patterns, it lacks the nuanced understanding, emotional intelligence, and creative problem-solving abilities that make humans irreplaceable in many positions. The key to thriving in this new era lies in understanding how to effectively collaborate with AI rather than compete against it.

This article explores how human-AI collaboration is reshaping modern workplaces, examines genuine concerns about job displacement, and uncovers strategies for adapting to this technological transformation. Whether you're excited about AI's potential or concerned about its impact, understanding this evolving landscape is crucial for navigating your professional future.

"It's not AI alone that will replace people, but rather people equipped with AI tools who will replace those who are not." – Forbes Technology Council

The Evolution of AI in the Workforce

The transformation of artificial intelligence from simple rule-based programs to today's sophisticated systems marks one of technology's most dramatic evolutionary leaps. Back in the 1940s, when the first digital computers emerged, AI existed only in research labs. Today, according to McKinsey's latest research, 72% of organizations have embraced AI technologies – a remarkable surge from the 50% adoption rate seen in previous years.

The early workplace AI systems were straightforward, following rigid if-then rules for basic tasks like data entry and simple calculations. These rule-based systems, while revolutionary for their time, could only handle predetermined scenarios and lacked the ability to learn or adapt. Think of them as digital filing clerks – efficient but inflexible.

As processing power grew and machine learning algorithms matured, AI began tackling more complex challenges. Neural networks, inspired by human brain structure, enabled systems to recognize patterns and learn from experience. This leap forward meant AI could now handle nuanced tasks like customer service inquiries, quality control inspections, and even creative work.

The impact across industries has been profound. In manufacturing, AI-powered robots work alongside humans, handling precision tasks and quality control. Healthcare organizations use AI to analyze medical images and predict patient outcomes. Financial institutions leverage AI for fraud detection and risk assessment, processing millions of transactions in real-time.
"If 2023 was the year the world discovered generative AI, 2024 is the year organizations truly began using—and deriving business value from—this new technology." – McKinsey Global Survey on AI, 2024

Perhaps most striking is the acceleration of AI adoption in professional services. Marketing teams now use AI to analyze consumer behavior and personalize campaigns. Legal departments employ AI for document review and contract analysis. Even human resources has embraced AI for resume screening and candidate matching, though always with human oversight to ensure fairness and accuracy.

Looking ahead, the trajectory seems clear: AI will continue reshaping how we work, with 75% of organizations predicting significant or disruptive changes in their industries. This isn't just about automation – it's about augmentation, enabling humans to focus on higher-value tasks while AI handles routine operations. The key challenge for organizations isn't whether to adopt AI, but how to do so responsibly while maintaining the human element that drives innovation and creativity.

Human-AI Synergy Across Different Sectors

The integration of artificial intelligence across industries is transforming how organizations operate and deliver value. From enhancing medical diagnoses to streamlining manufacturing processes, AI has become an essential partner in driving innovation and efficiency.

In healthcare, AI is improving patient care through better diagnostic accuracy and treatment planning. For instance, according to Philips Healthcare, AI-powered imaging systems can automatically detect anatomical landmarks for precise patient positioning during CT scans, reducing radiation exposure while maintaining image quality. In cardiac care, AI assists physicians by providing automated measurements and quantification, allowing them to focus on higher-level interpretation and patient care.

The manufacturing sector demonstrates equally impressive examples of human-AI collaboration. Production facilities now employ AI systems that work alongside human operators to optimize quality control and maintenance schedules. Factory workers partner with AI-powered visual inspection systems that can detect microscopic defects at speeds far beyond human capability, while maintenance technicians utilize AI predictions to prevent equipment failures before they occur.

In the financial services industry, human-AI synergy has transformed decision-making processes. Investment managers leverage AI algorithms to analyze vast amounts of market data and identify trading opportunities, while human experts apply their strategic thinking and contextual understanding to make final investment decisions. This combination of AI's computational power with human judgment has led to more informed and timely financial decisions.

These collaborative systems continue to evolve as both humans and AI adapt to each other's strengths. While AI excels at processing large volumes of data and identifying patterns, human professionals provide crucial oversight, emotional intelligence, and ethical considerations that ensure responsible implementation of AI-driven solutions.

The success of human-AI collaboration across these sectors highlights a crucial lesson: rather than replacing human workers, AI serves as a powerful tool that enhances human capabilities and allows professionals to focus on more strategic and creative aspects of their work.
This synergy between human expertise and artificial intelligence continues to drive innovation and efficiency across industries.

Job Displacement: Risks and Realities

[Image: A robot illustrating AI's influence on modern work.]

Recent data from the International Labour Organization reveals a nuanced picture of AI's impact on employment – approximately 2.3% of global employment (75 million jobs) faces high exposure to generative AI technology, with higher-income countries showing greater vulnerability at 5.1% of their workforce. However, these figures tell only part of the story.

A comprehensive analysis by the U.S. Bureau of Labor Statistics challenges common assumptions about widespread job losses. Rather than the dramatic displacement often predicted, the evidence suggests a more gradual transformation of work. Jobs aren't simply disappearing – they're evolving, with technology augmenting human capabilities rather than wholly replacing workers.

Clerical support workers face the highest risk of disruption, with women being 2.5 times more exposed to automation risks than men. This disparity stems from the concentration of women in administrative and customer service roles that involve routine, procedural tasks – precisely the type of work that AI excels at automating. However, even in these high-risk sectors, complete job elimination is rare. Instead, we're seeing a shift in role requirements and responsibilities.

The manufacturing sector provides an instructive example of automation's real-world impact. While some assembly line positions have indeed been displaced by robotics, new roles have emerged in robot maintenance, programming, and supervision. This pattern of job evolution rather than wholesale elimination appears consistently across industries adopting AI technologies.

Importantly, the data shows that technology's impact varies significantly by region and economic development level. In Latin America, for instance, many jobs that could benefit from AI's productivity-enhancing effects lack the necessary technological infrastructure, while positions at high risk of automation already use computers extensively. This creates an uneven landscape of technological displacement and opportunity.

Looking ahead, the key challenge isn't mass unemployment but rather managing the transition period for affected workers. History shows that while technological change can be disruptive in the short term, it typically leads to economic growth and new job creation in the long run. The critical factor will be ensuring workers have access to retraining programs and support systems during these transitions.

Mitigating Job Displacement: Strategies and Solutions

Rapid advancement in AI technologies demands proactive measures to address potential job displacement. According to the Harvard Business Review, millions of workers will need comprehensive reskilling in the coming decades. This challenge requires acquiring new skills and, in some cases, changing occupations entirely.

Organizations face an urgent imperative to invest in their workforce's future. Research shows that by 2025, over 85 million jobs may be displaced due to the shift in labor division between humans and machines. However, this technological revolution is also expected to create 97 million new roles better adapted to the emerging human-machine collaboration landscape.

Upskilling programs represent a crucial defense against displacement.
These initiatives help employees enhance their existing capabilities with new digital competencies, allowing them to work alongside AI systems effectively. For instance, manufacturing workers can learn to operate and maintain advanced robotics systems rather than being replaced by them.

Year | Jobs Displaced | Jobs Created
2025 | 85 million    | 97 million
2027 | 83 million    | 69 million

Reskilling presents a more comprehensive solution for workers whose roles face significant disruption. This approach involves training employees in entirely new skill sets, often transitioning them to emerging roles in data analysis, AI oversight, or digital operations. While more intensive than upskilling, reskilling programs offer workers a path to long-term career sustainability.

Companies must also foster a culture of continuous learning. This means establishing internal training programs, partnering with educational institutions, and providing resources for self-directed learning. The most successful organizations view skill development not as a one-time intervention but as an ongoing process essential to both employee and business growth.

"By 2025, 50% of all employees will need reskilling due to adopting new technology. Five years from now, over two-thirds of skills considered important in today's job requirements will change," according to the World Economic Forum.

Government support and public-private partnerships play vital roles in scaling these solutions. Policy initiatives that fund retraining programs, provide educational subsidies, and create skill development frameworks can help ensure that workers across all sectors have access to necessary resources for career transition.

Individual workers must also take initiative in their professional development. This includes staying informed about industry trends, actively pursuing learning opportunities, and developing adaptable mindsets that embrace change rather than resist it. The most resilient workers will be those who view technological advancement as an opportunity for growth rather than a threat.

Leveraging SmythOS for Optimal Human-AI Collaboration

SmythOS emerges as a groundbreaking platform that transforms how organizations approach human-AI collaboration. Rather than replacing human workers, SmythOS enables enterprises to create "blended teams" where artificial and human intelligence work together to achieve unprecedented levels of innovation and productivity.

At its core, SmythOS provides a comprehensive no-code platform that democratizes AI development through its intuitive visual builder interface. This approach allows teams across an organization to create and deploy custom AI agents without requiring specialized technical expertise. As noted by SmythOS Co-Founder and CTO Alexander De Ridder via VentureBeat: "By ensuring students truly understand the future of AI Orchestration and are equipped to walk into companies across the globe with a fundamental understanding of how to build multi-agent systems, we believe we can empower future generations to harness the power of artificial intelligence rather than fear it."

The platform's built-in monitoring capabilities provide unprecedented visibility into AI operations, enabling organizations to track agent performance, resource utilization, and system health from a centralized dashboard. This comprehensive oversight ensures AI systems remain aligned with business objectives while maintaining robust security protocols.
SmythOS addresses job displacement concerns by focusing on augmentation rather than replacement. The platform empowers employees by automating repetitive tasks, allowing them to focus on higher-value activities that require uniquely human skills like creativity, emotional intelligence, and complex decision-making. This collaborative approach enhances overall workplace productivity while preserving meaningful human roles.

Integration capabilities stand as another cornerstone of SmythOS's value proposition, offering connections to over 300,000 apps, APIs, and data sources. This extensive interoperability enables organizations to create seamless workflows between their existing systems and AI agents, maximizing efficiency while maintaining enterprise-grade security controls that ensure autonomous agents operate within secure parameters.

Perhaps most significantly, SmythOS democratizes access to AI development through its free runtime environment for deploying autonomous agents. This approach eliminates traditional infrastructure cost barriers, allowing organizations of all sizes to harness the power of human-AI collaboration without excessive operational overhead. The result is a more inclusive technological future where the benefits of AI advancement can be shared across the broader workforce.

Future Directions and Conclusion

[Image: A human and robotic hand shaking, symbolizing AI connection.]

Human-AI collaboration is evolving rapidly, moving beyond simple automation into an era of true partnership between human intelligence and artificial systems, a shift driven by recent developments in the field. AI's ability to process vast amounts of data and automate routine tasks complements the human strengths of strategic oversight, creative problem-solving, and ethical guidance. This symbiotic relationship will shape the future of technological advancement.

Organizations must prepare for sophisticated AI systems that communicate and exchange knowledge effectively. Future AI platforms will likely demonstrate enhanced autonomy while integrating seamlessly with human workflows. SmythOS, for example, offers a robust framework for developing autonomous agents that operate independently yet remain under human supervision.

Challenges such as security, scalability, and ethical considerations will require careful attention as AI systems grow more complex. Success will depend on platforms that balance powerful automation with responsible implementation practices.

Staying ahead in AI development demands a holistic approach that integrates technological advancement with human expertise. Those who leverage tools and platforms supporting this balanced approach will thrive in the evolving world of AI development.
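As a quick check on the World Economic Forum projections tabulated earlier in this piece, the net change implied for each year can be computed directly; the code below simply restates the figures quoted in the article.

```python
# Net job change implied by the WEF projections quoted in the table
# above (figures as given in the article, in millions of jobs).

projections = {
    2025: {"displaced": 85, "created": 97},
    2027: {"displaced": 83, "created": 69},
}

for year, p in projections.items():
    net = p["created"] - p["displaced"]
    print(f"{year}: {net:+d} million jobs net")

# 2025: +12 million jobs net
# 2027: -14 million jobs net
```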
2024-11-12T00:00:00
2024/11/12
https://smythos.com/managers/hr/human-ai-collaboration-and-job-displacement/
[ { "date": "2022/12/01", "position": 83, "query": "robotics job displacement" }, { "date": "2023/01/01", "position": 71, "query": "robotics job displacement" }, { "date": "2023/02/01", "position": 79, "query": "robotics job displacement" }, { "date": "2024/03/01", "position": 77, "query": "robotics job displacement" }, { "date": "2024/04/01", "position": 77, "query": "robotics job displacement" }, { "date": "2024/05/01", "position": 46, "query": "robotics job displacement" }, { "date": "2024/06/01", "position": 80, "query": "robotics job displacement" }, { "date": "2024/09/01", "position": 77, "query": "robotics job displacement" }, { "date": "2024/11/12", "position": 3, "query": "AI job creation vs elimination" }, { "date": "2024/11/12", "position": 69, "query": "artificial intelligence workers" }, { "date": "2024/11/12", "position": 82, "query": "future of work AI" }, { "date": "2024/11/12", "position": 6, "query": "robotics job displacement" }, { "date": "2025/01/01", "position": 75, "query": "robotics job displacement" }, { "date": "2025/02/01", "position": 93, "query": "AI job creation vs elimination" }, { "date": "2025/06/01", "position": 92, "query": "AI job creation vs elimination" } ]
AUTOMATION AND JOB DISPLACEMENT | Humans - Vocal Media
AUTOMATION AND JOB DISPLACEMENT
https://vocal.media
[]
These include roles in programming, system maintenance, data analysis, AI development, and robotics engineering. Additionally, as businesses adopt automation ...
Automation and Job Displacement

The rapid development of automation technologies has significantly transformed industries across the globe. Automation, which refers to the use of technology to perform tasks that were previously done by humans, is increasingly becoming a central element in manufacturing, services, and even white-collar jobs. While automation has brought about efficiencies, cost reductions, and increased productivity, it has also raised concerns about job displacement. As machines and algorithms take over routine and manual tasks, there is growing anxiety about the future of work, especially for those whose jobs are vulnerable to automation. In this article, we will explore the implications of automation on job displacement, the industries most affected, the potential benefits and challenges of automation, and strategies for managing this technological shift.

The Rise of Automation and Technological Advancements

Automation has been around for centuries, but it has accelerated in recent decades due to advancements in artificial intelligence (AI), robotics, machine learning, and data analytics. These technologies enable machines to learn from experience, adapt to new tasks, and perform complex functions that were previously reserved for humans. Examples include robots on assembly lines, chatbots handling customer service inquiries, and software that can analyze vast amounts of data and make decisions in real-time.

As automation continues to advance, the types of jobs at risk are expanding beyond traditional manufacturing roles. The shift toward more sophisticated forms of automation means that tasks previously done by clerical workers, drivers, and even professionals in sectors like healthcare, finance, and law are now being automated as well.

Job Displacement and the Impact on the Workforce

One of the most significant concerns surrounding automation is its potential to displace jobs. Historically, technological advancements have created new types of work, but the pace and scale of automation today are unprecedented. Jobs that were once considered secure, such as cashiers, factory workers, drivers, and administrative assistants, are now at risk of being replaced by machines.

Industries Most Affected by Automation

(I) Manufacturing

The manufacturing industry has been at the forefront of automation for decades. Robotics, artificial intelligence, and machine learning have automated many aspects of production, reducing the need for manual labor in tasks like assembly, welding, painting, and packaging. While this has led to significant productivity gains, it has also resulted in the loss of millions of manufacturing jobs worldwide, particularly in regions where labor costs are high.

(II) Retail and Customer Service

Retail jobs, especially those in cashiering, stock management, and customer service, are increasingly susceptible to automation. Self-checkout kiosks, online shopping platforms, and AI-driven customer service bots have replaced human workers in many instances. The rise of e-commerce has further reduced the demand for retail employees, leading to job losses in brick-and-mortar stores.

(III) Transportation and Logistics

The transportation sector is also experiencing significant disruption due to automation. Self-driving vehicles, drones for deliveries, and automated warehouses are expected to reduce the need for human drivers, couriers, and warehouse workers.
For instance, companies like Tesla, Waymo, and Amazon are investing heavily in autonomous vehicle technologies, which could revolutionize the way goods and people are transported, while displacing traditional jobs in trucking, taxi services, and delivery.

(IV) Administrative and Clerical Jobs

AI and machine learning have automated many tasks traditionally performed by clerical and administrative workers. For example, scheduling software, virtual assistants, and data entry tools are replacing the roles of personal assistants, secretaries, and office managers. Similarly, algorithmic trading and AI-driven financial analysis are threatening jobs in the finance sector that were once dependent on human expertise.

(V) Healthcare and Legal Services

Even highly skilled professions like healthcare and law are not immune to automation. In healthcare, AI-driven diagnostic tools, robotic surgeries, and telemedicine platforms are enhancing patient care while reducing the need for certain human roles. In law, software that can review contracts, conduct legal research, and predict case outcomes is automating tasks previously handled by paralegals and junior lawyers.

Benefits of Automation

While the displacement of jobs due to automation is a major concern, it is important to consider the potential benefits that automation brings.

(I) Increased Productivity and Efficiency

Automation can increase productivity and efficiency by eliminating repetitive tasks, reducing human error, and speeding up processes. This leads to cost savings for businesses and lower prices for consumers. For instance, automated manufacturing systems can operate 24/7 without breaks, increasing output while minimizing downtime.

(II) Creation of New Jobs

Historically, technological advancements have also created new jobs and industries. While automation may displace some workers, it can also lead to the emergence of new job categories. These include roles in programming, system maintenance, data analysis, AI development, and robotics engineering. Additionally, as businesses adopt automation technologies, they may expand and require more skilled workers to operate and manage these systems.

(III) Improved Working Conditions

Automation can also improve working conditions by taking over dangerous, repetitive, or physically demanding tasks. For example, robots in manufacturing can handle hazardous materials, while AI can analyze medical data faster than humans, providing doctors with more accurate diagnoses. This allows workers to focus on higher-level, more intellectually stimulating tasks.

Challenges of Automation and Job Displacement

The most significant challenge of automation is the potential for widespread job displacement. As machines take over more tasks, workers whose jobs are automated may struggle to find new employment. The skills required for the jobs of the future may be vastly different from those of the past, leaving many workers unprepared for the shift. This issue is particularly pressing for low-skilled workers, who may have limited access to training or education in high-demand fields like technology and data analysis.

Additionally, the uneven distribution of automation's benefits raises concerns about income inequality. While automation can drive economic growth, the wealth generated may not be evenly distributed, with some workers benefiting from higher wages in new tech-related roles, while others face unemployment or lower-wage jobs.
Managing the Shift: Solutions for the Future

To mitigate the negative impacts of automation and job displacement, governments, businesses, and workers need to collaborate on solutions.

(I) Investing in Education and Reskilling

One of the most critical steps in preparing workers for the future is investing in education and training programs that focus on skills in high-demand sectors. Reskilling programs can help displaced workers transition into new roles, especially in fields like technology, healthcare, and advanced manufacturing.

(II) Universal Basic Income (UBI)

UBI is a concept that has gained attention in the context of job displacement. It involves providing all citizens with a guaranteed income regardless of employment status. This approach could provide workers with financial security as they transition to new roles or industries.

(III) Government Support and Workforce Planning

Governments can play a key role in supporting workers through the automation transition by offering retraining programs, unemployment benefits, and job placement services. They can also implement policies to encourage businesses to invest in human capital and support workers displaced by technology.

Conclusion

Automation and job displacement are complex issues that will continue to shape the future of work. While automation presents significant challenges, including job loss and increased inequality, it also offers opportunities for economic growth, productivity gains, and improved working conditions. To ensure a smooth transition, it is essential to invest in education, retraining programs, and social policies that support displaced workers. Only through proactive efforts can society harness the benefits of automation while mitigating its potential negative effects on the workforce.
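The UBI idea mentioned above is easy to put rough numbers on. The population and payment level below are purely illustrative assumptions, chosen only to show the shape of the calculation, not figures from any real proposal or trial.

```python
# Back-of-envelope cost of a universal basic income scheme. Both
# inputs are hypothetical placeholders, not from any real program.

covered_adults = 5_000_000   # hypothetical number of recipients
monthly_payment = 1_000      # hypothetical payment per person

annual_cost = covered_adults * monthly_payment * 12
print(f"Annual cost: {annual_cost:,} (about {annual_cost / 1e9:.0f} billion)")
```

Even at these modest placeholder values the annual bill reaches 60 billion, which is why the article pairs UBI with the question of taxation models for distributing the wealth automation creates.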
2022-12-01T00:00:00
https://vocal.media/humans/automation-and-job-displacement
[ { "date": "2022/12/01", "position": 85, "query": "robotics job displacement" }, { "date": "2023/01/01", "position": 69, "query": "robotics job displacement" }, { "date": "2023/02/01", "position": 82, "query": "robotics job displacement" }, { "date": "2023/04/01", "position": 40, "query": "automation job displacement" }, { "date": "2023/09/01", "position": 49, "query": "automation job displacement" }, { "date": "2023/09/01", "position": 69, "query": "robotics job displacement" }, { "date": "2023/10/01", "position": 39, "query": "automation job displacement" }, { "date": "2023/10/01", "position": 61, "query": "robotics job displacement" }, { "date": "2023/12/01", "position": 36, "query": "automation job displacement" }, { "date": "2024/02/01", "position": 49, "query": "automation job displacement" }, { "date": "2024/03/01", "position": 44, "query": "automation job displacement" }, { "date": "2024/03/01", "position": 63, "query": "robotics job displacement" }, { "date": "2024/04/01", "position": 45, "query": "automation job displacement" }, { "date": "2024/04/01", "position": 62, "query": "robotics job displacement" }, { "date": "2024/05/01", "position": 38, "query": "automation job displacement" }, { "date": "2024/05/01", "position": 72, "query": "robotics job displacement" }, { "date": "2024/06/01", "position": 37, "query": "automation job displacement" }, { "date": "2024/06/01", "position": 76, "query": "robotics job displacement" }, { "date": "2024/07/01", "position": 42, "query": "automation job displacement" }, { "date": "2024/08/01", "position": 40, "query": "automation job displacement" }, { "date": "2024/09/01", "position": 83, "query": "robotics job displacement" }, { "date": "2024/10/01", "position": 37, "query": "automation job displacement" }, { "date": "2024/11/23", "position": 6, "query": "automation job displacement" }, { "date": "2024/11/23", "position": 11, "query": "robotics job displacement" }, { "date": "2024/12/01", "position": 45, "query": "automation job displacement" }, { "date": "2024/12/01", "position": 69, "query": "robotics job displacement" }, { "date": "2025/01/01", "position": 41, "query": "automation job displacement" }, { "date": "2025/01/01", "position": 68, "query": "robotics job displacement" }, { "date": "2025/02/01", "position": 49, "query": "automation job displacement" }, { "date": "2025/03/01", "position": 69, "query": "automation job displacement" }, { "date": "2025/04/01", "position": 69, "query": "robotics job displacement" } ]
How Amazon Uses AI To Automate Work In Its Corporate ...
How Amazon Uses AI To Automate Work In Its Corporate Headquarters
https://kantrowitz.medium.com
[ "Alex Kantrowitz" ]
This commitment to AI in the workplace is newly relevant as powerful tools like Dall-E, ChatGPT, and their ilk make their way into the public's hands.
How Amazon Uses AI To Automate Work In Its Corporate Headquarters

Nearly all companies will soon have workplace AI on par with the tech giants. Here's what to expect.

When I wrote my book, Always Day One, I wanted to open up the tech giants' method of doing business to the outside world. My sense was that by revealing their leadership practices, processes, and internal technology, I could hand their playbook to those seeking to compete with them or emulate the constructive parts of their businesses. As I reported the book, I was struck by how deeply artificial intelligence was already ingrained in their cultures. With in-house AI research labs that rank among the globe's best, the tech giants were automating wide swaths of their operations and changing the nature of work within their companies.

This commitment to AI in the workplace is newly relevant as powerful tools like Dall-E, ChatGPT, and their ilk make their way into the public's hands. As access to this powerful technology spreads, nearly all companies will soon have tools like those I saw inside Amazon, Apple, Facebook, Google, and Microsoft. So work inside our companies will change as well.

For the next two weeks, I'm going to share excerpts from the book looking at one of the most fascinating programs I came across, Amazon's Hands Off The Wheel. Here's part one, and stay tuned for part two next week where we follow the workers in the story and see what happens next. (And you can pick up the book at Amazon, Apple Books, Barnes and Noble, and Bookshop.org.)

How Amazon Uses AI To Automate Work In Its Corporate Headquarters

Humans are predictable, and Amazon knows it well. "Pick a ZIP code, and Amazon can pretty much tell you what people wear, buy, and do in that ZIP code," said Neil Ackerman, an ex–Amazon general manager. "Go from house to house. They wear the same clothes, they eat the same food, they decorate the same, they buy the same stuff. It could be different colors, but they're mostly predictable."

With twenty-five years of data at its disposal, Amazon knows what we want, when we want it, and it's likely already sent the next thing you're going to order to a fulfillment center near you. Amazon knows a surge in winter coat orders is coming in the fall. But not only that, it knows certain ZIP codes buy lots of North Face jackets, so it can load up on North Faces in nearby warehouses. Using this knowledge, Amazon is automating an array of work in its corporate offices under a program called Hands off the Wheel.

Amazon's fulfillment centers get stocked with products before you buy them. And traditionally, Amazon has employed "vendor managers" to make this process run smoothly. A vendor manager working with Tide, for instance, figures out how much detergent to place at each Amazon fulfillment center, when it needs to be there, and how much Amazon will pay per unit. Then, they negotiate the price with Tide and place the orders. The position was prestigious inside Amazon until recently. It was fun, relationship-oriented, and put Amazonians in contact with the globe's top brands.

Then, in 2012, Amazon's senior leadership began examining whether people really needed to be doing these tasks. If the humans were predictable, Amazon's algorithms could potentially determine what products needed to be in which fulfillment centers, when they needed to be there, in what quantity, and for what price. And they could probably do it better.
"Buyers in a traditional manner do the same thing over and over again," Ackerman said. "They get a call, they get a sales pitch, they buy an amount of product, they usually buy the wrong amount because they're humans, and lo and behold, people buy the products, and it's a cycle. When you have actions that can be predicted over and over again, you don't need people doing that. And frankly, computers, or algorithms, or machine learning, are smarter than people."

Understanding this, Amazon's leaders decided to attempt to automate traditional vendor manager responsibilities including forecasting, pricing, and purchasing. People inside Amazon began calling this initiative Project Yoda. Instead of having vendor managers do the work, Amazon would use the Force.

In November 2012, Ralf Herbrich joined Amazon as its director of machine learning. One of his early goals was to get this project off the ground. "I remember when I started. We had a lot of manual decision-making and forecasting still happening," Herbrich, who left Amazon in late 2019, told me via phone from Berlin. "We were starting to look into algorithms; in fact, it was one of my launch projects."

Herbrich and his team — which fluctuated from a few dozen machine-learning scientists to more than a hundred — spent the next few years hunkered down trying to bring Project Yoda to life. They initially tried a few textbook machine-learning approaches, which were ideal for predicting orders for products purchased at high volume, but broke when applied to products purchased sporadically. "They worked nice on maybe one hundred or one thousand products," Herbrich said of the textbook methods. "But we had twenty million to do."

So they tinkered. Each time Herbrich's team came up with a new formula, they used it to simulate the previous year's orders, trying to figure out how it did versus Amazon's actual numbers with human workers. Through trial and error, Herbrich's predictions got good enough that Amazon began putting them right in its employees' workflow tools. Vendor managers now saw predictions for how many units of products to stock in each region. Vendor managers (and their colleagues who assisted with product ordering) used these systems to "augment their decision-making," as Herbrich put it.

In 2015, the initiative once known as Project Yoda became Hands off the Wheel, a name that's less subtle. Instead of simply taking the AI's predictions into account as they made their decisions, Amazon vendor managers would take their hands off the wheel and let the system do its work. Soon, Amazon's senior team set high goals for the percentage of retail employee actions that should be entirely hands off the wheel. Manual interventions were discouraged, and in some cases had to be approved by category managers (quasi-CEOs in their own right).

Vendor managers' jobs soon changed profoundly. "We were not able to order as much as we used to with the freedom and flexibility that we used to be able to," Elaine Kwon, a former vendor manager, told me. "At some point, if I'm preparing for a big holiday, I spend a lot of my time thinking about what to order. That is a buyer's job, to figure out what to buy. That's slowly getting taken away too.
[Management] was like, 'No we're not going to have you guys do that.'"

At a meeting about Hands off the Wheel goals, an ex–Amazon employee I'll call Tim, who asked to remain anonymous, fearing retribution, said he brought up what was quickly becoming evident. "So, just to be blunt, we should probably find different job functions, because we're clearly working ourselves out of a job here?" he said. The room laughed, but Tim was serious. And eventually, the presenter said yes, this would decrease the amount of human involvement. "They essentially said yes but didn't want to put too fine a point on it," Tim explained.

Hands off the Wheel eventually extended across the entire retail organization. Forecasting, pricing, purchasing, and inventory planning are now being done with the assistance of, or entirely through, automation. Merchandising, marketing, and even negotiation are also partially automated inside Amazon. When suppliers want to make a deal with Amazon, they now often negotiate with a computer portal instead of a vendor manager. The car is driving itself.
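The evaluation loop described above, in which a candidate formula replays the previous year's orders and is scored against Amazon's actual numbers, is standard forecasting backtesting. The sketch below uses a simple moving-average forecaster and made-up weekly order counts; Amazon's real models are proprietary, so everything here is a stand-in for the shape of the process, not the method itself.

```python
# Toy backtest in the spirit described above: walk forward through last
# year's orders, forecast each week from the weeks before it, and score
# the forecasts against what actually happened. The moving-average
# model and the data are illustrative stand-ins, not Amazon's methods.

def moving_average_forecast(history, window=4):
    """Forecast next period's demand as the mean of the last `window` periods."""
    recent = history[-window:]
    return sum(recent) / len(recent)

def backtest(actuals, window=4):
    """Return mean absolute error of walk-forward forecasts over `actuals`."""
    errors = []
    for week in range(window, len(actuals)):
        predicted = moving_average_forecast(actuals[:week], window)
        errors.append(abs(predicted - actuals[week]))
    return sum(errors) / len(errors)

weekly_orders = [120, 135, 128, 140, 150, 145, 160, 170, 155, 165]  # made-up units
print(f"Mean absolute error: {backtest(weekly_orders):.1f} units/week")
```

The article's account of trying a formula, simulating last year, and comparing against the human-run actuals maps onto exactly this loop, repeated across candidate models until the error beat the human baseline.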
2023-01-01T00:00:00
2023/01/01
https://kantrowitz.medium.com/how-amazon-uses-ai-to-automate-work-in-its-corporate-headquarters-bbbc48e89769
[ { "date": "2022/12/01", "position": 24, "query": "machine learning workforce" } ]
Engineering Training and Collaboration with Industry
Engineering Training and Collaboration with Industry
https://www.asme.org
[]
Machinists will need to develop new skills in artificial intelligence and machine learning for production, five-axis machines, additive manufacturing, and ...
It is an exciting time in the world of manufacturing. Industry is rapidly changing and adopting technologies, automation, and digitization as part of a new industrial revolution: Industry 4.0. Across the globe, technologies and processes that seemed impossible just a decade ago—artificial intelligence, machine learning, generative design, and digital twins, to name just a few—are now seen as the basis for growth in the coming decade. These emerging technologies will help companies meet new demands from their customers.

Through digital transformation and wholesale business innovation, Industry 4.0 will no doubt bring with it new opportunities to create the most technologically advanced, sustainable, and efficient products that the world has ever known.

If only job seekers showed the same kind of excitement for manufacturing.

There are record numbers of unfilled jobs across the globe, with many organizations predicting even greater shortfalls in the amount of skilled labor before the end of the decade. According to the Manufacturing Institute, the manufacturing skills gap in the U.S. could result in 2.1 million unfilled jobs by 2030.

The new technologies and processes of Industry 4.0 may help companies address skill gaps in their workforces, at least in the short term, but at this pivotal moment, many industry leaders are wondering: What will the industry and workforce look like in another 10 years? And what can be done to reduce the disparity between the needs of businesses and the skills available in the labor market?

To help address uncertainty about the future of work and provide some insight, Autodesk and the American Society of Mechanical Engineers collaborated on a comprehensive research project that included an academic literature review, in-depth phone interviews, and surveys of respondents in the U.S., Canada, and U.K. The resulting white paper, Future of Manufacturing, was released this fall and offers perspectives from both industry and academia that help identify the future workflows and skills needed, with a focus on mechanical engineers, manufacturing engineers, and CNC machinist roles over the next decade.

One of the key takeaways was the need for educators—spanning secondary to higher education, vocational, and technical training—to better prepare students for emerging technologies while still teaching foundational concepts and theories, all while meeting accreditation standards.

How can educators accomplish all of this in a classroom and why is it so important? I’ve personally been in this position, asking myself and my colleagues this exact question. As a former lecturer in the School of Engineering at the University of Warwick, I have experienced firsthand the challenges of incorporating new concepts and technology into existing teaching programs. What I found—and what our research backs up—is that it’s absolutely necessary for academia and industry to collaborate.

The research we conducted to produce Future of Manufacturing told us that as Industry 4.0 adoption progresses, the convergence and digital transformation of job roles across the manufacturing spectrum is becoming increasingly evident. As this transformation unfolds, mechanical engineers, manufacturing engineers, and CNC machinists will work together more closely, leveraging cloud-based software and fully integrated hardware solutions to design and manufacture the products of the future.
As the need for closer and quicker collaboration intensifies, the traditional boundaries between roles will blur and may even start to disappear. These changes will fundamentally shift the job descriptions of many roles and continue to shape skill requirements at every level of the manufacturing process, requiring a new way of thinking about education and training the next generation. When I personally think about the future, I can foresee a time when mechanical and manufacturing engineers of the future may bear more than passing resemblance to the systems engineers, electrical engineers, software engineers, and data scientists of today.

While we may all have our own visions for the future of the industry, we must also know that we cannot do this work in silos. In order to build a well-rounded study, we spoke with a wide array of industry professionals and academic leaders to help gather substantial data that paints a more realistic image of where we are today and where we’re heading tomorrow. From our research, we gained valuable insight into the anticipated changes in each of the three roles we looked at over the next five to 10 years.

Our research told us that mechanical engineers will continue to improve upon engineering designs and become more involved in manufacturing implementations and processing production data results to improve designs for manufacturability. This means they will need to have applied knowledge throughout the manufacturing process; no more “throwing it over the wall.” The survey showed 80 percent of academics agree that application of technologies like generative design will be a particularly important skillset for this role.

The research also showed that the manufacturing engineer position will become even more interdisciplinary as it blends skills with both mechanical engineers and CNC machinists. They will expand their focus from manufacturing processes to managing the overall production operations and analyzing globally distributed systems, including machining tools, robotic systems, and additive manufacturing facilities. Within industry, 72 percent of respondents believe human-robotic interaction will increase and 74 percent believe automation will increase for this role.

We also saw that CNC machinists’ roles will evolve dramatically, moving from being a CNC operator to an engineering technician who programs CNC machines. Over time, they will take on other manufacturing engineering functions. Of all three roles, this one is expected to change the most. Machinists will need to develop new skills in artificial intelligence and machine learning for production, five-axis machines, additive manufacturing, and hybrid manufacturing. They will also be expected to collaborate more with engineering teams and be more involved with upstream processes including design, which will require greater knowledge of CAM software.

The research didn’t just tell us about technical skills. We also found that workers entering these roles will need to be proficient with “soft” skills such as problem solving, collaboration, and, of course, communication.

What does all this change occurring in the next decade mean for the education and training of a new workforce? Among the research findings, there were specific indicators for actions to take, like a strong chorus of support for growing students’ design for manufacturing knowledge and skills. Some 90 percent of respondents believed teaching this skillset is one of the most impactful ways for academia to develop the future manufacturing workforce.
Additionally, academics and industry professionals are looking toward the future with shared interest in emerging technologies like artificial intelligence, machine learning, and 3D printing. The research pointed to these emerging skills growing in importance and application over the next decade, and fortunately, we know these technologies are fascinating topics to students themselves.

While these insights are compelling, they also expose a delta between where the industry is heading and the current topics of academic curricula. And with business pressures driving manufacturing to adopt more Industry 4.0 technologies, it is critical for academia and industry to align on trends and requirements as soon as they can. Educators at every level are tasked with preparing students for emerging technologies while still teaching foundational concepts and theories, all while meeting accreditation standards. It’s a daunting task.

At the University of Warwick, I was responsible for curriculum development and teaching mechanical engineering and design, which means I not only had to build courses that incorporated new concepts and technology, but I had to do so while keeping the pre-existing skillset of the students in mind. Students often enter these programs with little or no design and manufacturing knowledge and educators need to bring them up to speed as quickly as possible. This is no fault of the student but does require the training program to broaden its scope and start at a very foundational level to establish key fundamental skills before moving on to more advanced topics and workflows.

Back when I was building courses, one of my dilemmas was whether I should introduce advanced topics like generative design early on in a program or wait until later years. I came down on the side of exposing students to these technologies in their first few weeks to help them build comfort, and then slowly refining their skills over the duration of the program.

While our research focused on speaking with people from industry and academia, I think it’s also valuable to have a student perspective in mind. I reached out to some of my former students who went through traditional mechanical engineering programs to hear their thoughts on what gaps exist between academic programs and industry requirements.

“At times it felt like there was a lack of convergence to the skills I would actually need beyond university—instead it was heavily knowledge-based,” my former student, Elliott Griffiths, shared with me. “In my mind, it would be beneficial to deliver training in workflows instead of discretized delivery of content. Working with end-to-end applied workflows helps develop the skill to learn other skills, making you a lot more comfortable with new processes and applications, in my opinion.”

Elliott’s perspective provides valuable feedback on modern education programming and resonates with what we heard from so many other academics about the need for updating curricula with real-world skills. While there will always be a need for the kind of specialization and research that can only blossom in universities, engineering education must focus on reducing the “time to talent,” that is, the speed at which schools can produce application-centered professionals. The research showed us that creating “industry-aware” curricula is going to be key in reducing the time to talent. For this model to work, it is crucial that academia and industry collaborate.
Too often there is a perception that academia and industry are at odds with their goals and requirements, and the topic of pushing academia more towards job-ready skills is hotly debated. We should remember, however, that students are key stakeholders in these conversations. They enter engineering and technology programs with a goal to not just earn an education, but also to find a rewarding job at the end of their program.

Fortunately, our research suggests that academia and industry are each enthusiastic about working together. Across industry professionals and academics, all respondents agree on the need for business leaders to support the education sector with curriculum development. There is also shared interest in reducing time to talent by leveraging real-world contexts and hands-on experiences.

In addition to teaching new skills in the classroom, educators should also incorporate time for industry-recognized certifications for modern software, machines, and workflows. It may come as a surprise to some, but we learned that 86 percent of academics surveyed embrace less reliance on traditional degrees and welcome more specialized certifications developed in partnership with industry.

Using project-based, role-aligned learning pathways also embraces students’ desire to be immersed in meaningful content and target specific skills that will be required in their future jobs. Role-aligned pathways are a great way to target specific skill sets and competencies in new or existing programs and it’s something Autodesk has been working on for a while. To give you an example, pathways like Autodesk’s Fusion 360 CAM for 2.5 Axis Milling certification program enable students to follow a self-paced, curated playlist of videos, quizzes, and challenge assignments designed specifically for the CNC machinist role.

Certifications are also going to be increasingly helpful in ensuring students learn foundational skills but also demonstrate their competency to future employers. In addition, embracing certifications allows educators to avoid forcing students to “log seat time” to demonstrate proficiency. Verifying competencies with industry-validated certifications allows students to make progress on building new skills once they have reached a proven level of expertise. This approach inherently removes down time from training programs and creates a more personalized learning experience.

Further, 84 percent of all survey respondents believe employers and academia should partner on new types of certification programs based on employer needs, and 91 percent want new opportunities for long-term internships and co-op programs. Closer links between education and industry are going to be vital to ensuring that the talent pipeline is robust and able to rise to the challenges of the future, both seen and unseen.

It is undeniable that Industry 4.0 will transform manufacturing as we know it today. We also know that education will have to undergo its own changes to keep pace and prepare students with the necessary skills for future roles in manufacturing. The research we conducted for Future of Manufacturing has told us that there is shared interest in this transformation, as both sides can see the benefits. By creating an education ecosystem that closely partners with industry and embraces new Industry 4.0 technologies, we can reimagine workforce development and engineering programs to build a better tomorrow.

Simon Leigh is Senior Manager Global Education Strategy at Autodesk. He is based in Coventry, England.
You can download a copy of the white paper, Future of Manufacturing, from ASME.org.
2022-12-01T00:00:00
https://www.asme.org/topics-resources/content/training-the-future-engineering-workforce
[ { "date": "2022/12/01", "position": 38, "query": "machine learning workforce" } ]
Machine Learning to Evaluate Important Human Capital ...
Machine Learning to Evaluate Important Human Capital (HC) Determinants Impacting IT Compensation
https://rijbr.in
[ "Rachana Jaisawal", "Hnb Garhwal University" ]
by R Jaisawal · 2022 · Cited by 11 — India is producing over 1 million youth per month resulting in ameliorating economy and a high attrition rate in the workforce, therefore employers must ...
India is adding over 1 million young people to its workforce every month, which is boosting the economy but also driving a high attrition rate; employers must therefore leverage their benefits to retain and nurture this rising workforce and avoid a shortfall in their talent pipeline if they are to stay sustainable and be an Employer of Choice. Companies must develop a suitable compensation strategy to stay competitive and engage their workforce, because employees depend mainly on wages and salaries, which must be commensurate with the work they do. In light of this, it is essential to evaluate the human capital indicators that affect compensation in Indian IT companies. This paper evaluates five machine learning algorithms to identify the most important features and the best model for predicting IT compensation, based on a survey of 1,170 IT professionals across 61 organizations in the NCR region. Care was taken to identify the most accurate model based on its accuracy score. The results indicate that random forest regression performs best, with a mean absolute error of 0.07 and an accuracy of 99.63%. The study finds that the most important variables in determining CTC (cost to company) are experience, the institution from which the candidate graduated, education, and the individual's skillset. These variables have the greatest impact on compensation design and act as strong predictors, while the remaining variables contribute little to predicting the compensation of IT-sector employees. Moreover, this study could benefit both job seekers and employers looking to hire top talent. Because deep learning models require large amounts of data, which were limited in this case, they could not be employed.
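The workflow this abstract describes — fit a random forest regressor to candidate attributes, measure error, and rank feature importances — can be sketched in a few lines of scikit-learn. Everything below (the synthetic data, feature encodings, and hyperparameters) is an illustrative assumption, not the study's actual dataset or code.

```python
# Illustrative sketch: rank human-capital features by importance when
# predicting compensation with a random forest, in the spirit of the
# study above. Data, features, and settings are hypothetical.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1170  # survey size reported in the abstract

# Hypothetical encoded features: years of experience, institution tier,
# education level, skillset score, plus a weakly related noise variable.
X = np.column_stack([
    rng.uniform(0, 20, n),   # experience (years)
    rng.integers(1, 4, n),   # institution tier (1 = top)
    rng.integers(1, 4, n),   # education level
    rng.uniform(0, 10, n),   # skillset score
    rng.uniform(0, 1, n),    # noise feature
])
# Synthetic CTC driven mainly by the first four features.
y = 3.0 * X[:, 0] - 5.0 * X[:, 1] + 4.0 * X[:, 2] + 2.5 * X[:, 3] + rng.normal(0, 1, n)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_train, y_train)

print("MAE:", mean_absolute_error(y_test, model.predict(X_test)))
for name, imp in zip(["experience", "institution", "education", "skillset", "noise"],
                     model.feature_importances_):
    print(f"{name}: {imp:.3f}")
```

Run on data like the study's, the `feature_importances_` ranking is what lets the authors single out experience, institution, education, and skillset as the dominant compensation drivers.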
2022-12-01T00:00:00
https://rijbr.in/1/article/view/797
[ { "date": "2022/12/01", "position": 42, "query": "machine learning workforce" } ]
Washington Adventist University
Washington Adventist University
https://professionalworkforcedevelopment.wau.edu
[]
Machine Learning with Python. PROFESSIONAL DEVELOPMENT. Project Management ... Washington Adventist University believes an educated workforce is at the heart of a ...
Washington Adventist University's professional workforce development site lists the following programs of interest: Clinical Medical Assistant, Cyber Security, Data Analyst, Dental Assistant, EKG Technician, Health Information Technology, Machine Learning with Python, Medical Administrative Assistant, Patient Care Technician, Pharmacy Technician, Phlebotomy Technician, Physical Therapy Aide, Physician's Office Assistant with EHRM, Project Management, Sterile Processing Technician, and Surgical Technologist.
2022-12-01T00:00:00
https://professionalworkforcedevelopment.wau.edu/
[ { "date": "2022/12/01", "position": 58, "query": "machine learning workforce" } ]
Talent Services and solutions: spatial analysis
Talent Services and solutions: spatial analysis · machine learning · data analysts · project managers · business intelligence · neurodiversity
https://www.asanalytics.com.au
[]
... workforce. Your organisation can become a destination for neurodivergent talent ... CAD designers. Machine learning analysts. ...
Before ASA, Blake Morrison struggled to find supportive and long-term work. At ASA, Blake grew along the journey, developing his GIS skills and becoming one of the greatest supports for his colleagues. January 2022 marked Blake’s first work anniversary with ASA and his attainment of a six-month contract as a GIS analyst with the Applied Agricultural Remote Sensing Centre at the University of New England. Blake’s manager and team couldn’t be more impressed by his commitment to delivering the National Protected Cropping Map involving spatial mapping and fieldwork. They are incredibly excited to support his career progression in the industry, recently extending his contract for another three years. You can read more about Blake's journey here. “I love the variety of work and I love applying my brainpower and have it potentially benefit ASA. ASA has the most genuine management which allows me to have greater confidence. I feel freed from the shame cycle of Centrelink and Job Active and feel like ASA have allowed me to maintain and enhance my job prospects. Thank you for your support and inspiration along the way. I still consider myself the luckiest person in the world and so much of it is because of ASA.”
2022-12-01T00:00:00
https://www.asanalytics.com.au/talent-solutions
[ { "date": "2022/12/01", "position": 71, "query": "machine learning workforce" } ]
Competencies for the artificial intelligence age
Competencies for the artificial intelligence age: visualisation of the state of the art and future perspectives
https://link.springer.com
[ "Santana", "Msanher Upo.Es", "Business Administration", "Marketing Department", "Pablo De Olavide University", "Seville", "Díaz-Fernández", "Mdiafer Upo.Es", "Carretera De Utrera", "Km" ]
by M Santana · 2023 · Cited by 91 — ... machine learning or big data. Artificial intelligence is a fuzzy concept ... intelligence, necessarily imply training and adapting the workforce.
5.1 Content analysis

As we have seen, the number of publications related to AI and competencies is increasing. In total, 421 articles and reviews have analysed the competencies and skills for AI. This section examines the research themes studied in each period, with a special focus on the most prolific period: 2016–2020.

5.1.1 1992–2010

In the period 1992–2010, learning is a motor theme (well-developed and important for the structure of the discipline) (Fig. 6, see “Appendix”). The learning research theme covers subthemes such as knowledge and education. Lepak and Snell (1998) claim that HR departments need to be more strategic when facing virtual HR challenges with the aim of adapting to the uncertainty of technological and market changes. Collaboration, flexibility and change management are underlying requisite competencies detected in this period to cope with technological changes (Lepak and Snell 1998). In this vein, Dyer (1999) identifies a list of competencies for HR managers, such as being a business partner, having technological competencies for the HR function, managing change and competencies in organisational development; his study analyses whether different university programmes were preparing human resource professionals for their future. Particularly, Huston (2008) highlights a set of requisite competencies for nurse leaders, such as having a global mindset in their field, technological skills, decision-making skills, ability to create organisation cultures, comprehension and intervention in political issues and ability to develop collaborative skills, among others. Requisite competencies go beyond mere technologically-driven aspects, emphasising the need to empathise with the workforce. This fact is considered at both the educational and the organisational levels. Indeed, human resource managers have acknowledged the variety of competencies required to deal with the workforce, with people representing the primary source of competitive advantage (Rowley and Warner 2007). In this line, an interesting approach to competencies in the twenty-first century within organisations was proposed in the Special Issue on this topic in the Journal of Management Development, in which the guest editors stated that emotional, social and cognitive intelligence competencies predict effectiveness in professional, management and leadership roles in many sectors of society (Boyatzis 2008). In addition, numerous studies published in this period are related to different education programmes addressing business management, health management and technology management, and emphasising value creation-learning (Anderson 2010; Huston 2008; Kerr and Lloyd 2008). Lawler and Elliot (1996) study an AI tool to help with HR management decision making, finding an expert system that can replicate some non-trivial problem solving competencies in HRM. According to Miles et al. (2000), innovation stems from an underdeveloped skill, namely collaboration. Organisations that understand how to collaborate better create and transfer knowledge, and this knowledge can lead to innovation. Innovation and knowledge are key factors for future organisations.

5.1.2 2011–2015

From 2011 to 2015, big data and human resource management emerged as motor themes for the competencies for AI field of study (Fig. 7, see “Appendix”). In this period, the big data research theme appears as an independent cluster with related research subthemes, such as education, IT, innovation or knowledge.
According to the Chartered Institute of Personnel and Development report, big data will enable the HR function to leverage and capture the important information. Big data early adopter companies have faced significant challenges, such as difficulties in obtaining the technical skills to support big data tools; differences in the supply of workers with the requisite skills may explain the differences in the adoption of IT innovations (Tambe 2014). Big data and talent analytics have been considered as important capabilities for the HR function (CIPD 2013). Preparing new generations with skills such as creativity and technical capabilities to compete in the twenty-first century is a significant concern for country innovation systems, so education policies must be integrated in the national innovation strategy (Ibata-Arens 2012). In addition, requisite leadership competencies such as decisiveness, proactivity, innovative decision making and intelligent stewardship are aimed at obtaining agile, flexible and cross-cultural responses to lead effectively and ethically in a changing and globalised environment (Sheppard et al. 2013). These authors emphasise the use of leadership development programmes based on action learning or mentoring to develop these skills. Twenty-first century developments and challenges such as big data and network activities have major implications for HRM models beyond the mere single employer–employee relationship (Swart and Kinnie 2014). These authors identify three HRM models (i.e., buffering the network, borrowing from the network, and balancing the network), which are suitable to enhance networked working grounded on new technological developments. Overall, an increasing interest in employees’ skills, leadership skills and best practices from organisations and educational institutions was observed throughout this period (Sheppard et al. 2013; Tambe 2014).

5.1.3 2016–2020

In the last period (2016–2020), data science and firm performance are motor themes, artificial intelligence, innovation and future are basic themes, competence, outcomes and self-efficacy are specialised themes and analytics is an emerging or disappearing theme (both weakly developed and marginal to the field) (Fig. 3). As most of the publications belong to this period (Fig. 5, see “Appendix”), we analyse it in depth in this section. Table 7 (see “Appendix”) shows the performance measure analyses with the number of documents, citations and h-index per theme (a minimal h-index computation is sketched after this excerpt).

[Fig. 3: Strategic diagram of the competencies and skills for AI applications’ themes from 2016 to 2020 (number of union documents)]

Data science, with 86 documents and 611 citations, covers research subthemes such as business intelligence, predictive analytics, data quality and knowledge management (Table 7, see “Appendix”). The enormous variety, volume and generation rate of data available can be properly channelled by the data analytics competency to improve a firm’s decision-making performance (Ghasemaghaei et al. 2018). The data analytics competency is a five-dimension formative index defined as the firm’s ability to deploy and combine data analytics resources for rigorous and action-oriented analyses of data (Ghasemaghaei et al. 2018; p.103). Rialti et al.
(2019) state that big data may impact a company’s performance by influencing its capability and adaptability, and they refer to the need to develop organisational big data analytics capabilities (i.e., infrastructure flexibility, management capabilities and personnel capabilities) to obtain significant information for decision making. Thus, data science has quickly emerged in both the traditional business models and knowledge management within organisations, fostering the growth of new competencies such as data analytics.

Firm performance, with 73 documents and 452 citations, is another motor theme in this field of study (Table 7, see “Appendix”). Firm performance covers topics such as the resource-based view and supply chain management, among others. A number of studies analyse the influence of big data and predictive analytics on company performance, such as supply chain, operational and healthcare performance (Dubey et al. 2019; Peeters et al. 2020; Rialti et al. 2019; Wang et al. 2019). Dubey et al. (2019) study the influence of external pressures on the organisational resources moderated by the impact of big data capability, explaining how this capability influences the operational and cost performance of the organisation. Wang et al. (2019) examine the complexity of big data analytics within healthcare organisations, exploring how big data analytics act with firm resources and capabilities in diverse configurations to improve the quality of care. Peeters et al. (2020) develop a people analytics effectiveness wheel, in which the people analytics team need to focus on four aspects: resources, products, stakeholder management and governance structure. Further research on how big data contributes to firm performance is required.

The analytics research topic is an emerging theme and covers subthemes such as information and e-HRM. Indeed, the new developments incorporated to the e-HRM and information subthemes enhance organisational performance. AI can be applied to obtain diverse information from social media for organisations to use (Kaplan and Haenlein 2019) and it is a valuable tool for decision making (i.e., recruitment, compensation, etc.). Furthermore, Bondarouk and Brewster (2016) conceptualise the IT and HRM literature to clarify the pros and cons for different stakeholders of the intersection of these fields of study, HRM and technology, traditionally defined as e-HRM, i.e., the performance of HRM activities through the support of channels grounded in web-technologies (Ruël et al. 2004). Thus, AI offers opportunities to control, manage and govern job processes and tasks efficiently and will change the way in which HRM specialists work (Bondarouk and Brewster 2016).

The future research theme, together with the artificial intelligence and innovation transversal clusters, present a large number of publications (Table 7, see “Appendix”). The future cluster is a basic or transversal theme with 60 documents and 307 citations and concerns themes such as the fourth industrial revolution, blockchain, Industry 4.0, service robots, big data, smart factory and automation. Another basic or transversal topic is the innovation theme, which has 77 documents and 437 citations and covers subthemes such as learning, strategic HRM and intelligence. The artificial intelligence cluster, with 86 documents and 514 citations, is a basic or transversal theme and discusses subthemes such as machine learning, robotics, digitalisation and human resource management.
Indeed, the arrival of Industry 4.0, or the fourth industrial revolution, focuses on technology such as artificial intelligence and advanced robotics, going beyond a mere technological challenge to also include a human challenge (Rampersad 2020; Santana and Cobo 2020). To survive in the future, human skills and competencies to effectively work and cope with new technological developments are crucial (Rampersad 2020). Kaplan and Haenlein (2019) present a Three C Model to support firms with the internal and external challenges of artificial intelligence, which they define as Confidence, Change and Control. Furthermore, Ghobakhloo (2018; p.910) provides a strategic roadmap for manufacturers transitioning toward Industry 4.0, which is an integrative system of value creation comprised of 12 design principles and 14 technology trends. According to this author the Industry 4.0 roadmap is not a one-size-fits-all suitable for every company strategy and should consider the firms’ core competencies, motivations, targets, capabilities and budgets.

The human–machine interaction is an important concept in this artificial intelligence research topic. Klumpp (2018) develops a multidimensional conceptual model to differentiate the performance of human-artificial collaboration systems before investment decisions. This author identifies four levels of resistance before an efficient and trusted collaboration is attained between humans and artificial intelligence systems. To overcome many of the challenges that artificial intelligence poses to human resource management, Garcia-Arroyo and Osca (2021) highlight the management of resistance to change within organisations through data experts or multidisciplinary teams to support HRM, and the convenience of alliances between firms and educational institutions. These aspects are also related to the specialised research theme, self-efficacy, covering subthemes such as acceptance, which refers to the acceptance of these new technologies by the organisations, but also by the end-users (Kaše et al. 2019; Martínez-Caro et al. 2018).

The study of the requisite competencies and skills to cope with these artificial intelligence challenges within organisations constitutes a specialised theme, together with the outcomes and previously analysed self-efficacy research themes. The competence cluster covers research subthemes such as abilities and success. The outcomes cluster discusses satisfaction and work-engagement subthemes in the AI age. Regarding the competence cluster, it is known that employees need to be innovative, detect opportunities and possess certain competencies to overcome the fear that robots and AI will replace many positions (Rampersad 2020) in a wide range of industries such as tourism and culture. Despite the increasing concern about the gap between existing and requisite employees’ digital competencies, more effort is needed to face the challenges of digitalisation in the twenty-first century (Oberländer et al. 2020). Sousa and Rocha (2019) conduct interviews to find out which skills managers require to cope with new disruptive technologies and list the following skills according to three dimensions: innovation, leadership and management.
In turn, Sousa and Wilks (2018) detect critical competencies (i.e., critical thinking and problem solving, collaboration in networks and leading by influence, agility and adaptability, initiative and entrepreneurship, effective oral and written communication, evaluating and analysing information, and curiosity and imagination) and technological disruptive competencies (i.e., artificial intelligence, nanotechnologies, robotisation, the internet of things and augmented reality). Van Laar et al. (2017) make a significant contribution by highlighting the digital skills needed for the twenty-first century, namely: technical competencies, information management, communication, collaboration, creativity, critical thinking and problem solving. Murawski and Bick's (2017) research considers the following selected digital competencies: information processing, communication, content creation, safety, problem solving, digital rights, digital emotional intelligence, digital teamwork, making use of big data, self-disruption, making use of artificial intelligence and virtual leadership. Furthermore, these authors call on organisations to focus on the alignment of multiple stakeholders for the design of ‘digital’ curricula and the integration by HR departments of the construct of digital competences, e.g. for compensation matters and job requirements (Murawski and Bick 2017; p.721). HR professionals must decide what types of capabilities are needed and whether to create their own capabilities or buy them in. Despite the possibilities of big data for the HR function, Angrave et al. (2016, p.1) are critical of the current approach to HR analytics, stating that the HR function must engage operationally and strategically to develop better methods, since it is unlikely that existing practices of HR analytics will deliver transformational change. In turn, Alharthi et al. (2017) also consider that organisations are not ready to make use of the big data capabilities, listing a number of recommendations to address big data barriers. For example, technological barriers can be infrastructure readiness and complexity of data, human barriers can be privacy and a lack of skills, and an organisational barrier can be the organisational culture (Alharthi et al. 2017; Manyika et al. 2011). With respect to human barriers, Alharthi et al. (2017) indicate that organisations need to collaborate with educational institutions in order to develop or acquire the requisite skills for the big data era, and that organisations should include privacy protection measures to enhance existing processes related to big data. In the 2014–2017 period, it became clear that it is crucial to leverage AI through HR. In addition, Sousa and Wilks (2018) state that critical and technological disruptive competencies are needed to obtain sustainable employability. In this vein, Periáñez-Cañadillas et al. (2019) find that candidates’ digital competencies of communication, content creation, safety and problem solving are determinant in the selection decision. In conclusion, competency mismatch is a fact (Acemoglu and Autor 2011; Cukier 2019; Oberländer et al. 2020), but work-integrated learning and technology-enabled talent matching platforms can reduce the effect of new technological developments, and efforts need to be made through specific training from educational institutions and organisations (Alharthi et al. 2017), as previously stated. In this line, Di Gregorio et al. 
(2019) propose an integrated model of employability competencies in the marketing field (e.g., soft competencies, analytical competencies, digital and technical competencies, core marketing competencies and customer insight competencies) to be implemented in universities that will help marketing graduates succeed in the digital domain, since related marketing professions such as e-commerce manager, social media manager, digital marketing manager or big data analyst have been signalled as key future positions.

5.2 Evolution map

In this section, we examine the evolution of the different research themes in the subperiods 1992–2010, 2011–2015 and 2016–mid-2020 (Fig. 4). The academic themes identified in each period were observed with SciMAT based on the evolution of the keywords over the years. From 1992 to 2010, the incipient field of study into competencies and skills for AI applications mainly covered research themes related to the acquisition and learning of the requisite resources and competencies to increase knowledge within organisations and educational institutions (Dyer 1999; Lawler and Elliot 1996; Lepak and Snell 1998). In the period 2011–2015, the learning theme evolved into the big data research topic, as real proof of the growing importance of the big data applications in business and coinciding with the incorporation of the term big data in the Oxford English Dictionary in 2013 (Press 2013; Wasserman 2013). The human resource management cluster emerges for the first time as an independent research theme, highlighting the need to leverage artificial intelligence through human resource management. In the last period (2016 to mid-2020), new clusters such as data science, artificial intelligence, future and firm performance appear, derived from the big data research theme, reflecting the importance of the development of AI within organisations and the need for efficient management. Competence is seen for the first time as a separate topic, with a steadily increasing number of authors studying the requisite competencies needed to face the artificial intelligence era, such as analytics and information interaction (Marler and Boudreau 2017; Sun et al. 2017). Finally, the research themes self-efficacy (related to the user acceptance subtheme) and outcomes (associated with work-engagement and satisfaction) appear as separate themes in this period, discussing the challenges that AI users currently encounter. This conceptual evolution map shows human resource management as an important resource for enabling the requisite competencies, together with the educational institutions, to face the development of AI within organisations.
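Table 7 in the excerpt above reports documents, citations, and an h-index per theme. The h-index is straightforward to compute once citation counts are collected: a theme (or author) has index h if h of its papers have at least h citations each. A minimal sketch, assuming the citation counts are already available:

```python
# Minimal h-index computation: h papers with at least h citations each.
def h_index(citations):
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# Hypothetical citation counts for papers grouped under one research theme.
print(h_index([611, 90, 45, 12, 7, 3, 1]))  # -> 5 (five papers with >= 5 citations)
```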
2023-08-14T00:00:00
2023/08/14
https://link.springer.com/article/10.1007/s11846-022-00613-w
[ { "date": "2022/12/01", "position": 83, "query": "machine learning workforce" } ]
How Human Survival First Principles Influence Machine ...
Human–Artificial Intelligence Systems: How Human Survival First Principles Influence Machine Learning World Models
https://www.mdpi.com
[ "Fox", "Stephen Fox" ]
by S Fox · 2022 · Cited by 4 — In computational neuroscience, efforts are ongoing to relate this framing of world models to machine learning [9,10,11]. Although this world models framing ...
This important research gap is addressed in the remaining five sections of this paper. Next, in Section 2, the world models construct is related to human organizations. This is done in terms of the construct’s origins in psychology theory-building during the 1930s through its applications in systems science during the 1970s to its recent applications in computational neuroscience. Then, in Section 3, it is explained how human survival first principles of interactions between energy and entropy influence organizational world models. In Section 4, a practical example is provided of how survival first principles lead to opposing organizational world models. In Section 5, it is explained how opposing organizational world models can constrain applications of machine learning. In Section 6, principal contributions are stated, and directions for future research are proposed. Overall, the paper highlights the influence of interactions between energy and entropy on organizations’ applications of machine learning. In doing so, profound challenges are revealed for human-artificial intelligence systems.

Conceptualization of people having internal models of themselves in the world, i.e. world models, can be found in psychology theory-building throughout the middle decades of the twentieth century [1–4]. By the 1970s, world models were being considered in the context of systems science and control theory [5,6]. More recently, there has been a framing in computational neuroscience of world models that is applicable to natural and artificial agents [7,8]. In computational neuroscience, efforts are ongoing to relate this framing of world models to machine learning [9,10,11]. Although this world models framing describes individuals’ interactions with the world in terms of entropy [12–14], efforts to relate it to machine learning have not previously considered how human survival first principles of interactions between energy and entropy influence machine learning (ML) implementations that are based on the world models of human organizations. That is, ML implementations that are based on human organizations’ documented models of themselves in the world, such as their business models and strategic plans. This is an important research gap as many machine learning implementations are made by human organizations rather than by individual people.

From the 1930s to the 2020s, a fundamental difference between the world models of individual people and human organizations is that the world models of individual people are embodied, while the world models of human organizations are documented in, for example, business models, strategic plans, and quality management system manuals.
Furthermore, in the first principles’ framing of world models [7–11], Bayesian cycles of perceptual, epistemic, and instrumental inference can exist [37–40]. Bayesian inference involves assessing the probability of a hypothesis based on prior knowledge of things that might be related to the hypothesis, and the updating of the hypothesis based on new evidence as it becomes available [41]. Perceptual inference refers to inferring sensory stimuli from predictions based on internal representations built from prior experience. Epistemic inference refers to updating beliefs about how to survive in an environment. Instrumental inference involves inferring action options and consequences in the environment. For brevity, such inference can be described as active inference [40]. This first principles work, which is led by neuroscientist Karl Friston, corresponds loosely with what organizational theorist Karl Weick described in the 1990s as the active process of sense making [34]. Moreover, active inference corresponds loosely with the continuous improvement cycles that organizations document in their quality management systems [42].

More recently, development of the world models construct has led to a framing that is applicable to natural and artificial agents [7,8]. This development follows some half a century after artificial intelligence pioneer John McCarthy drew attention to the importance of representations of the world in problem solving [35]. The recent framing [7,8] describes individuals’ interactions with environments in terms of entropy [9–11]. This first principles work provides examples of convergence between neuroscience concerned with world models and organizational studies. In particular, triple-loop learning in organizational studies [26] has some correspondence with homeostasis, allostasis, and metastasis in psychology and neuroscience [36]. If successful, homeostasis regulates essential internal variables at a set point (first loop). If homeostasis is not successful, allostasis can reorganize input–output relations with the environment in order to restore a sustainable regulatory set point (second loop). If allostasis is not successful, there can be an explicit consideration of failing implicit allostasis, and action can be taken to restore a sustainable regulatory set point (third loop). However, if there is not an explicit consideration of allostatic failure, metastasis can occur where regulatory processes are replaced by dysregulatory processes (maladaptive third loop).

Development of the world models construct moved towards neuroscience when Parkes wrote about the capacity of the central nervous system to organize the most complex impressions into internal models of the world, which allow us to recognize and understand the world [27]. Similarly, organizational studies began to encompass neurological perspectives, notably in Stafford Beer’s book, Brain of the Firm [28]. Moreover, world models were considered in the context of system science by Jay Forrester who wrote: “Each of us uses models constantly. Every person in private life and in business instinctively uses models for decision making. The mental images in one’s head about one’s surroundings are models. One’s head does not contain real families, businesses, cities, governments, or countries. One uses selected concepts and relationships to represent real systems” [5].
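The Bayesian updating described above is Bayes' rule applied repeatedly: P(H|E) = P(E|H)P(H)/P(E). A worked numeric sketch follows; the "environment has changed" hypothesis and all probabilities are invented for illustration, not taken from the paper.

```python
# Worked Bayes update, illustrating the belief revision that the active
# inference literature builds on. Hypothesis and numbers are hypothetical.
def bayes_update(prior, likelihood, evidence_rate):
    """P(H|E) = P(E|H) * P(H) / P(E)."""
    return likelihood * prior / evidence_rate

# Hypothesis H: "the environment has changed" (e.g., demand is rising).
prior = 0.2                 # P(H) before observing anything
p_obs_given_h = 0.9         # P(E|H): observation likely if H is true
p_obs_given_not_h = 0.3     # P(E|~H): observation less likely otherwise
p_obs = p_obs_given_h * prior + p_obs_given_not_h * (1 - prior)  # P(E)

posterior = bayes_update(prior, p_obs_given_h, p_obs)
print(f"P(H|E) = {posterior:.2f}")  # 0.18 / 0.42 = ~0.43
```

Iterating this update as new observations arrive is, in miniature, the perceptual–epistemic–instrumental cycle the paper describes: beliefs about the environment are revised first, and action options are inferred from the revised beliefs.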
At the same time, world models were considered in control theory, when it was argued that internal models need to resemble the systems that they are intended to control [6,29]. In the 1990s and 2000s, notable studies by Thomas Metzinger focused on the self in world models. This was reported in his book Subjekt und Selbstmodell [30], which was followed by several papers in the 2000s in journals such as Progress in Brain Research [31]. During this time, systems scientist, Peter Senge, argued for continuous adaptation between organizations and environments [32]. Similarly, organizational theorist Karl Weick’s 1990s concept of sensemaking provided a basis for the perspective that organizations need to adapt through continuous learning [33]. The term sensemaking refers to an active process in which actors enact their environment by isolating elements for closer attention, probing some activities and seeing what responses they attract in order to deepen their insights. Sensemaking is also retrospective because the meaning of actions is not known until they become lived experiences [34].

Further developments of the world models construct took place in the 1970s in relation to psycho-social transitions. In particular, Colin Murray Parkes [4] opined that people have an assumptive world that comprises not only a model of the world as it is but also models of the world as it might be. He opined that assumptive worlds encompass prejudices, plans, and expectations, which can change due to changes in the life space. The term life space was coined decades earlier by Kurt Lewin, by which he meant the total psychological environment that a person experiences subjectively but not necessarily consciously [19]. Parkes went on to propose that there can be three types of change in world models. One type of change is that a world model may be modified and continue to influence behavior. Another type of change is that a world model may be retained as an occasional determinant of behavior. Alternatively, a world model may be abandoned and cease to influence behavior [20]. In terms of human organizations, this corresponds loosely with issues in business model innovation [21]. In particular, organizational survival can depend on organizations changing their business models. However, business models can generate self-reinforcing feedback loops [22], which can contribute to an organization failing due to persisting with an old business model rather than changing its business model with a changing environment [23–25]. One organizational behavior perspective, which originated in the 1970s and can be applied to address this issue, is triple-loop learning. This involves three feedback loops. In the first, organizations seek to align internal models with the external world. In the second loop, internal models are revised to better fit the external world. In the third loop, organizations can revise how they revise their internal models [26].

Subsequently, in the 1960s, when considering evolutionary psychology, John Bowlby opined that if an individual is to draw up a plan to achieve a set goal, the individual must have some sort of working model of his environment, and must also have some working knowledge of his own behavioral skills and potential [3]. In terms of human organizations, this corresponds loosely with strategic planning practices required to model an organization’s environment and its own capabilities in relation to the environment.
These can include analyses to map macroeconomic factors such as the political, economic, social, and technological (PEST); five-forces analyses to map microeconomic forces (substituted offerings, established rivals, new entrants, power of suppliers, and power of customers); and analyses to determine one’s own internal strengths and weaknesses compared to external opportunities and threats (SWOT) [17]. Such practices can contribute to the development of organizations’ business models, which can provide structured descriptions of how an organization will interact with its environment [18].

Conceptualization of internal model and external world can be found in Kurt Lewin’s topological psychology of the 1930s [1]. During the same decade, the economist, Ronald Coase, considered interactions between the inside and outside of organizations: for example, in terms of where companies should define their boundaries [15]. Development of the world models concept took place in the 1940s through the work of psychologist Kenneth Craik on the nature of explanations [2]. He wrote of small-scale mental models of external reality that utilize knowledge of past events in dealing with the present and future. He opined that small-scale mental models enable trying out alternative possible actions and concluding which could be the best of them [2]. Also during the 1940s, organizations began to develop forecasting models [16].

In this section, developments of the world models construct are related to human organizations. In each subsection, notable developments in formalizing individuals’ models of themselves in the world are described alongside chronologically corresponding developments in formalizing organizations’ models of themselves in the world. Individuals’ world models are embodied models of themselves in the world. These include some mental models, but not all mental models need be included within world models. For example, a person may have a mental model of prehistoric art as described to that person during school lessons. Such mental models may not contribute to individuals’ internal model of themselves in the world. By contrast, organizations’ world models are documented, for example in business models and strategic plans that comprise their internal models of themselves in the world.

The need to balance energy input and energy output in exchanges across boundaries between internal states and external states can manifest in instances of the principle of least action [58], such as the principle of least effort during information seeking [59,60] and in the principle of least collaborative effort in information exchanges involving people [61,62]. The principle of least effort and the principle of least collaborative effort can be served by people paying more attention to their established internal models than by making more effort by paying attention to changing external states. This can lead to organizations having lock-ins [63] and path dependencies [64]. Paying more attention to internal models than to external states can lead to exactly the same external information being interpreted differently by different people in order to serve explanations that support their preconceptions and confirm their biases: for example, in the opposing motivated social cognition of so called culture wars [65,66].
Preference for least action to maintain own internal models across opposing boundaries is congruent with the argument that the development of technology is driven by the desire for one’s own ease and for domination of others [67]. Often, human formulation of boundaries can involve establishing borders, which separate areas where energy is accessed more easily than in adjacent areas on the other side of the border [45–47]. Human-made boundaries can entail ingroup love versus outgroup hate [48] and ingroup humanization versus outgroup dehumanization [49]. Boundary-based preferences can be deeply embodied in neurology [50,51]. They can entail related preferences for the similar [52,53] and for the familiar [54,55]. Preferences for similar people within familiar situations can become strongly related through homophily whereby, metaphorically, birds of a feather seek to flock together, for example via so called Internet echo chambers [56,57].

In this section, it is explained how survival first principles of interactions between energy and entropy influence organizational world models. A survival first principle is to maintain a positive energy balance by limiting the amount of energy lost to entropy. This involves resisting the second law of thermodynamics by establishing boundaries between internal states and external states. Establishing constraining boundary conditions enables living things, including human organizations, to differentiate themselves from the environment while being partially open to exchanges of information, matter, and energy with the environment. Maintaining positive energy balance is inherently tied to having boundaries [43,44]. In particular, living things construct their own constraining boundary conditions so they are able to do the work needed to survive. Here, work refers to constrained release of energy within a few degrees of freedom. Release of energy within a few degrees of freedom is necessary to prevent most energy being dissipated rapidly as entropy. For practical purposes, entropy can be considered as overlapping information uncertainty (information-theoretic entropy), physical disorder (statistical mechanics entropy), and energy expenditure being lost in unproductive actions (thermodynamic entropy). For example, a human organization with poorly defined boundaries in its strategic plan, business model, and/or quality management system manual can experience much information uncertainty about customer expectations. Accordingly, that organization can experience much physical disorder in its efforts to meet customer expectations, which entails much energy expenditure being lost in unproductive actions. By contrast, constraining the release of energy can enable much more work to be done with the same amount of energy [43,44].

4. Example of Opposing Organizational World Models

Examples of opposing world models can be found in global food production, consumption, and prosumption. The word, prosumption, is a portmanteau term, which summarizes that people survive through a combination of production and consumption [68]. From the everyday point-of-view of individual prosumers, the external state can be environments that include a wide variety of organizations that offer different prosumption preference options that are designed to target the preferences of particular groups, which they define as market segments. This is done with the aim of making their offerings the prosumption preferences of those particular groups.
For example, two segments that have been defined for the convenience food market are "kitchen evaders" and "convenience-seeking grazers" [69]. Convenience food involves little production work, as people perform only minor tasks such as removing packaging. By contrast, preparing meals from home-grown food involves a much higher proportion of production work. Some people will choose to undertake a higher proportion of task work when doing so keeps them inside the boundaries of their preferred socio-cultural group, within which they believe they can best survive [70]. Thus, there can be interplay between a preference for maintaining immediate positive energy balance during tasks and a preference for maintaining overall positive energy balance by staying within the boundaries of an ingroup. For brevity, these can be abbreviated to energy-positive and ingroup-positive. These are the most fundamental of human preferred states, and they underlie a multitude of more transitory heterarchical prosumption preferences. The term heterarchical refers to the potential for preferences to be ranked differently in different situations at different times. As summarized in Figure 1, active inference across triple loop learning can entail heterarchical preference contests in the interface state between organizations in the external state and individual prosumers' internal states.

In the first loop, organizations can seek to maintain market equilibrium around a set point: for example, high profit from high sales of convenience food. Organizations in the external state can formulate choice architectures [71] to lead prosumers from awareness of one of their products to involvement with their brand. This entails reinforcement teaching directed at prosumers, with the aim of bringing about prosumers' reinforcement learning [72] in ways that serve the goals of the organization, such as high consumption of convenience food. In the second loop, an organization can seek to address homeostatic challenges, such as a high loss of prosumers to competitor organizations, through allostatic change. For example, an organization could introduce a loyalty programme with step-by-step increases in bonus rates and prosumer status in line with the increased value of purchases. However, there are limits to individual organizations' reinforcement teaching of their predefined reward functions to prosumers. For example, if one organization introduces a loyalty programme, other organizations can quickly do the same through active inference. First, there is perceptual inference that environmental change threatens survival: in particular, customers are leaving for a competitor that has introduced a loyalty programme. Next, there is epistemic inference that survival in the changed environment depends upon offering a rival loyalty programme. Then, there is instrumental inference that survival depends on the new action of offering a loyalty programme. However, when all organizations attempt new reinforcement teaching by introducing loyalty programmes, there may be no survival advantage to any of them in contests for prosumption preferences [73].

At the same time, prosumers can be prone to variety-seeking behavior, which can be moderated by whether or not their prosumption is observed [74]. For example, when a healthy food prosumer has little energy available, active inference may lead that prosumer to get energy-dense food from the nearest possible source. First, there can be perceptual inference that energy depletion could prevent travelling to that evening's healthy food party.
Next, there can be epistemic inference that it is acceptable at a time of energy depletion to seek the nearest available source of energy-dense food. Then, there can be instrumental inference that it is time to get energy-dense junk food before there are insufficient energy resources to even stand up and move [75]. The nearest source could be a petrol station selling junk food [76]. This source can be energy-positive but ingroup-negative, because this prosumer seeks to survive within the boundaries of a healthy food community. Hence, if the prosumer notices that an ingroup member is unexpectedly close by, for example buying petrol at the station, the prosumer may be impelled to expend energy by walking past the energy-positive but ingroup-negative source in order to reach an ingroup-positive food shop. More broadly, people can simply become bored with sourcing resources from already-known organizations and then, occasionally and unpredictably, make an impulse purchase instead [77]. Hence, preference contests are heterarchical because different innate needs can have primacy in different situations at different times.

Human organizations in heterarchical preference contests can apply machine learning in their efforts to gain competitive advantage against each other [78,79]. Yet, amidst heterarchical preference contests, general preference options can emerge that have a determining influence over prosumption preferences. This can happen through culturally-bounded rationality, within which heuristic decision-making due to imperfect information and limited energy is based upon prevailing socio-cultural norms [80]. This can involve mere-exposure conditioning, whereby repeated exposure to something leads to it becoming part of the familiar background [81]. Rather than reinforcement teaching and reinforcement learning of preferences through the targeted predefinition of specific rewards (e.g., an increased loyalty programme bonus rate) and specific punishments (e.g., a lowered loyalty programme bonus rate), there can be non-reinforced acquisition of preferences due to mere exposure to the sociomaterial environment, such as a local retail landscape comprising only convenience shops selling junk food. There can be mere-exposure effects and socio-cultural norms arising from the combined presence of many organizations' offerings of food-like substances, which can lead to sensory ecologies where signals related to salt, sugar, and fat dominate sensory exchanges with the food environment [82,83]. Thus, heterarchical preference contests can take place in ecological traps, where rapid environmental change has led to preference for poor-quality habitats [84]: in particular, where it has become the socio-cultural norm to minimize energy output and maximize energy input through consumption of junk food, even when it is clear that this threatens survival [85,86]. Here, it is important to note that humans have evolved to minimize energy expenditure through the regulation of movement economy. Hence, it can be expected that people will learn, through repeated trial and error, the shortest routes to achieving positive energy balance in their sociomaterial environment [87,88]. Yet, at the same time, organizations with world models that are opposed to junk food can introduce triple loop learning initiatives to limit metastasis, such as the increasing prevalence of survival threats from overconsumption of salt, sugar, and fat [89,90].
Such initiatives can encompass food preference learning throughout life [91,92]. In preference contests, organizations can develop choice architectures for healthier food alternatives [93]. At the same time, preference contests can include efforts to frame healthier food choices in terms of bounded rationality [94]. In practical terms, this can include initiatives to change the sociomaterial environment from so-called food deserts into so-called food oases. This involves healthier food options becoming available in areas where previously only highly processed foods were available [95]. However, triple loop learning initiatives for healthy food may not be successful if there is insufficient consideration of the innate preference for maintaining positive energy balance. For example, food oases can become so-called food mirages when the healthy food options are not affordable and hence highly processed foods remain the only affordable option [96]. In terms of innate preferences, prosumers' positive energy balance is facilitated by healthy food being nearer in a newly established local food oasis. Yet positive energy balance is not facilitated if prosumers have to expend more energy by working more to earn the money needed to buy the more expensive healthy food. Also, triple loop learning initiatives may not be successful if there is insufficient consideration of the innate preference for maintaining overall positive energy balance by staying inside the boundaries of an ingroup situated within the borders of a particular area. This can happen when establishing a food oasis leads to gentrification of the area and the local population has to disperse because it cannot afford housing rents. Hence, there can be local opposition to the introduction of local provision of healthy food options [97]. In their efforts to prevail in triple loop preference contests, human organizations seeking to increase access to healthier food options can apply machine learning [98,99], while the organizations they oppose are already applying machine learning in efforts to ensure their own survival.
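The perceptual-epistemic-instrumental sequence and the energy-positive versus ingroup-positive contest described in this section can be summarized as a small decision procedure. The sketch below is a toy illustration under assumed states and thresholds, not a formal active-inference model from the literature cited here:

```python
# Toy sketch of a heterarchical preference contest (illustrative only).
# The energy threshold and location labels are hypothetical assumptions.

def choose_food_source(energy_level, ingroup_member_nearby):
    # Perceptual inference: is energy depletion an immediate survival threat?
    depleted = energy_level < 0.3
    if not depleted:
        # Default: the ingroup-positive preference prevails.
        return "healthy food shop (ingroup-positive)"

    # Epistemic inference: when depleted, the nearest energy-dense source
    # becomes acceptable, unless being observed makes it ingroup-negative.
    if ingroup_member_nearby:
        return "ingroup-positive food shop, reached with extra effort"

    # Instrumental inference: act now on the nearest energy-dense source.
    return "nearest junk food outlet (energy-positive, ingroup-negative)"

for energy, observed in [(0.8, False), (0.2, False), (0.2, True)]:
    print(f"energy={energy}, observed={observed} -> {choose_food_source(energy, observed)}")
```

The ranking of preferences flips with the situation (energy level, observation by an ingroup member), which is what makes the contest heterarchical rather than fixed.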
2022-12-14T00:00:00
https://www.mdpi.com/2079-8954/10/6/260
[ { "date": "2022/12/01", "position": 84, "query": "machine learning workforce" } ]
Wearable Sensors and Artificial Intelligence for Physical Ergonomics: A Systematic Review of Literature
https://pmc.ncbi.nlm.nih.gov
[ "Leandro Donisi", "Department Of Chemical", "Materials", "Production Engineering", "University Of Naples Federico Ii", "Naples", "Istituti Clinici Scientifici Ics Maugeri", "Pavia", "Giuseppe Cesarelli", "Noemi Pisani" ]
Abstract

Physical ergonomics has established itself as a valid strategy for monitoring potential disorders related, for example, to working activities. Recently, several studies in physical ergonomics have also shown potential for improving experimental methods of ergonomic analysis through the combined use of artificial intelligence and wearable sensors. In this regard, this review provides a first account of the investigations carried out using these combined methods, covering the period up to 2021. The approach, which combines information obtained on the worker through physical sensors (IMU, accelerometer, gyroscope, etc.) or biopotential sensors (EMG, EEG, EKG/ECG) with analysis by artificial intelligence systems (machine learning or deep learning), offers interesting perspectives from diagnostic, prognostic, and preventive points of view. In particular, the signals obtained from wearable sensors for recognizing and categorizing the postural and biomechanical load of the worker can be processed to formulate algorithms for preventive applications (especially with respect to musculoskeletal disorders) with high statistical power. For ergonomics, and also for occupational medicine, these applications improve knowledge of the limits of the human organism, helping to define sustainability thresholds and to ergonomically design environments, tools, and work organization. The growth prospects for this research area are the refinement of procedures for detecting and processing signals; the extension of study to assisted working methods (assistive robots, exoskeletons) and to categories of workers suffering from pathologies or disabilities; and the development of risk-assessment systems that exceed those currently used in ergonomics in precision and agility.

Keywords: biomechanical risk assessment, deep learning, ergonomics, health monitoring, inertial measurement unit, machine learning, occupational medicine, physical ergonomics, wearable sensors, work-related musculoskeletal disorders

1. Introduction

Ergonomics deals with the design of work environments so that they are suitable for humans, and aims at the objectives of health, safety, and productivity at work [1]. As a discipline, ergonomics stands out for its systemic approach, design orientation, and joint consideration of human well-being and performance [2]. Physical ergonomics is concerned with human anatomical, anthropometric, physiological, and biomechanical characteristics as they relate to physical activity. Relevant topics include working postures, materials handling, repetitive movements, work-related musculoskeletal disorders (WMSDs), workplace layout, physical safety, and health [3]. High exposure to physical work is a known risk factor for developing poor health [4], sickness absence [5], increased musculoskeletal morbidity [6], and reduced working life expectancy [7]. A thorough ergonomic assessment is the foundation for creating safer, healthier, less injury-prone workplaces and for improving overall workplace wellness [8]. Ergonomists traditionally use various methods of analysis to determine risk factors per job or task, quantifying stressors and prioritizing them in order to assist in the development of appropriate controls [9,10].
Specific analysis techniques may include biomechanical models, energy expenditure evaluations, time and motion studies, force measurement, postural analysis, and standardized evaluation tools. Collected data are compared against scientific information and normative data, and interventions in the workplace are planned to eliminate or control risk factors. Technological innovation, and in particular wearable devices [11,12], offers the possibility of objectively and automatically detecting both the physical stress associated with job demands and the strain caused to the worker involved. This can be carried out independently of the presence of an external observer or of physical instruments applied to the worker, with minimal invasiveness, even in complex occupational situations. Moreover, continuous and context-related measurement through sensors captures the motor behaviours and execution techniques adopted by the worker, offering the possibility of studying these aspects in their association with efficiency, productivity, and job security [13]. The treatment of data obtained from sensors for diagnostic, prognostic, or preventive purposes [14] benefits from the application of Artificial Intelligence (AI) [15,16]. In particular, Machine Learning (ML) and Deep Learning (DL) allow us to extract relevant features and to study them by detecting associations with the onset of WMSDs [17], the occurrence of injuries [18], or other prognostic factors [19]. In current and foreseeable employment contexts, characterized by complex work organization, the absence of exactly decodable tasks, the aging of the workforce, and the emergence of situations with exposure to multiple risk factors [20], it is of fundamental interest to adopt a holistic vision of the worker-activity-environment system. Ideally, the combination of wearable sensors and AI could help ergonomics identify the factors that promote occupational well-being, directing economic resources towards ergonomic design that contributes to the primary prevention of health issues in workers. Secondly, the use of wearable sensors and AI could help verify the long-term tolerability of work through accurate recognition of exposure conditions, integrating the strain developed by the worker and comparing it with the work requirements. Innovations in AI (sensors, robots, ML algorithms) have been shown to increase productivity and could potentially improve the safety and health of workers in the workplace [21]. It is therefore crucial to have a thorough understanding of AI methods, and of the effects of these methods on workers and workplaces. As reported by Karwowski [22], the conventional domains of ergonomics can be summarized in three classes: physical ergonomics, related to physical activity and human anatomical characteristics; cognitive ergonomics, related to mental processes; and organizational ergonomics, related to the optimization of socio-technical systems. To the best of the authors' knowledge, no systematic reviews consider the potential combined use of wearable devices and AI algorithms in physical ergonomics applications. Some reviews have focused on the potential use of wearable devices in ergonomics [12,23,24,25], while others have focused on the role of ML in the prevention of WMSDs [17,26].
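As a purely illustrative sketch of the shared pipeline (features extracted from wearable-sensor signals, fed to a supervised ML model), the following example trains a classifier to label postures as low or high biomechanical risk. The feature definitions and data are synthetic assumptions, not taken from any reviewed study:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Synthetic stand-ins for features extracted from a trunk-worn IMU:
# [mean trunk flexion angle (deg), angular velocity RMS (deg/s), time above 45 deg (%)].
n = 400
low_risk = rng.normal([15.0, 10.0, 5.0], [5.0, 3.0, 2.0], size=(n, 3))
high_risk = rng.normal([55.0, 25.0, 30.0], [8.0, 5.0, 8.0], size=(n, 3))
X = np.vstack([low_risk, high_risk])
y = np.array([0] * n + [1] * n)  # 0 = low risk, 1 = high risk

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
print("held-out accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```

Real studies replace the synthetic features with windowed statistics computed from raw IMU or EMG recordings and validate against observed or annotated risk labels.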
This systematic review aims to fill this gap in the literature, considering the growing use of wearable devices and AI in medicine, and particularly in occupational medicine.

2. Research Strategy

A systematic review is a method of selecting, evaluating, and summarizing studies on a specific topic [27]. Our systematic review is presented according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines [28].

Search Methodology and Study Selection

The literature search was conducted on the Scopus and PubMed databases and was limited to English-language documents. Each database was queried using the following keyword structure: ("wearable" OR "sensors") AND ("ergonomics" OR "occupational medicine" OR "occupational health") AND ("AI" OR "ML"). To focus the search, the exclusion criteria were: conference reviews, reviews, book chapters, and errata; papers not available; and duplicated papers. For the screening by title, abstract, and full text, the following exclusion criteria were defined: papers proposing human-machine interface solutions without wearable devices and not explicitly related to occupational medicine (e.g., a touchless control interface in an underwater simulation environment [29]); papers proposing wearable devices for cognitive ergonomics (e.g., [30]); papers proposing only a wearable device solution without AI (e.g., [31]); and papers proposing wearable devices for other purposes (e.g., rehabilitation [32]). Documents were screened first on title and abstract and then, where inclusion could not be determined at that stage, on the full text. Figure 1 shows the PRISMA workflow and the number of documents included in this systematic review.

[Figure 1. Summary review workflow.]

4. Conclusions

The ergonomic analysis technique that makes use of sensors and AI is mainly aimed at the prevention of WMSDs, and particularly addresses the upper limbs and back, body regions widely treated in ergonomics. Through this approach, aspects related to whole-body posture have also been partly explored; these have been addressed in ergonomics only recently, and the literature still offers no clear thresholds of sustainability or indications of optimal levels of variability over time. The application of this approach provides information that ergonomics needs in order to improve safety conditions at work and worker comfort; to design suitable work environments and equipment; and to set up work organizations that avoid the accumulation of fatigue or overload. Above all, this approach can be advantageous for the analysis of complex or difficult-to-observe work situations. As the diffusion of this approach progresses, the accumulated knowledge could help improve the prevention of WMSDs associated with both acute and cumulative load. This could provide useful information for setting up working methods that remain well tolerated over an entire working life, an important consideration for professions with high biomechanical wear, such as construction workers or healthcare professionals. The approach assists not only in the study of force, repetitiveness, and posture (the classic risk factors in physical ergonomics), but also in the study of the kinematic traits of the worker's behavior.
Specific kinematic traits could be useful as indicators to monitor and predict the appearance of alterations capable of endangering the integrity of the worker, and also to monitor critical phases during the return to work of people with dysfunctions, disabilities, or previous pathologies. Furthermore, the data detectable through sensors can enrich the value of ergonomic evaluation and design interventions, attracting interest in aspects investigated by other disciplines, such as engineering, psychology, organizational science, medicine, and economics. The technological approach can be all the more innovative the more it uses prototypes rather than standard commercial tools: prototypes often built with open-source resources and not channeled deterministically towards a single aspect of interest. By considering variables detectable through sensors, the design of optimal work situations can be targeted at specific categories of workers, such as the elderly, in order to implement workplace adaptations that guarantee the expected levels of productivity and safety. In addition to monitoring, evaluation, and design, the combined technique of sensors and AI opens new scenarios for educational and participatory ergonomic interventions; these help workers explore new ways of carrying out work, possibly with the adoption of technological aids and devices such as exoskeletons. The illustrated approach also opens the way to analyzing multiple conditions of exposure to physical, chemical, environmental, and organizational factors at work, for which consolidated risk-assessment methodologies are not yet available and for which evidence of association with the motor, physiological, or biomechanical functions of the human operator is lacking. Further studies may improve the illustrated technique by specifying the optimal positioning of sensors and defining the best AI system, and may also propose the development of methods of ergonomic analysis different from those already used and accepted by classical ergonomics. An interesting aspect of study related to the topic presented here, mainly focused on WMSDs, concerns the interpretation of worker well-being as an integrated construct that includes physical, psychosocial, and organizational aspects (the 1948 WHO definition of health). As various studies have demonstrated, these aspects reciprocally influence the condition of the human operator, and intervention on one risk factor can have repercussions on the other dimensions. This broadening of perspective also bears on the long-term benefits that can be secured by short-term investments in improving occupational safety and health. Furthermore, given the multifactorial nature of the underlying causes of WMSDs, a future study perspective could concern the assessment of exposure associated with prolonged low-intensity static work, typical of teleworkers and of the increasing digitalization of work. This article presented a systematic review of the combined use of wearable devices and AI for ergonomic purposes, selecting 25 relevant studies from the scientific literature.
The analysis highlighted a deep and growing interest in recent years in the use of wearable sensors coupled with AI algorithms (both ML and DL) to monitor the biomechanical risk to which workers are exposed during their activities. The review provides researchers with an overview of the latest uses of AI and wearable sensors in the context of physical ergonomics. Additionally, it could support professionals in selecting the most suitable wearable technology and AI strategy for ergonomic assessments and improvements in industrial and non-industrial settings.

Acknowledgments

The authors thank dott.ssa Monica Panigazzi for her advice and Ing. Claudia Biondillo for her strong support in the selection of articles.

Author Contributions

Conceptualization, L.D., N.P. and E.C.; methodology, L.D., G.C. and N.P.; software, L.D.; validation, L.D., N.P. and E.C.; formal analysis, L.D. and N.P.; investigation, L.D. and E.C.; resources, E.C.; data curation, N.P. and G.C.; writing—original draft preparation, L.D., G.C. and N.P.; writing—review and editing, C.R., A.M.P. and E.C.; visualization, N.P. and G.C.; supervision, L.D. and E.C.; project administration, E.C. All authors have read and agreed to the published version of the manuscript.

Institutional Review Board Statement

The study was conducted in accordance with the Declaration of Helsinki.

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

Funding Statement

This research received no external funding.
2022-12-05T00:00:00
https://pmc.ncbi.nlm.nih.gov/articles/PMC9776838/
[ { "date": "2022/12/01", "position": 92, "query": "machine learning workforce" } ]
Implementing systems thinking and data science in the training of the regenerative medicine workforce
https://www.nature.com
[ "Plant", "Anne L.", "Biosystems", "Biomaterials Division", "National Institute Of Standards", "Technology", "Gaithersburg", "Piscopo", "Crispr Therapeutics", "Cambridge" ]
The demand for a data-literate workforce creates a need for synergy between stakeholders across sectors (e.g., industry, academia, and government) of the RMAT enterprise. In academic settings, curricula to prepare data scientists for work in regenerative medicine fields might include topics in regulatory affairs, clinical development, and manufacturing. These topics will equip students to understand the unique context of regenerative medicine and appreciate systems thinking. Academia can also bolster the RMAT ecosystem by increasing the knowledge base of educators and providing resources for students interested in nonacademic careers. Synergy occurs from sectors working together on key areas of interest and providing both formal and informal educational opportunities that bridge data science and regenerative medicine. The organizations mentioned below are examples of strong multisector activities that can play an effective role in developing the RMAT workforce.

Regulatory

Workforce development in data-dependent regulatory issues warrants special attention. Because RMAT products are highly variable and customizable, the regulatory landscape continually evolves, and the regulatory sector must keep pace with advancements. To facilitate crosstalk and opportunities to use data science for regulatory decision-making, the regulatory affairs workforce needs to understand data science fundamentals, and the data science workforce needs a basic understanding of regulatory issues. Now is a key time to train a data science workforce to manage the regulatory environment of a rapidly evolving product and application space. Regulatory training can begin through academic courses and degree programs [36]. The U.S. Food and Drug Administration (FDA) interfaces with several academic institutions within the Centers of Excellence in Regulatory Science and Innovation (CERSI) program to help train students in regulatory science [37]. However, regulatory instruction could also be incorporated into existing courses. Reallocating modest portions of curricula across multiple academic stages could better prepare students to interact with regulatory guidance and to prepare documents for regulatory agencies. Statistics courses could train students to analyze large datasets with the goal of introducing regulatory concepts such as critical quality attributes (CQAs), critical process parameters (CPPs), normal operating range (NOR), and proven acceptable range (PAR). Lab-based courses could teach students to prepare process descriptions in accordance with FDA guidance documents and to describe theoretical process-characterization strategies based on risk-assessment exercises conducted in class. Advanced courses could encourage students to discuss the applicability of data science approaches (e.g., real-world data, digital transformation, AI/ML) for regulatory decision-making through answering questions such as: How can real-world data address regulatory filing requirements from clinical, preclinical, and manufacturing perspectives? How can data science tools help establish the safety and efficacy of a regenerative medicine product?

Clinical and translational science

Discoveries in stem cell biology and associated technologies often occur in basic science laboratories, but the promise of regenerative medicine is realized in the clinic.
Academia can prepare students with interest in clinically oriented roles by emphasizing data science and theoretical analysis in programs like the Institute for Clinical and Translational Research (ICTR) at the University of Wisconsin–Madison, which offers minors in Clinical Investigation for PhD students, or the Georgia Clinical & Translational Science Alliance (CTSA), an NIH-funded program across Georgia-based universities that offers a Master of Science in Clinical Research and a certificate program in translational research for PhD trainees. Such programs encourage students to consider the human impact of scientific discoveries and promote translational research. Ranging from data acquisition and harmonization to patient privacy, data-related challenges can impede the translation of regenerative medicine discoveries to the clinic. Small sample sizes often limit research in the rare disease space, and data integration among research groups and datasets can hamper progress. To contend with these issues, several institutions have established their own data-sharing capabilities, including Johns Hopkins, the Mayo Clinic [38], CMaT [39], and others. CMaT, for example, works to standardize methods across its eight-university ecosystem and partners with companies to record data in a unified format via batch recording software. Data can be stored in the cloud for collaborators across partner organizations to access and analyze. Started by nine medical research organizations, another initiative, the National Center for Data to Health (CD2H), facilitates data sharing and collaboration across the community of health informatics researchers [34]. The National Heart, Lung, and Blood Institute (NHLBI), including the NIH-wide Regenerative Medicine Innovation Project (RMIP) [40], also established a data-sharing platform for NIH-funded projects, the BioData Catalyst, which is intended to serve as a central data repository for open sharing among eligible researchers [20]. Moreover, a workforce that can manage, interpret, and deploy data science efficiently and securely will allow the RMAT industry to capitalize on potential benefits, both nationally and globally [41].

Manufacturing

As the RMAT industry grows and new, increasingly complex products enter the pipeline, highly trained and appropriately certified workers will be in demand to manufacture high-quality products, at scale, with low batch failure rates and maximal reproducibility, all while ensuring efficacy and patient safety and maintaining strict regulatory standards. Biomanufacturing is increasingly digital and therefore requires proficiency in both quantitative methods and biology-based lab techniques. Moreover, a potential future model of manufacturing is decentralized, remote manufacturing in hospital settings. To support this scheme and other distributed manufacturing approaches in the future, understanding remote access, digital networks, and AI would benefit the workforce. Academia can leverage guidance documents designed for industry to better prepare students for roles in manufacturing regenerative medicines [42]. When designing curricula, academic entities could integrate discussions with industry and clinical manufacturers to better understand the skillsets needed for biomanufacturing. Manufacturing sciences as a separate discipline is only available at a few academic centers; rather, manufacturing is often taught through mechanical or chemical engineering departments.
Collaboration between experts in manufacturing sciences and other specialized domains (e.g., cell therapy, biofabrication) would help ensure curricula and training materials reflect domain knowledge in the context of broader manufacturing principles. This collaborative, transdisciplinary effort across engineering, cell biology, clinical translation, and industrial manufacturing will be critical to prepare the next generation of the RMAT workforce. Beyond educational efforts, academic scientists can better coordinate with industry by considering manufacturing guidelines, such as Good Manufacturing Practice (GMP) regulations, in their research. To facilitate translation from bench to bedside and increase uptake of innovations in the clinic, academic laboratories and researchers could aim to develop new devices, tools, software, and technologies that are GMP-compatible, follow Quality-by-Design principles, incorporate standardized analytical tools and measurements, and implement regulatory constraints. Finally, key technology hubs and public-private partnerships play an important role in integrating data knowledge into workforce development. Organizations focused on RMAT manufacturing include the aforementioned CMaT, the Marcus Center for Therapeutic Cell Characterization and Manufacturing (MC3M), the National Institute for Innovation in Manufacturing Biopharmaceuticals (NIIMBL), BioFabUSA by the Advanced Regenerative Manufacturing Institute (ARMI), the Catapult Network in the UK, and the Centre for Commercialization of Regenerative Medicine (CCRM) in Canada. Examples of important steps these organizations have taken to build a dynamic workforce include:

- Collaborative course module development for both technical and ethics/regulatory competencies.
- Collaborations with 2-year college systems for hands-on training and curriculum development.
- The NSF-funded Future Manufacturing Network (FMNet) Consortium for "Building a Network to create the Workforce Foundation, Actionable Roadmap, and Infrastructure Design to Integrate Data Science, AI, and Predictive Analytics throughout Biomanufacturing".

Notably, most of these efforts are in the early stages and need investment to scale up nationally or internationally. Community-based, distributed workforce training programs that build industry-identified skillsets and incorporate robust certification could significantly advance the successful use of large-scale data in regenerative medicine.
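As one small illustration of the data-literacy skills discussed above, for example deriving a normal operating range (NOR) for a critical process parameter from historical batch data, consider the sketch below. The parameter, the values, and the mean plus-or-minus three standard deviations convention are illustrative assumptions, not regulatory guidance:

```python
import statistics

# Hypothetical cell-viability readings (%) from historical manufacturing batches.
batch_viability = [91.2, 93.5, 92.8, 90.7, 94.1, 92.3, 91.8, 93.0, 92.5, 91.4]

mean = statistics.mean(batch_viability)
sd = statistics.stdev(batch_viability)

# One common convention: NOR as mean +/- 3 standard deviations.
# (An assumption for illustration; actual ranges are justified case by case.)
nor_low, nor_high = mean - 3 * sd, mean + 3 * sd
print(f"NOR for viability: {nor_low:.1f}% to {nor_high:.1f}%")
```

In coursework of the kind described above, such a calculation would be the starting point for discussing how CQAs, CPPs, NOR, and PAR relate to process characterization and regulatory filings.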
2022-12-01T00:00:00
https://www.nature.com/articles/s41536-022-00271-2
[ { "date": "2022/12/01", "position": 96, "query": "machine learning workforce" } ]
Machine Learning and the Five Big Ideas in AI (Journal Article)
https://par.nsf.gov
[ "Touretzky", "Gardner-Mccune", "Seehorn", "Martin", "Lim", "Min", "Vandenberg", "Cateté", "Mott", "Park" ]
This article provides an in-depth look at how K-12 students should be introduced to Machine Learning and the knowledge and skills they will develop as a result. We begin with an overview of the AI4K12 Initiative, which is developing national guidelines for teaching AI in K-12, and briefly discuss each of the "Five Big Ideas in AI" that serve as the organizing framework for the guidelines. We then discuss the general format and structure of the guidelines and grade band progression charts and provide a theoretical framework that highlights the developmental appropriateness of the knowledge and skills we want to impart to students and the learning experiences we expect them to engage in. Development of the guidelines is informed by best practices from Learning Sciences and CS Education research, and by the need for alignment with CSTA's K-12 Computer Science Standards, Common Core standards, and Next Generation Science Standards (NGSS). The remainder of the article provides an in-depth exploration of the AI4K12 Big Idea 3 (Learning) grade band progression chart to unpack the concepts we expect students to master at each grade band. We present examples to illustrate the progressions from two perspectives: horizontal (across grade bands) and vertical (across concepts for a given grade band). Finally, we discuss how these guidelines can be used to create learning experiences that make connections across the Five Big Ideas, and free online tools that facilitate these experiences.

Award ID(s): 1846073
PAR ID: 10357450
Date Published: 2023-01-01
Journal Name: International Journal of Artificial Intelligence in Education
ISSN: 1560-4292
2023-01-01T00:00:00
https://par.nsf.gov/biblio/10357450-machine-learning-five-big-ideas-ai
[ { "date": "2022/12/01", "position": 99, "query": "machine learning workforce" } ]
AI & Labor - Alphabet Workers Union
https://www.alphabetworkersunion.org
Alphabet Workers Union-CWA Local 9009 announced the creation of the AWU-CWA 9009 Committee on Artificial Intelligence in Summer 2024. The committee will bring together AWU-CWA members from various locations, product areas, and employment classifications to guide our union's approach to the opportunities and challenges presented by the development of artificial intelligence (AI) at Alphabet and in the broader tech industry. AI has become an essential part of Alphabet's business, and it is critical that our union is at the forefront of understanding its impact on our members and industry. This committee will utilize members' skills to ensure that AI products prioritize the interests and well-being of our members.

The committee will engage in a member-driven comprehensive campaign on AI, labor, and policy, with a sociotechnical approach to how AI may reshape the landscape of work at Alphabet. It will also consider how contract language, legislation, and organizing strategies can be adapted to ensure that workers receive a fair share of the value of any productivity increases unlocked by AI uptake. The committee includes members across product areas who will create a coalition with subject matter experts from academia, social justice organizations, and the tech industry.

The Committee on AI continues Alphabet Workers Union-CWA's tradition of engagement with issues around AI. Some highlights of that engagement include:

- When Google AI researcher Margaret Mitchell was suspended and then fired for blowing the whistle on discrimination and unethical AI practices inside the company, AWU spoke out in her defense.
- When a group of Google Help workers, contracted through Accenture, were pulled off their regular jobs and reassigned to work on Google's AI tool Bard, AWU was there to support them as they formed a bargaining unit. We were there when Google and Accenture retaliated by laying off almost the entire Google Help team that had worked on Bard, and we were there to file an unfair labor practice charge with the NLRB for these retaliatory firings.
- When workers at Appen, a Google contractor working on the Bard AI tool, reported that their workloads were skyrocketing and that they were being forced to view disturbing content, including child sex abuse material, we were there to help them organize.
- When one of our members sent a letter to Congress raising concerns that working conditions for AI workers were leading to unchecked disinformation, we were there to support and amplify.
- And when Google and Appen had AI raters illegally fired in retaliation for organizing, we were there to push back, helping Appen workers win reinstatement and back pay.

We've lobbied Congress to advocate for the inclusion of worker protections in any federal AI regulation scheme.
2022-12-01T00:00:00
https://www.alphabetworkersunion.org/campaigns/ai-committee
[ { "date": "2022/12/01", "position": 7, "query": "AI labor union" }, { "date": "2022/12/01", "position": 15, "query": "artificial intelligence labor union" }, { "date": "2023/01/01", "position": 42, "query": "artificial intelligence labor union" }, { "date": "2023/03/01", "position": 14, "query": "artificial intelligence labor union" }, { "date": "2023/04/01", "position": 6, "query": "AI labor union" }, { "date": "2023/05/01", "position": 7, "query": "AI labor union" }, { "date": "2023/07/01", "position": 7, "query": "AI labor union" }, { "date": "2023/07/01", "position": 14, "query": "artificial intelligence labor union" }, { "date": "2023/08/01", "position": 10, "query": "AI labor union" }, { "date": "2023/09/01", "position": 6, "query": "AI labor union" }, { "date": "2023/09/01", "position": 39, "query": "artificial intelligence labor union" }, { "date": "2023/10/01", "position": 39, "query": "artificial intelligence labor union" }, { "date": "2024/01/01", "position": 39, "query": "artificial intelligence labor union" }, { "date": "2024/02/01", "position": 18, "query": "artificial intelligence labor union" }, { "date": "2024/03/01", "position": 10, "query": "AI labor union" }, { "date": "2024/04/01", "position": 8, "query": "AI labor union" }, { "date": "2024/04/01", "position": 17, "query": "artificial intelligence labor union" }, { "date": "2024/05/01", "position": 13, "query": "artificial intelligence labor union" }, { "date": "2024/06/01", "position": 7, "query": "AI labor union" }, { "date": "2024/06/01", "position": 17, "query": "artificial intelligence labor union" }, { "date": "2024/06/26", "position": 36, "query": "artificial intelligence workers" }, { "date": "2024/07/01", "position": 16, "query": "artificial intelligence labor union" }, { "date": "2024/08/01", "position": 16, "query": "artificial intelligence labor union" }, { "date": "2024/09/01", "position": 8, "query": "AI labor union" }, { "date": "2024/09/01", "position": 14, "query": "artificial intelligence labor union" }, { "date": "2024/10/01", "position": 10, "query": "AI labor union" }, { "date": "2024/11/01", "position": 16, "query": "artificial intelligence labor union" }, { "date": "2025/01/01", "position": 43, "query": "artificial intelligence labor union" }, { "date": "2025/02/01", "position": 38, "query": "artificial intelligence labor union" }, { "date": "2025/03/01", "position": 33, "query": "artificial intelligence labor union" }, { "date": "2025/04/01", "position": 8, "query": "AI labor union" }, { "date": "2025/05/01", "position": 6, "query": "AI labor union" }, { "date": "2025/05/01", "position": 34, "query": "artificial intelligence labor union" }, { "date": "2025/06/01", "position": 6, "query": "AI labor union" } ]