title_s (string, 2-79 chars) | title_dl (string, 0-200 chars) | source_url (string, 13-64 chars) | authors (list, 0-10 items) | snippet_s (string, 0-291 chars) | text (string, 21-100k chars) | date (timestamp[ns], 1926-02-14 to 2030-07-14) | publish_date_dl (string, 0-10 chars) | url (string, 15-590 chars) | matches (list, 1-278 items)
---|---|---|---|---|---|---|---|---|---|
“Mobile journalism is the fastest way to change how Africa's ...
|
“Mobile journalism is the fastest way to change how Africa's story is told”
|
https://reutersinstitute.politics.ox.ac.uk
|
[] |
AI and journalism, Audience Engagement, Business of news, Climate journalism ... Media and Communications to train a cohort of journalists on mobile journalism.
|
Emmanuel Yegon is a journalist, a mobile journalism trainer, and the co-founder of Mobile Journalism Africa (MoJo Africa), a digital media outlet that aims to transform how Africa’s narrative is told through mobile journalism. Yegon and his team aim to provide trained journalists with a platform where they can share their professional work in a simple way. Their goal is to empower journalists to tell their own stories using smartphones while maintaining high standards and professionalism.
I recently spoke with Yegon about his outlet and the future of mobile journalism in Africa, where he has trained hundreds of journalists and journalism students. Our conversation has been edited for brevity and clarity.
Q. What is Mobile Journalism Africa?
A. Mobile Journalism Africa is a digital media start-up launched in 2018 with the goal of changing how Africa's narrative is told one story at a time and also equipping young people across the continent with the skill to tell stories so that together, we can work on changing how Africa's narrative is told.
We believe that the fastest way of doing that is through the smartphone because many people are connected or have access to these handsets. So we want to get as many people to be able to tell their own stories from different locations across the continent.
Q. What prompted you to start Mobile Journalism Africa?
A. Two reasons. First, we have thousands of graduates who come out of media or journalism school every year from different colleges in Kenya, but a very small percentage get absorbed into the mainstream media. We wanted this to be a way for fresh graduates to be able to start practising what they’ve learned in journalism school and to start building a portfolio they can show.
The second reason was creating a space for people to tell stories from across the continent. For a very long time, Africa's story has been told through the lenses of people from other parts of the world. And this is a discussion that is big at the moment, and I like the fact that now we have more people championing Africans telling their own stories, through their own eyes and their own lenses. So we wanted to create a platform for Africans to tell their own stories.
Q. What is the potential of mobile journalism in Africa?
A. Mobile journalism has incredible potential in the continent. I say this as somebody who has been in this space for a while. When we started out in 2018, people had a different perception of a journalist using a smartphone.
For instance, when I went to conduct an interview, some subjects used to be sceptical about the quality of what I was doing and whether I was serious about it. Today there's tremendous growth in interest and uptake not only by journalists but also by other content creators. Throughout the pandemic, many journalists worked from home or on the ground and were using smartphones to report. We saw people like Larry Madowo having a simple setup of a tripod, a smartphone and a ring light doing live links or reporting for CNN. This helped change many misconceptions about mobile journalism.
In 2021 we worked with the Aga Khan University's Graduate School of Media and Communications to train a cohort of journalists on mobile journalism. After the training, they were placed in different newsrooms for six months and were told to produce stories using smartphones. Then some of the media houses decided to retain them and even went a step further: they equipped other journalists with smartphones to produce short stories for their digital platforms.
Q. You mentioned that when you first started in 2018, the reception was not very positive. What do you think changed the perception of both subjects and audiences?
A. I’d say it’s a combination of a lot of factors. A question I usually ask is, “when you're watching any type of content, is the first thought that comes to your mind, which device was used to capture this?” More often than not, the answer is “no”, right? You just focus on the content and you consume the content the way it is.
Today there is more emphasis on the content rather than on how the content was made.
Many notable journalists do use smartphones to film or report live. Some media organisations that formerly conducted interviews with huge cameras and tripods are now recording footage on smartphones. This has changed how people view mobile journalism. A new wave of content creators using their smartphones to create various forms of content whether for TikTok, Instagram, or YouTube has contributed to reshaping perspectives.
Yegon during a training session.
Q. Which training programs do you provide?
A. The training gives us an opportunity to take journalists through what mobile journalism essentially is, so they understand their devices, with their limitations and strengths, and then how to work around those limitations to still achieve what they want.
We adapt our training to the knowledge of our trainees. But we often go through the basics of storytelling, how to shoot and edit videos with a smartphone as well as the dynamics of distributing content.
Q. What is the potential of mobile journalism in Africa as compared to the west?
A. British specialists in mobile journalism use great gadgets and produce work of an exceptional calibre.
Here in Africa mobile journalism is kind of new. And since mainstream media haven’t been especially receptive, we have a lot of opportunities to use these technologies to create more stories. Imagine how much material we could produce if we used our gadgets.
Additionally, the audience for our mobile journalism stories here in Africa has been very receptive. It is no longer a question of whether people consume mobile journalism. Audiences will always flock to where quality content is. MoJo stories are short and easily shareable, so audiences don't have to spend a lot of time or mobile data on them.
Q. Which African media organisations have embraced mobile journalism?
A. The first people that I learned from were at the BBC Nairobi bureau. They were keen on mobile journalism from the very initial stages. The others that joined the wave in Kenya included outlets such as Citizen TV, KTN and NTV. Some of the content of digital platforms like Kenyans is also produced with smartphones.
Across the continent, we have also inspired other people to start their own digital platforms. A start-up called Salone Messenger in Sierra Leone was founded by a journalist whom we trained. DW correspondents in Ghana and Zimbabwe also produce their work exclusively on smartphones.
In Uganda, we have the Media Challenge Initiative which also offers training and fellowships on mobile journalism and journalism in general.
I also have colleagues in Tanzania, Sudan, Nigeria and Somalia who are taking these skills seriously. In some places, mobile journalism is not mainstream yet and it’s only practised by small start-ups.
Q. What are some of the challenges that mobile journalism faces in the continent?
A. One of the major challenges is connectivity in some countries. Connectivity affects mobile journalism in two ways. First, the uploading or sharing of the content: without an internet connection, you cannot upload your work. Secondly, connectivity also affects quality and consumption. For instance, if you put out only 4K videos, you are restricting your content to those who have good internet connectivity.
The second challenge mobile journalism faces is that the production of these stories often requires access to paid software. Most applications are free for Android users, but those that are good for content creation are subscription based. Of course, they have a free version, but it has limited features. At MoJo Africa, we have done our best to reach out to some of the organisations behind these apps to work out some arrangements.
There is also a challenge with access to the tools needed for the production of MoJo stories. These are things like microphones, tripods and other accessories apart from the smartphone.
Q. What actions should newsrooms take in relation to mobile journalism?
A. Either they adapt or they will be left out. Newsrooms in the continent should be prepared to accommodate those who do mobile journalism.
The number of stories that newsrooms can produce is limited by the number of reporters they have. For instance, if they had to cover an election, do they have enough people to send out to every corner of the country? Is there a newsroom in Kenya that can confidently say that they have covered everything that's happened around the country with their reporters? The answer is no.
Elections are hard to cover. There are a lot of things happening at once. But what if they gave space for more journalists to report with their smartphones from all those different places? Newsrooms should adopt mobile journalism and make that part of their workflows.
Q. What does the future hold for mobile journalism in the continent?
A. What matters for me is how we can get these skills to as many people as possible across the continent. We should make mobile journalism so mainstream that it can be taught in universities. Mobile journalism is the fastest way to change how Africa's narrative is told.
| 2023-02-28T00:00:00 |
https://reutersinstitute.politics.ox.ac.uk/news/mobile-journalism-fastest-way-change-how-africas-story-told
|
[
{
"date": "2023/02/28",
"position": 69,
"query": "AI journalism"
}
] |
|
Hiring Simplified by Artificial Intelligence
|
Hiring Simplified by Artificial Intelligence
|
https://www.talentrecruit.com
|
[] |
Transform your hiring with AI-driven ATS software that learns from recruiter feedback daily. Enhance recruitment with advanced machine learning algorithms.
|
Candidate Experience
The Evolution of Candidate Experience Platforms: What HR Teams Need to Know
Candidate Experience Platforms (CXPs) have evolved from simple communication tools to AI-powered systems that personalize and transform every stage of the hiring journey. This blog explores how modern CXPs enhance engagement, streamline communication, and provide data-driven insights, helping HR teams deliver exceptional candidate experiences. Discover why investing in the right platform is crucial for building strong employer brands and future-proofing your recruitment strategy.
| 2023-02-28T00:00:00 |
https://www.talentrecruit.com/ai-powered-recruitment-software
|
[
{
"date": "2023/02/28",
"position": 13,
"query": "artificial intelligence hiring"
}
] |
|
The Hot New AI Job Skill: Asking Questions
|
The Hot New AI Job Skill: Asking Questions
|
https://www.kornferry.com
|
[
"For More Expert Insights"
] |
ChatGPT and other AI tools have spawned a need for “prompt engineers.” Why leaders are rushing to find these people before competitors do.
|
Apparently, there are dumb questions—at least when it comes to getting the most out of ChatGPT.
In the latest sign of how quickly the artificial-intelligence platform is infiltrating the corporate world, some companies are racing to hire “prompt engineers”: experts who can formulate queries that will yield the optimal result from the bot. Think of them as akin to SEO or social-media experts—only the stakes are a lot higher. Already, people in a wide range of fields, from Wall Street traders to HR specialists, are finding new ways to use these content-AI tools for marketing and research.
“Ultimately, this technology is only as good as the information it pulls,” says Barbara Rosen, global accounts lead for the Technology market at Korn Ferry. Which is where prompt engineers come in—posing questions which are specifically tailored to return the most relevant content. As ChatGPT and other generative-AI bots “learn” from user queries and prompts, they will increasingly respond in kind. Posing questions in the language of law versus finance, for instance, will likely yield answers which are different in both content and tone.
Many firms are both excited and overwhelmed by the new technology. Introduced just two months ago, ChatGPT now has more than 100 million monthly users. Three of the world’s biggest financial institutions already have banned traders from using the tool lest they expose sensitive personal or financial data that could invite regulatory action. Some technology and telecommunications firms have also banned employees from using the platform. “Companies are trying to figure out how to use it for good while protecting themselves from the bad,” says Sharon Egilinsky, a Korn Ferry senior client partner specializing in organizational strategy. She points out that some AI responses could contain misinformation, insensitive language, private data, and racial or ethnic bias, among other kinds of inappropriate content.
In fact, this new area of AI is moving so fast that “an entire field of careers and job roles will quickly evolve with it,” says Chris Cantarella, global sector leader for the Software practice at Korn Ferry. He points out, for instance, that “prompt engineer” is just a loose term that generally refers to anyone who is skilled in using data science to gain new business insights, innovations, and ideas around product development, marketing, customer experience, and more.
Cantarella says job creation around the new tools will have to balance minimizing risk with leveraging the efficiency and savings from their use. He envisions tech, legal, and financial companies having to hire more content moderators to prevent sensitive or proprietary information from being released or violating regulations. Companies, says Cantarella, will need more than just data-science majors, software-product leads, or engineers to fill these roles. The need for prompt engineers, he says, could open the door for specialists in ethics, sociology, philosophy, and other liberal arts to get involved in artificial intelligence.
“‘Prompt engineer’ is a term that I think will quickly go out of style,” says Cantarella. He believes that instead of one role, the field will evolve into an entirely new suite of career opportunities. “The key ingredient will be having people with great passion and skill for asking superbly thoughtful questions to get the most out of AI.”
For more information, contact Korn Ferry’s software practice.
| 2023-02-28T00:00:00 |
https://www.kornferry.com/insights/this-week-in-leadership/the-hot-new-AI-job-skill-asking-questions
|
[
{
"date": "2023/02/28",
"position": 62,
"query": "artificial intelligence hiring"
}
] |
|
Careers at Redwood Research
|
Redwood Research
|
https://www.redwoodresearch.org
|
[] |
At Redwood Research, we strive to ensure the safe development of artificial intelligence through rigorous technical research, thoughtful analysis, and ...
|
About Redwood Research
At Redwood Research, we strive to ensure the safe development of artificial intelligence through rigorous technical research, thoughtful analysis, and strategic advising.
About the Role
Redwood Research staff think carefully about threat models associated with AI, analyze countermeasures (through empirical methods or otherwise), and communicate their results. This role is based in Berkeley, California.
Example areas of research include:
Required Skills
Machine learning: Our work involves both LLM APIs and running ML experiments on our own hardware.
Software engineering: We frequently run into problems that require software engineering to solve; for example, orchestrating large numbers of Docker containers to run many AI agents in parallel, or running ML experiments at moderate scale efficiently (see the sketch after this list).
Strong communication and collaboration skills.
A deep interest in AI safety and mitigating potential risks.
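The software engineering item above mentions orchestrating many Docker containers to run tasks in parallel. As a rough illustration only (not Redwood's actual tooling; the image name, command, and worker counts are placeholders), one common pattern uses the Docker SDK for Python together with a thread pool:

```python
# A minimal sketch of running many short-lived containers in parallel.
# Assumes Docker is running locally and `pip install docker` has been done.
from concurrent.futures import ThreadPoolExecutor

import docker  # Docker SDK for Python

client = docker.from_env()

def run_agent(task_id: int) -> str:
    """Run one containerized task and return whatever it printed."""
    container = client.containers.run(
        image="python:3.11-slim",  # placeholder image
        command=["python", "-c", f"print('agent {task_id} finished')"],
        detach=True,
    )
    container.wait()               # block until the container exits
    output = container.logs().decode()
    container.remove()             # clean up the stopped container
    return output

if __name__ == "__main__":
    # Launch 32 containerized tasks, at most 8 running at a time.
    with ThreadPoolExecutor(max_workers=8) as pool:
        for line in pool.map(run_agent, range(32)):
            print(line.strip())
```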
| 2023-02-28T00:00:00 |
https://www.redwoodresearch.org/careers
|
[
{
"date": "2023/02/28",
"position": 87,
"query": "artificial intelligence hiring"
}
] |
|
AI And Your Resume | The Agency
|
AI And Your Resume | The Agency - Recruitment firm based in the Cayman Islands
|
https://theagency.ky
|
[
"Vanessa Gowen"
] |
I asked it to describe itself and here's what it told me : “ChatGPT is an artificial intelligence language model developed by OpenAI. ... hiring · interview ...
|
AI And Your Resume
With the arrival of ChatGPT and Bard, the question arises of whether these new AI tools can be used by job seekers to write resumes. Often, we are asked by candidates for help with resume writing as it is a daunting task. Some already know what to do and just need a little help with polishing it up. Others need to start from scratch, have no idea where to begin and could really do with some guidance.
In come our friends at Microsoft and Google with this ground-breaking AI in late 2022. As far as I am aware, I’m yet to receive an AI-generated resume or cover letter (but I suspect it’s only a matter of time) so I decided to see how this tool might work.
What is ChatGPT?
ChatGPT, along with its lesser-known competitor Bard, is the hottest new technology that has the tech world abuzz. Only out for a few months, it is currently free and generates responses and information based on the dialogue provided to it. It is revolutionizing the content space.
As ChatGPT can generate human-like text in response to user prompts, it basically provides an answer to any question. I asked it to describe itself and here’s what it told me: “ChatGPT is an artificial intelligence language model developed by OpenAI. It is a state-of-the-art machine learning model that uses deep neural networks to generate human-like responses to natural language inputs. ChatGPT is capable of understanding and responding to a wide range of topics and can provide information, advice, or assistance to users in real-time. Its abilities include natural language processing, text generation, question-answering, and conversational engagement. ChatGPT can be used for a variety of applications, such as chatbots, virtual assistants, customer service, and more.” Proceed with some caution. These last few weeks have shown how glitchy it can be and reminded people this is tech still in its infancy.
Could AI be useful for resume writing?
The short answer is yes! AI could help you build a resume from scratch, freeing up time. However, it is essential that you, the human whose life and experience this document is about, do not rely on this tech alone, but consider it a helpful starting point if you decide to try it.
For example, you can ask ChatGPT to “build me a resume”, and the AI will ask for your information (such as name/ job titles/ education, etc.) and will write a basic resume document based on this info. Handy right?!
Have a look at what it told me when I asked it how it might be able to assist…
This AI has lots of beneficial strengths to create a solid resume foundation, but this doesn’t eliminate what YOU must do to ensure you are working with the best possible document to sell you and your background/ experience to employers here in Cayman. No technology, no matter how clever, can ever truly demonstrate your experience as well as you can and is not a replacement for putting in some graft. It is up to you to provide it with the right info and proofread and edit the results.
From what I’ve seen so far, I still believe a bot can only do so much as the resume is such a personal document – but it can certainly help make a start and save you some time if you’ve been staring at a blank Word doc for days!
Like many things, nothing will beat a human touch and the team at The Agency are always here to provide you with resume/ CV feedback also!
| 2023-02-28T00:00:00 |
2023/02/28
|
https://theagency.ky/ai-and-your-resume/
|
[
{
"date": "2023/02/28",
"position": 100,
"query": "artificial intelligence hiring"
}
] |
Real Estate AI Is The Executioner, Not the Judge
|
Real Estate AI Is The Executioner, Not the Judge
|
https://blog.revaluate.com
|
[
"Chris Drayer"
] |
The imbalance in numbers of agents and lack of proceeds to support so many agents will create massive layoffs of crummy agents who entered the market during ...
|
A growing threat looms on the horizon, shaking the real estate industry at its very core. No better solution has emerged for the industry-wide business model of how a brokerage is organized and run; however, the inevitable changes seem to be coming sooner rather than later. Is this a worst-case scenario? Perhaps. Is there enough truth and data for it to be plausible? Yup – I believe reality lies somewhere in the middle of our risk spectrum. And no, the real threat is not AI.
An Exemption Problem
There’s a legal battle (several cases, actually) that’s been brewing for years in court. The TL;DR: Real estate agents are currently 1099 independent contractors, not W2 employees, and many in leadership believe that will change when the government forces realtors to become W2 employees. Currently, the industry has been riding on an “exemption” for its classification, brought about by the mighty and long-lasting power of the National Association of Realtors (NAR) lobby in Washington, which is second only to the NRA in size. (Good work, team RPAC;)
Cutting Cost (Read: Agent and Staff Reductions)
Not all realtors are 1099, but about 87% (according to 2021 NAR data) are independent contractors. When the government “asks” that the designation change to W2, agents will HAVE to be paid minimum wage. The federal minimum wage is $7.25/hr, or about $15k/year. But most believe that, due to a number of factors, this will increase significantly soon. So consider the living wage for the sake of this discussion, since few of us in the industry could (or would) live on the minimum wage of $15k/year.
“For one full-time working adult with no children in 2022, the national average living wage is $17.46 per hour – or $36,311 annually” per Just Capital and MIT.
In this livable wage future each agent would be paid by the broker a base rate of $36k/yr. Friends – there is no 1099 brokerage that will keep all its agents if this happens. The cost is simply too great.
The executioner from the movie “Blazing Saddles” hanging the rider and horse.
Agents vs Math
What would it take for an agent to keep her/his job at a brokerage? How many transactions would be acceptable?
Using national averages from Zillow, the average home value is $329k as of this writing. The average total sales commission across both buyer and seller sides of the transaction is 5.37%. Divided by two, that is 2.68% per side ($8,817.20 per side). With this math, the average agent needs to sell five transaction sides per year to pass break-even on the cost of the base. Assuming the industry is in business for profit, in my estimation agents would need to do around twelve transaction sides per year on average to remain employed. And making matters more difficult, with a recession hovering, we are not in a 2021 boom economy, either. There are too many agents. The number of agents exploded nationwide in the last two years. Estimates are that there are currently over 2M licensed real estate agents in the US, and that there will be roughly 4.5M housing transactions – thus 2.25 transactions per agent “available” with this number of agents.
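To make that arithmetic easy to check, here is a small Python sketch using only the figures quoted in this article (the rounded 2.68% per side, the $36,311 living wage, roughly 2M agents and 4.5M transactions); none of these inputs are independent market data.

```python
# Reproduces the back-of-the-envelope math above using the article's own figures.
avg_home_value = 329_000        # Zillow average home value quoted above
per_side_rate = 0.0268          # 5.37% total commission / 2, rounded as in the article
per_side_gross = avg_home_value * per_side_rate      # $8,817.20 per side

living_wage_base = 36_311       # national average living wage per year
breakeven_sides = living_wage_base / per_side_gross  # ~4.1, so 5 sides clears the base

agents = 2_000_000              # estimated licensed agents in the US
transactions = 4_500_000        # estimated annual housing transactions
transactions_per_agent = transactions / agents       # 2.25 "available" per agent

print(f"Gross per side:         ${per_side_gross:,.2f}")
print(f"Sides to cover base:    {breakeven_sides:.1f}")
print(f"Transactions per agent: {transactions_per_agent:.2f}")
```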
The imbalance in numbers of agents and lack of proceeds to support so many agents will create massive layoffs of crummy agents who entered the market during the boom and don’t have the necessary skills to survive these conditions, and even middle of the road producers who are content selling six homes a year will be removed from office.
Side note: Hey OG’s! Do you recall “Raise the bar” and did you support the cause? Now is your time to shine!
The new problem will be staffing desks and phones and getting done the repetitive tasks that were previously taken care of by entry-level 1099 agents.
The AI Answer
Real estate brands, teams and brokers will need, no, HAVE to reduce cost, and will inevitably turn to real estate AI to do more of the tedious, monotonous work, just as they have with VAs (virtual assistants). In addition, easy lead qualifying and simple Q&A solutions are a far more dependable, accurate and inexpensive alternative to 1099 desk, chat and phone staff. The huge industry trade shows (if they still exist) will be jammed with new, untested AI tools that promise to be the all-in-one printer. Caveat emptor – failure is always an option. But ChatGPT has a winning model on its hands. For example, I asked ChatGPT the type of question an entry-level realtor might be asked: “Will I need a septic tank inspection prior to selling my house?” Its answer was very on point and scarily impressive. ChatGPT gave me good advice.
The End Will Be Swift
AI may be viewed as the executioner; however, in this case it’s likely the judge that will be the death of hundreds of thousands of real estate agents’ jobs. With the stroke of a pen and the bang of a gavel, the end of the 1099 exemption for Realtors® will signal a new era in real estate.
There are a few companies today, such as Redfin, that are already built for this model and will likely thrive during the transition. However, the bulk of today’s massive brokerages like eXp, Keller Williams and RE/MAX will be forced to legally comply or die. Companies like to make money – and so they won’t die; they will pivot and look for the most cost-effective method to continue making money. That solution has been handed to them in the form of ChatGPT and other AI (like, you know, Revaluate) to help them survive and become vastly more efficient into the future.
| 2023-02-28T00:00:00 |
2023/02/28
|
https://blog.revaluate.com/real-estate-ai-is-the-executioner-not-the-judge/
|
[
{
"date": "2023/02/28",
"position": 62,
"query": "artificial intelligence layoffs"
}
] |
AI/ML: The good, the bad, and the buzz
|
AI/ML: The good, the bad, and the buzz
|
https://adobe.design
|
[
"Brooke Hopper"
] |
With AI tools, an illustrator could create and execute the concept and draw the main parts of the image and AI could generate the rest in his illustration style ...
|
AI/ML: The good, the bad, and the buzz
A look at technology's influence on illustration and design
Illustration by Artranq / Adobe Stock
Up until about a year ago, Artificial Intelligence (AI) and Machine Learning (ML) weren’t top of mind for most people—even those in creative industries. Enter Generative AI. Now everyone from students to morning news anchors is talking about them and, as can be expected, not all that information is correct. Misinformation or not, AI/ML is a topic that generates strong feelings.
Because of my deep involvement in drawing and painting at Adobe, my interest around AI/ML centers around the digital artists and illustrators who were among the first to raise concerns about text to image AI generators impacting their professions. I understand and empathize with their concerns. But when I think back on how technology has affected artists and their art throughout history, I can’t help but see AI/ML as a “tool” that is creating a massive shift in how people make and consume art and design.
As I step outside the buzz of these technologies and take a closer look at some of the impact (good, bad, and speculative) that they’re having on the illustrative arts, it’s important to remember that progress has never marched backward. AI/ML has been present in many tools—not just creative ones—for more than a decade, and Generative AI is not going away. That said, it’s imperative that creatives everywhere step in, share their voices, and help shape its evolution.
A few definitions
These technologies are changing rapidly but there is some terminology that will survive even as they do:
Artificial Intelligence is simulated human intelligence for machines. Massive amounts of data create patterns, and those patterns are used to make future predictions. Many people in creative industries are concerned about this because of its generative potential, but it can be extremely useful for repetitive work; the most recognizable example is customer service chatbots that can replicate human-like conversations.
Machine Learning is a subset of AI that allows computers, programmed with specific learning algorithms (models), to learn from data. There are multiple learning models that can be applied to the data; in the case of Generative AI, the data can be text, images, audio, or video and the output is generative—that is, existing data is used to create new data.
Generative AI is programming algorithms used to create new content. There are many generative products (DALL-E, OpenAI, Stable Diffusion, Midjourney) but the most well-known is ChatGPT, which gained worldwide attention with its text generation model. With all Generative AI, data accessed using text and image prompts is used to create new text and images.
Web3, not to be confused with Web 3.0, is the third generation of the World Wide Web. The most important aspects of Web3 are decentralization and ownership: People will have complete control over their works, their digital assets, their online identities, and where that information is stored.
Technology’s impact on creative output throughout history
Feeling uncomfortable about new technologies is not without precedent. Understanding how they’ve impacted society in the past can help us focus on how we can affect and shape the evolution of AI, ML, and generative technologies:
Moveable type & the printing press
It’s difficult to imagine that the invention of the printing press—and its contribution to widespread knowledge—had anything other than a positive impact. At the time, though, there was excessive concern that widespread access to printed materials would cause a pandemic of misinformation.
Cameras & digital photography
While not initially a form of artistic expression, some Renaissance painters used very early camera technology to aid traditional painting but were hesitant to divulge it as part of their process for fear of being accused of “cheating.” Correspondingly, anyone who watched the advent and adoption of digital photography can probably remember the stigma faced by those artists who embraced it as not “serious.”
Templates & automation
The onset of templatized website design (by companies like Squarespace and Wix) is not so far in the past that it’s difficult to remember the job security concerns of designer/developers whose professions included coding websites from scratch. What was at one time considered an industry-killing technology has been adopted by those same creative professionals to spend less time on development and more time on designing/creating content.
The struggles we’re facing with these new technologies
I want to quickly make a distinction between inspiration and sampling: People can and do reference artists' work for inspiration; AI can and does reference artists' work as samples. Both benefit from artistic reference, but people interpret art to visualize new ideas and AI uses it to generate new ideas. Since in certain circumstances Generative AI models work by scrubbing the Internet to “learn,” artists and illustrators are concerned and angry that their work is potentially being used, broken apart, rearranged, and “regenerated” without permission or attribution. With this type of content generation artists and illustrators are primarily concerned about attribution and copyright, but there are other issues worth mentioning:
1. Attribution
Generative AI models are sometimes trained on web data to create images. I’ve talked to many artists who’ve seen remnants of their signatures in the art outputs of certain Generative AI products. It’s not hard to understand why those artists believe that not only is their work being used (taken), but they’re also not being credited for their intellectual property.
2. Copyright
With Generative AI, copyright is a double-edged sword that serves neither AI content creators nor the artists whose work has been used to train it.
Since only humans can be “authors” under the Copyright Act, there is no copyright protection for works generated solely by machines. There was headway on that front last year when the U.S. Copyright Office granted protection to Kris Kashtanova for the comic book Zarya of the Dawn, but that protection is now partially canceled.
3. Style Imitation
Any artist with work on the Internet may have had their work used to train AI without their knowledge. In turn, a generative AI model could potentially imitate the style of a freelance illustrator who makes their living off their personal style. This could understandably feel like a violation.
Adobe designer Dana Jefferson trained an AI model on her illustration style. She generated four images and posted them, alongside five originals, in a nine-square grid. It’s nearly impossible to determine which is which.
4. Site scraping
Site scraping involves the extraction of content and data from websites using a programming script. In generative visual technology, most AI models use diffusion, which destroys content and data by adding “noise.” From there the AI model essentially regenerates that same content and data by hallucinating new, coherent visuals from the noise. Simply put, Generative AI collects visual information from the web, destroys it, then references its “knowledge” to remove that noise and create new visuals.
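As a toy illustration of that “add noise, then learn to remove it” idea (a sketch only, not any production model's code), the forward, destructive half of the process can be as simple as repeatedly mixing Gaussian noise into an image array; the hard part that diffusion models actually learn is the reverse, denoising direction.

```python
import numpy as np

rng = np.random.default_rng(0)
image = rng.random((64, 64, 3))  # stand-in for a real training image, values in [0, 1]

noised = image.copy()
for step in range(10):
    noise = rng.normal(0.0, 0.1, size=image.shape)
    noised = np.clip(noised + noise, 0.0, 1.0)  # each step destroys a little more structure

# After enough steps the array is close to pure noise. A trained diffusion model
# runs this in reverse, predicting and subtracting the noise step by step to
# "hallucinate" a coherent image back out of randomness.
print("mean absolute drift from the original:", float(np.abs(noised - image).mean()))
```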
5. Consistency
One of the criticisms of Generative AI is that the results aren’t consistent. In part, results vary because of differences in how AI systems are trained, but it’s also because AI is always learning. Typing the same prompt into an AI system multiple times will result in different results each time. I recently tested this with the prompt Frida Kahlo Self Portrait with Thorn Necklace and Hummingbird, and it resulted in 16 images, each an unsettling iteration of the original.
Results of the prompt Frida Kahlo Self Portrait with Thorn Necklace and Hummingbird.
It’s interesting to note how AI interprets prompts (words). I had my four-year-old daughter help me with a few to test my assumptions about the randomness of output. Her first was bones of dragons on the grass with plants, animals, and people, which created four Surrealist landscapes. Her second, a train this big with persons and balls that is all red, returned a series of round(ed) trains. After seeing each of the results, she expressed that not only weren’t they what she expected, they weren't what she wanted.
The results of two generative AI prompts. On the left, bones of dragons on the grass with plants, animals, and people, and on the right, a train this big with persons and balls that is all red.
In the end, AI generated art is fun to play with, and quite probably useful for reference or inspiration, but if consistency or “personal style” are what people are looking for, there’s no certainty that they’ll find either.
6. Bias & lack of diversity
I recently used generative software to create a family portrait. I’m white and my husband and children are also white. After a few attempts, I generated portraits that resembled the four of us. There was one glaring problem: Bias. Not once had I used white as a descriptor, but each time my results came back with white likenesses. I decided to test this bias with a prompt unrelated to me or my family and input woman in her twenties with curly black hair and brown eyes. It should have generated many ethnicities. Unfortunately, that’s not what happened. Every output was a white woman with dark hair. AI models are continually learning and training on similar data sets; in theory, any bias that exists will only amplify over time unless corrected. I returned recently to the same prompt and had a bit more success, which means that humans are working to address the biases in this technology… but it’s a complex problem for the entire industry without an easy solution.
Results of the generative AI prompt woman in her twenties with curly black hair and brown eyes repeated over a six-month period, with some improvement in output.
On the bright side
From a creative perspective, part of the animosity toward AI and ML is a result of their newness and the uncertainty they create: Am I going to lose my career? Will my skills become irrelevant? Are we all just going to end up being data curators? Or will AI and ML simply become the latest generation of tools for the creative industry? We might not have those answers for quite some time. I know it can feel like we're on a bullet train to dystopia, but history, and the forever optimist in me, have urged me to look at how AI and ML could help artists and illustrators, and at what could come of embracing and helping to shape these technologies.
1. Detail work
AI and ML are already making artists' lives easier, and Generative AI could help illustrators skip right over time-consuming and painstaking detail work to focus on the pieces of the creative process they enjoy. There are tedious, repetitive parts of the creative process that don’t require creative or artistic skill, but they must be done. AI will be able to complete those tasks, make composition suggestions, and handle last-minute additions and color changes.
With AI tools, an illustrator could create and execute the concept and draw the main parts of the image and AI could generate the rest in his illustration style. Crowd at the Concert illustration by Ramjana / Adobe Stock
AI could also generate alternate poses for a character for animation or storyboarding. An illustrator could create three or four versions of it and AI could generate every in-between pose. Cute monsters in different yoga poses illustration by Roi_and_Roi / Adobe Stock
2. New output models (from 2D > 3D)
There's a steep learning curve to make the transition from 2D to 3D; it takes a lot of time and practice, particularly for 2D artists who want to create 3D work in a digital space. But for 2D artists to be successful in an immersive three-dimensional future, they’ll either need to learn a new skill or allow technology to help them with it. AI technology can already detect form and shape from two-dimensional information, so it’s not a stretch to think that in the future it could enable 2D artists to create dimension by rendering their art in 3D.
3. Provenance
The good news is that the U.S. Copyright Office is actively evaluating the role of AI on creativity. Still, there is currently no way to prove provenance of AI-generated art. But a few years ago, to fight misinformation and add a layer of trust to digital content (primarily to address deepfakes), Adobe partnered with the New York Times and Twitter to create the Content Authenticity Initiative. If that initiative is taken just one step further, and AI is trained ethically, I can imagine a future where artists, who’ve always had to take steps to ensure their work isn’t used without their permission, could have the freedom to create without worry that someone would steal their intellectual property.
Unfortunately, creative communities aren’t strangers to other people or entities trying to pass off art as their own. Over the years, several fast-fashion brands have been called out for the unauthorized for-profit appropriation of artwork for T-shirt designs, graphic patterns, and even ceramics. In 2020, Urban Outfitters was accused of stealing Watiya Tjuta—a design based on the sacred art of spear-making by Australian aboriginal artist Mitjili Napurrula—and using it on rugs for sale in their stores. In the past, many artists had claimed to have had their work plagiarized by the company and within days of the comparison being called out on social media, the company removed the rug from their stores.
Imagine the same scenario in a digital future where authenticity and provenance are built into our tools: If a company tried to use an existing original work on their products without permission, it would be flagged as such, with the artist’s name attached to the piece. Skeptics will say that there are always ways to steal, appropriate, and plagiarize, but digital tokens or signatures, that could be connected to something as simple as a screenshot, could go a long way toward minimizing those infringements.
4. Collaboration
If our future is all about bridging physical and digital connections, you can't really talk about the future of creativity without talking about collaboration. They'll be synonymous. Illustrators, artists, and designers could work in tandem, generating variations and extending the work on the fly, even inviting clients to take part. Tools will become more accessible, and the digital canvas will become dimensional and social. Media types and mediums could be mixed, and anything could become a canvas.
5. Accessibility
Currently, there is no good digital drawing solution for artists with low to no vision. The current process is through touch where someone feels where they want to draw and draws there. But the future of these artists, and their ability to paint digitally via the same means that they paint traditionally today, may be through AI-generated audio and haptics, or semantic tags for screen readers. Generative AI (in the reverse of the text-to-image way it’s being used now) could be used for on-the-fly-generated alt text for images, the naming and labeling of every tool and action in an application, or all screen reader-only content.
It’s impossible to predict the ultimate impact of these new technologies on artists and artistic output, but history has taught us that new technologies don’t go away, they evolve. Since each of us has a voice and role in our individual and collective creative futures, I’m optimistic that we can use them to begin shaping these technologies in ways that benefit everyone.
| 2023-02-28T00:00:00 |
https://adobe.design/stories/leading-design/ai-ml-the-good-the-bad-and-the-buzz
|
[
{
"date": "2023/02/28",
"position": 3,
"query": "artificial intelligence graphic design"
}
] |
|
How to Create Content with AI?
|
Unlocking Creativity with AI: A Guide to Creating Content with Disruptive Technology
|
https://designwizard.com
|
[
"Rahul Shevde"
] |
By using AI for content creation, you can generate dozens or even hundreds of pieces of content in a fraction of the time it would take a human to do the same.
|
Simply put, AI content creation can be defined as using artificial intelligence to generate written or spoken content. Think of it like having a robot writer or robot podcaster on your team. Instead of having to come up with ideas, research, and write articles or scripts, you can just give the AI some basic information and let it run with it. The AI will analyze the information and come up with something all on its own.
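As one concrete illustration of "giving the AI some basic information and letting it run with it", here is a minimal sketch of calling a hosted language model through the OpenAI Python client. It assumes the openai package (v1.x) is installed and an API key is set in the OPENAI_API_KEY environment variable; the model name is a placeholder, since client libraries and model names change frequently.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {"role": "system", "content": "You are a concise marketing copywriter."},
        {"role": "user", "content": "Write a 100-word product description for a reusable water bottle."},
    ],
)

print(response.choices[0].message.content)
```

Looping this call over a list of briefs is how a single person can turn out many drafts quickly, which is where the volume claims below come from.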
The results can be quite impressive. By using AI for content creation, you can generate dozens or even hundreds of pieces of content in a fraction of the time it would take a human to do the same. And, because the AI can analyze data and come up with insights that a human might miss, the content it produces can be more accurate and informative.
Of course, AI content creation is not perfect. The AI can produce content that is a little dry or robotic sounding. But with the advancement of technology and more data to train the AI model on, it's getting better and better at producing natural and engaging content.
| 2023-02-28T00:00:00 |
2023/02/28
|
https://designwizard.com/blog/how-to-create-content-with-ai/
|
[
{
"date": "2023/02/28",
"position": 9,
"query": "artificial intelligence graphic design"
}
] |
Can AI Choose a Font for Your Graphics? - LaiqVerse
|
Can AI Choose a Font for Your Graphics?
|
https://laiqverse11.wordpress.com
|
[
"Laiq Ahmed Qureshi"
] |
AI can assist graphic designers in selecting a font, but it's important to understand that it should not be the only factor considered when making a decision.
|
When it comes to graphic design, selecting the right typeface is critical. The typeface you choose can greatly impact the message and mood of your design, and ultimately determine whether your content is effective or not. With so many fonts to choose from, the process of selecting the perfect typeface can be overwhelming for graphic designers. However, artificial intelligence (AI) has made significant strides in assisting designers with this task. In this blog post, we’ll explore how AI can help graphic designers choose the most suitable typeface for their designs. We’ll examine the benefits and limitations of AI-powered typeface selection, provide examples of popular fonts for different design purposes, and discuss the factors that designers should consider when making a final decision. Let’s dive in!
As a graphic designer, selecting the appropriate typeface for your design is crucial for conveying your message effectively. Typeface selection can communicate the tone and mood of your content, and catch the attention of your audience. With thousands of fonts available, choosing the perfect one can be a daunting task.
Artificial intelligence (AI) can be a valuable tool for graphic designers in selecting the most suitable typeface for their designs. AI algorithms analyze various design parameters, such as the target audience, the message, and the overall aesthetics, to suggest the most appropriate typeface for your design.
One example of AI-powered font selection is Adobe’s Font Finder. The tool uses machine learning to identify and suggest matching fonts from Adobe’s vast library. Font Finder analyzes visual characteristics of your design and provides the most relevant font options for your needs. Similarly, Google’s Font Pairings tool offers a range of font combinations that complement each other and can be used together in your design.
AI can assist graphic designers in selecting a font, but it’s important to understand that it should not be the only factor considered when making a decision. Human expertise is still crucial in making the final choice based on the design’s purpose, audience, and branding.
When choosing a typeface, graphic designers consider factors such as legibility, readability, and brand guidelines. For example, a finance company may opt for a more traditional and professional serif font such as Times New Roman, while a trendy startup may use a more modern and creative font such as Proxima Nova.
Other examples of popular fonts for specific design purposes include Helvetica for clean and modern designs, Baskerville for classic and elegant designs, and Futura for futuristic and minimalist designs.
While AI can provide valuable assistance to graphic designers in selecting appropriate typefaces for their designs, it’s important to understand that AI should be used as a tool, and the final decision should be made by the designer based on their expertise and creative vision. AI-powered font selection can simplify the process and save time, but it may not always provide the perfect solution for every design.
As AI technology continues to develop, it’s likely that we will see more innovative tools and solutions for typeface selection in graphic design. AI-powered design tools may eventually become a standard part of a designer’s workflow, freeing up time and energy for more creative tasks.
In conclusion, AI can provide valuable assistance to graphic designers in selecting appropriate typefaces for their designs. However, it’s important to understand that AI should be used as a tool, and the final decision should be made by the designer based on their expertise and creative vision. Factors such as the design’s purpose, audience, and branding should also be carefully considered when selecting a typeface.
| 2023-02-28T00:00:00 |
2023/02/28
|
https://laiqverse11.wordpress.com/2023/02/28/aitypefaces/
|
[
{
"date": "2023/02/28",
"position": 40,
"query": "artificial intelligence graphic design"
}
] |
ADC101+1: Exploring Human + AI Design
|
ADC101+1: Exploring Human + AI Design
|
https://www.oneclub.org
|
[] |
... AI and the creative industry. SPEAKERS: Magnus Atom is a multi-disciplinary animation director, illustrator, motion graphics designer, and commercial artist.
|
ABOUT THE EVENT:
Join us as artists and designers discuss the opportunities and challenges around AI and the creative industry. This panel is taking place in support of ADC101+1, a captivating multi-platform experience featuring work of 101 designers and creatives from around the world in collaboration with Artificial Intelligence tools.
The collective art initiative, created by Pereira O’Dell and developed by lili Studios, serves as both an interactive gallery of work and global forum for engaging in dialogue and debate about the opportunities and issues involved with AI and the creative industry.
SPEAKERS:
Magnus Atom is a multi-disciplinary animation director, illustrator, motion graphics designer, and commercial artist. Born in Hawaii and raised in Brooklyn he has studied art at numerous prestigious art schools and received his BFA from Parsons The New School of Design. His clients have included VICE, Netflix, Playboy, The New York Times, Spotify, Channel 4, and Apple to name just a few. He is known for combining his various passions of CGI, hand-drawn animation, illustration, and motion graphics to create his signature style, which has won him recognition across various design industry publications. In 2021 he was the recipient of the Young Guns 19 award and is now currently represented globally as an animation director by Strange Beast in London. He currently resides in the small upstate New York town of Saratoga Springs with his puppy and wife.
Pavithra Dikshit is a Design Director at Landor&Fitch, India. Her portfolio of work is spread across corporate and consumer brands focused primarily on South Asia’s emerging marketplace. Outside of this, her personal practice lies in the cross-sections of life explored in a cultural context. Pavithra is a key member of Kadak and linked to TypeCamp. In 2017, she was listed as one of the 20 under 35 emerging designers in India. She is a mentor at Stella, which aims to develop and accelerate the careers of women leaders within WPP. In 2021, Pavithra was on the jury for One Show’s Young Guns 19 – under 30 portfolio competition. Her interview, ‘The World of Indian Graphic Design’, can be heard on AudioGyan. Pavithra’s work has been published on WeTransfer, Verve, City Lab, Bloomberg and Sunday Guardian. Her ongoing piece, ‘Verbal Self Portrait’, is in talks to be published in 2023. Prior to this, she also co-founded a small initiative – Postcard People – and taught art at Teach for India. In her spare hours, she reads voraciously, cooks heirloom recipes and tends to the family garden.
Ally Behr is a Design Director originally from South Africa, and currently based in Amsterdam, with 10+ years of experience across digital product and brand communication. She has led the creation of award-winning digital experiences for clients across the globe, along with a team of multi–disciplinary designers. Ally is focused on creating unique digital products and experiences by leveraging emerging technology with experimental design, interaction, typography and motion.
Jason Naylor is a NYC artist, known for bright colors and even brighter themes, commonly found on large scale murals around the world. His work explores the communication of human emotions using organic shapes, bold colors and a strong design aesthetic. Jason has worked with various brands like Coach, Pepsi, Toyota, and Microsoft. He has received global recognition for his work including a Golden Novum Design Award. In 2018, BUMBLE named Jason one of its 100 Most Inspiring New Yorkers. People love Jason for his unforgettable colors and the optimism woven into his work. His relentless positivity and kindness translate across many mediums and applications, and his technicolor murals capture the hearts of everyone who sees them. Jason brings a fresh taste of love and kindness to all of his projects, and desires to use his work to spread love, good vibes, and also do some good in the world.
MODERATOR:
PJ Pereira is the co-founder of Pereira O’Dell and one of the industry’s most enthusiastic pioneers — particularly when it comes to the always-evolving intersection of advertising, entertainment, and technology. His passion for the frontiers of marketing has earned his agency hundreds of awards, including an Emmy for innovation in storytelling. It got him named the Top CCO in Adweek's Creative 100, recognized by AdAge’s Creativity 50, and noted in 4A's 100 People Who Make Advertising Great. In the judging rooms, PJ has chaired entertainment, digital, and social media categories in all of the world’s major shows, including the One Show and ADC. Beyond advertising, PJ Pereira is a best-selling novelist. He has published four books in his native Brazil, all clashing mythology with the modern connected world. In 2023, under the pen name PJ Caldas, he’ll launch his first novel in English, this time combining the martial arts with neuroscience and… artificial intelligence.
DETAILS:
DATE: Tuesday, February 28, 2023
TIME: 12:00 pm ET
WHERE: Online
PRICE: Free
| 2023-02-28T00:00:00 |
https://www.oneclub.org/events/-adc101
|
[
{
"date": "2023/02/28",
"position": 61,
"query": "artificial intelligence graphic design"
}
] |
|
Graphic design essentials for beginners and 5 tips ...
|
Graphic design essentials for beginners and 5 tips for mastering the craft
|
https://profiletree.com
|
[] |
AI in Graphic Design: Uncover the transformative power of Artificial Intelligence, as it automates repetitive tasks, generates innovative design options ...
|
Graphic design essentials for beginners and 5 tips for mastering the craft
Graphic design is the process of curating an image for enhanced visual appeal. It can convey a certain mood, tone or feeling and be used for advertising purposes or as a marketing method.
Graphic design is all around us; it’s in product packaging, website design, magazine covers, poster design, billboards, or any type of visual material that attempts to grasp your attention and promote whatever it is conveying.
But where do you start with graphic design, and how do you learn the trade? Luckily, we’ve compiled a list of basic graphic design principles that will help you get interested in it even if you can’t afford to hire a digital design agency like Fivecube.
1. Essential skills for graphic designers
As a graphic designer, you will need to have sound knowledge of the essential skills needed to start composing images that attract attention and captivate your audience. Learning these skills will go a long way in your graphic design career and give you the tools you need to create unique images.
Colour theory
All graphic designers should have a good grounding in colour theory, as certain colours complement one another and convey different moods. For example:
Red: Conveys feelings of passion, warning, anger and courage.
Yellow: Conveys feelings of happiness, hope, optimism and cheerfulness.
Blue: Conveys feelings of inspiration, learning, calmness, intuitiveness, honesty and trust.
You should select an appropriate colour scheme depending on the message you want to communicate to your audience. In a world where we see multiple media messages daily, colour theory is one of the best tools a graphic designer has in their arsenal of initially capturing attention.
Hierarchy of information
When creating visual messages, it is important that you construct an image with a sense of hierarchy of the most important information. If everything is the same size and format, it will be difficult for audiences to differentiate what you’re trying to communicate and even more so, it will fail to grab their attention.
You should use a range of titles, headings and alternate fonts to draw attention to the most important key messages in your visual image.
Contrast
Another important aspect of graphic design essentials is knowing how to create contrast in your image. Graphic designers will create contrast with dark vs light composition or contemporary vs traditional design. The contrast of a curated image will help to make your image much more attractive and assist your audience in digesting the information.
Check out this article on how to create an infographic, which covers similar principles for mastering graphic design and using contrast to capture attention.
Formatting
Formatting refers to how you space out the components of an image: the title, the images, text boxes, and so on. This is quite important in learning the graphic design essentials, as it will become the basis of creating an attractive image.
If your components are too clustered together, then it will only serve to confuse audiences as they have to put in work to decipher what you’re trying to communicate. Ensure your images have clear formatting, drawing attention to the areas that matter most.
2. Essential fonts for graphic designers
Graphic designers should have a few essential fonts in their arsenal in order to create captivating images that speak to audiences. In brand identity work, usually only 2-3 fonts are used, as this makes the brand easier for audiences to recognise.
Some popular fonts for graphic designers include:
Mulish
Mulish is a clean-cut modern font with a simplistic yet attractive design.
Playfair Display
This font is quite romantic and traditional. It looks great for brands seeking to establish a personal connection with their audience.
Anton
This bold font demands the audience’s attention; it’s impactful and works great when communicating messages in a few words.
If you are interested in finding out where you can get these essential fonts for graphic design, be sure to check out the video below.
3. Essential books for graphic designers
If you’re interested in beginning a career in graphic design or simply want to develop your graphic design skills, there is a range of books available to help get you started. Check out three of our favourites below.
1. Graphic Design School: A Foundation Course for Graphic Designers Working in Print, Moving Image and Digital Media
This book is regarded as the industry standard for teaching the fundamentals of graphic design. It is useful for both students and small businesses who want to learn the trade. The book contains two main sections: principles and practice. It explores the themes of graphic design essentials such as composition, hierarchy, layout, typography, grid structure, colour, etc.
2. Graphic Design Essentials: With Adobe Software
For those working on Adobe Software, this is a great book to learn the graphic design essentials. This book pushes you to put your knowledge into practice with its step-by-step instructions. It covers the fundamentals of graphic design, including editing, colour theory and design apps.
At the end of this book, you will be well-versed in the graphic design essentials.
3. Graphic Design Essentials: Skills, Software and Creative Solutions
This book was written by leading graphic designer Joyce Macario. It is a great book for learning the essential skills you need to put your creative ideas into practice. It explores graphic design across multiple software programmes such as Photoshop, Illustrator, and InDesign.
4. Graphic design essentials course
Some of us learn differently, and that’s perfectly OK. If you think you are more of a visual learner, be sure to check out this free graphic design essentials course from Gareth David Studio.
Beginners Guide to Graphic Design | 45 Episode FREE Series
5. Essential graphic design software
Graphic design can be done in various software programmes, but if you are a beginner to the craft, it’s best to focus on mastering one before overwhelming yourself with others.
Some of the most popular graphic design software includes:
Adobe Photoshop
Adobe Illustrator
Adobe InDesign
After Effects
PhotoADKing
If you are seeking to master the graphic design essentials, it is best to learn by doing: put your knowledge into practice, because you don’t know what you’re capable of until you try.
Advanced Graphic Design Techniques:
Quantify the impact: Instead of just stating the importance of UX/UI principles, back it up with statistics. For example, mention that 88% of users abandon websites with poor design (source: NN Group).
Showcase real-world examples: Use case studies or client examples to illustrate how motion graphics, 3D design, etc., have been used effectively in different industries.
Go beyond trends: While mentioning trends is good, delve deeper into their potential impact on the future of graphic design. Discuss how AI might revolutionize design workflows, or how AR could change the way we interact with information.
Technical insights: Provide specific technical tips and resources for each advanced technique. This could include software recommendations, tutorials, or industry best practices.
Latest Trends and Future Predictions:
Integrate data: Include relevant data and statistics to support your predictions about the future of graphic design. For example, mention the projected growth of the AR/VR market or the increasing adoption of AI in design tools.
Expert opinions: Quote industry experts or thought leaders to add credibility and weight to your predictions.
Visual storytelling: Use infographics, charts, or other visual elements to make your predictions more engaging and memorable.
Psychology of Design:
Deeper dive into color theory: Discuss the cultural and historical context of different colors, and how they can be used to evoke specific emotions in different audiences.
Shape psychology: Provide more specific examples of how different shapes can be used to convey different messages. For example, triangles can represent power or stability, while circles can symbolize harmony or unity.
Data-driven insights: Support your claims about the psychology of design with research studies and data analysis.
Accessibility in Design:
Provide concrete guidelines: Offer specific tips and best practices for designing for accessibility, such as using appropriate color contrast ratios or providing alternative text for images (a contrast-ratio sketch follows this list).
Showcase inclusive design examples: Highlight real-world examples of how graphic design can be made accessible to people with disabilities.
Resources and tools: Include a list of resources and tools that designers can use to create accessible designs.
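To make the contrast-ratio guideline concrete, here is a minimal Python sketch of the WCAG 2.x relative-luminance and contrast-ratio formulas; the two hex colours at the bottom are arbitrary examples rather than values taken from this guide.

```python
def channel_to_linear(c8):
    # Convert an 8-bit sRGB channel to linear light (WCAG 2.x formula).
    c = c8 / 255.0
    return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

def relative_luminance(hex_colour):
    # hex_colour looks like "#1a2b3c"
    h = hex_colour.lstrip("#")
    r, g, b = (channel_to_linear(int(h[i:i + 2], 16)) for i in (0, 2, 4))
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg, bg):
    # Ratio of the lighter to the darker luminance, each offset by 0.05.
    l1, l2 = sorted((relative_luminance(fg), relative_luminance(bg)), reverse=True)
    return (l1 + 0.05) / (l2 + 0.05)

if __name__ == "__main__":
    ratio = contrast_ratio("#333333", "#ffffff")  # dark grey text on white
    print(f"contrast ratio: {ratio:.2f}:1")
    print("passes WCAG AA for body text:", ratio >= 4.5)
```

WCAG AA asks for at least 4.5:1 for body text and 3:1 for large text, so a quick check like this can be run over a whole palette before it ships.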
SEO and Digital Marketing Integration:
Quantify the impact of visuals: Share statistics that demonstrate the positive impact of good graphic design on SEO and digital marketing metrics, such as click-through rates or conversion rates.
SEO best practices: Provide specific SEO best practices for graphic designers, such as optimizing image file sizes and alt text (a small image-optimisation sketch follows this list).
Case studies and examples: Showcase examples of how graphic design has been used to improve SEO and digital marketing results.
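As one possible illustration of image-size optimisation, here is a small Python sketch using the Pillow library (an assumed dependency; any image tool works the same way). The file names, width limit and quality setting are made-up example values, not recommendations from this guide.

```python
from PIL import Image  # Pillow, assumed to be installed

def optimise_for_web(src_path, dst_path, max_width=1200, quality=80):
    """Resize an image to a web-friendly width and re-save it as a compressed JPEG."""
    img = Image.open(src_path)
    if img.width > max_width:
        new_height = round(img.height * max_width / img.width)
        img = img.resize((max_width, new_height))
    # Convert to RGB so PNGs with transparency can be saved as JPEG.
    img.convert("RGB").save(dst_path, "JPEG", quality=quality, optimize=True)

# Hypothetical usage: shrink a hero image before uploading it.
# optimise_for_web("hero.png", "hero.jpg")
```

Pairing a compressed image with descriptive alt text in the page markup covers both halves of the tip above.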
Overall:
Stronger visuals: Enhance the guide with high-quality visuals such as infographics, charts, and screenshots.
Global perspective: Consider including examples and statistics from different parts of the world to make the guide more relevant to a wider audience.
Call to action: Encourage readers to learn more about advanced graphic design techniques and resources.
Mastering the Craft: A Comprehensive Guide to Advanced Graphic Design
This guide delves into the intricate world of advanced graphic design, empowering you to transcend the fundamentals and push the boundaries of your creativity. Whether you’re a seasoned designer seeking to refine your skills or an eager beginner yearning to explore new horizons, this roadmap equips you with the knowledge and tools to elevate your designs to new heights.
Charting the Course:
UX/UI Principles: Become a master of user experience and interface design, crafting intuitive and visually appealing digital products that captivate and engage users.
Motion Graphics: Breathe life into your designs with the dynamic power of motion graphics, weaving engaging visual narratives that leave a lasting impression.
3D Design: Dive into the immersive realm of 3D design, creating realistic and captivating visuals that push the boundaries of perception.
Glimpsing the Future:
AI in Graphic Design: Uncover the transformative power of Artificial Intelligence, as it automates repetitive tasks, generates innovative design options, and redefines the creative landscape.
Minimalism’s Enduring Appeal: Embrace the ever-popular trend of minimalism, crafting clean and uncluttered designs that prioritize clarity and user experience.
Augmented Reality’s Impact: Explore the burgeoning world of Augmented Reality, blending digital elements with the real world to create interactive and immersive experiences.
The Psychology of Influence:
Harnessing the Power of Color: Delve into the fascinating world of color theory, understanding how different hues evoke distinct emotions and reactions, allowing you to tailor your designs for maximum impact.
The Language of Shapes: Discover the hidden meaning behind shapes, leveraging their inherent symbolism to convey specific messages and resonate with your audience on a deeper level.
Mastering Layout Composition: Guide the viewer’s eye with intentionality, ensuring your message is communicated effectively through strategic layout choices.
Designing for Everyone:
Championing Accessibility: Embrace the principles of inclusive design, crafting experiences that are usable by everyone, regardless of ability, ensuring your designs reach their full potential.
SEO & Digital Marketing Synergy:
Visuals that Convert: Leverage the power of visually compelling design to enhance SEO and digital marketing efforts, boosting user engagement, reducing bounce rates, and propelling your website to the top of search results.
By embracing these advanced techniques, trends, and design principles, you’ll not only refine your technical skills but also cultivate a deeper understanding of how design can influence, engage, and resonate with your audience. Remember, the journey to graphic design mastery is a continuous exploration, fuelled by curiosity, experimentation, and a relentless pursuit of excellence. So, embark on this adventure with confidence, and allow your creativity to flourish!
SEE ALSO: Animated Video Production | Essential Skills for Web Design.
| 2024-08-11T00:00:00 |
2024/08/11
|
https://profiletree.com/graphic-design-essentials/
|
[
{
"date": "2023/02/28",
"position": 97,
"query": "artificial intelligence graphic design"
}
] |
CMV: ChatGPT is a danger to employment : r/changemyview - Reddit
|
The heart of the internet
|
https://www.reddit.com
|
[] |
Also, even the higher payed jobs are being threatened by ChatGPT, which will reduce the salary of every single person, since ChatGPT can handle ...
|
Let me just say first that ChatGPT is very useful for giving advice to people, and automating certain tasks by doing the boring parts. It is also just fun to play with, just like your internet friend.
But the thing is that ChatGPT threatens to replace some of the jobs held by people with no schooling. They end up unemployed, and they cannot afford school, so they remain unemployed. Also, even the higher-paid jobs are being threatened by ChatGPT, which will reduce the salary of every single person, since ChatGPT can handle tasks that are worth so much money.
So, ChatGPT has more disadvantages than advantages, and will increase unemployment rate. Change my view.
| 2023-03-01T00:00:00 |
https://www.reddit.com/r/changemyview/comments/124lriw/cmv_chatgpt_is_a_danger_to_employment/
|
[
{
"date": "2023/03/28",
"position": 50,
"query": "ChatGPT employment impact"
},
{
"date": "2023/03/01",
"position": 1,
"query": "ChatGPT employment impact"
}
] |
|
ChatGPT Is Already Replacing Humans in the Workplace - CMS Wire
|
ChatGPT Is Coming for Your Jobs — and Already Has Succeeded
|
https://www.cmswire.com
|
[
"Dom Nicastro",
"About The Author"
] |
“The results of this survey shows that employers are looking to streamline some job responsibilities using ChatGPT.” Choosing ChatGPT over ...
|
The Gist
ChatGPT > humans? Resume Builder found that nearly half of US companies using ChatGPT have replaced workers with the AI chatbot since its debut in November.
ChatGPT, bring me fortune. Companies using ChatGPT have reported significant cost savings, with 99% of them stating that they have saved money.
Get to know ChatGPT — now. It is advisable for employees and job seekers to stay current with technological advancements such as ChatGPT, as employers are already expecting candidates to have experience with the AI chatbot.
ChatGPT will not replace humans in the workplace.
Oh, yes it will. And it has.
Resume Builder, a career site for creating resumes, released a survey Feb. 27 that found nearly half of companies using ChatGPT say it’s replaced workers. The survey, which polled 1,000 US business leaders, found that 48% of respondents say their companies have replaced workers with ChatGPT since the OpenAI generative artificial intelligence chatbot debuted in November.
The chatbot has set record numbers for user adoption and is already making an impact on customer experience and marketing professionals. And now, it’s apparently sending workers to the exit signs, according to this survey by Resume Builder.
More ChatGPT-Related Layoffs to Come?
And it’s not done. When asked if ChatGPT will lead to any workers being laid off by the end of 2023, 33% of business leaders say “definitely” while 26% say “probably,” according to the Resume Builder survey.
Further, within five years, 63% of business leaders say ChatGPT will “definitely” (32%) or “probably” (31%) lead to workers being laid off.
“Since this new technology is just ramping up in the workplace, workers need to surely be thinking of how it may affect the responsibilities of their current job,” Resume Builder Chief Career Advisor Stacie Haller said in a blog post. “The results of this survey shows that employers are looking to streamline some job responsibilities using ChatGPT.”
Choosing ChatGPT over humans has apparently helped save these businesses some significant cash. Overall, 99% of companies using ChatGPT say they’ve saved money, according to the survey:
48% have saved more than $50,000
11% have saved over $100,000
Nearly 25% have saved between $50,001 and $75,000
“The economic model for using ChatGPT is also evolving,” Haller said. “It will be interesting to see how this plays out in terms of savings as well as the reorg of certain jobs within the companies.”
Related Article: ChatGPT: What You Need to Know
Only Half of Companies Use ChatGPT? No Way
So it’s no surprise that Resume Builder also reports ChatGPT is used by nearly half of companies (49%) and that most (93%) plan to expand their use of the chatbot.
Or is it?
That 49% number actually seems low to us, based on what customer experience and marketing professionals have told us. For a technology with record numbers, we believe way more than half of companies are using it. One survey of 1,000 professionals likely doesn't tell the full picture.
How are business leaders using ChatGPT?
66% use it for writing code
58% use it for copywriting/content creation
57% for customer support
52% for creating summaries of meetings or documents
77% say it helps them write job descriptions, 66% draft interview requisitions and 65% respond to applicants.
Related Article: 6 Ways ChatGPT May Change Digital Customer Experience
Breakthrough Tech Like Generative AI: What's Your Next Move?
Ultimately, it’s not revolutionary or unprecedented to see or think about technology replacing humans in the workplace. Just drive through any major state highway or head to the local grocery store. Automation has ruled for decades.
What’s the best way to future-proof yourself as a customer experience and marketing leader during this revolution of generative AI? We’ll report more on that later, but for now, it’s probably a good bet to be good at using these tools.
Findings in Resume Builder's survey confirm this.
Around 92% of business leaders say having AI/chatbot experience is a plus in looking for a candidate, and 90% say it’s beneficial if the candidate has ChatGPT-specific experience. And here are some tools to get familiar with in the arena of AI and chatbots.
“Companies are already expecting candidates to have experience with ChatGPT,” Haller said. “The use of this technology is certainly something employees and candidates will want to stay current on. Job seekers certainly should add this skill if they have it to their resume, and employers should add it to their list of required or preferred skills if it’s something they expect from candidates. As seen throughout history, as technology evolves, workers’ skills need to evolve and change as well.”
| 2023-03-01T00:00:00 |
https://www.cmswire.com/customer-experience/chatgpt-is-already-replacing-humans-in-the-workplace/
|
[
{
"date": "2023/03/01",
"position": 96,
"query": "ChatGPT employment impact"
},
{
"date": "2023/03/01",
"position": 24,
"query": "workplace AI adoption"
},
{
"date": "2023/03/01",
"position": 8,
"query": "ChatGPT employment impact"
},
{
"date": "2023/03/01",
"position": 58,
"query": "AI workers"
},
{
"date": "2023/03/01",
"position": 70,
"query": "artificial intelligence layoffs"
}
] |
|
A.I. is making some journalists obsolete, media CEO says
|
A.I. is making some journalists obsolete, media CEO says
|
https://fortune.com
|
[
"Will Daniel"
] |
The CEO of one of the world's biggest media companies just said AI is making some journalists obsolete as he plans staff cuts.
|
It’s starting to look like ChatGPT and its A.I. ilk are going to change digital journalism the way the internet changed the printed newspaper. This week, Mathias Döpfner, the billionaire CEO of German media group Axel Springer—which owns German newspapers Bild and Welt as well as Politico and an 88% stake in Insider, among other properties—said he believes his company’s future lies in A.I. In an internal memo to employees seen by multiple outlets, he announced “significant” job cuts in “production, layout, proofreading, and administration” and warned journalists that the burgeoning tech might take their jobs.
“Artificial intelligence has the potential to make independent journalism better than it ever was—or simply replace it,” he wrote, the Guardian reported Tuesday.
Döpfner argued that A.I. will be able to aggregate information for breaking news stories faster than any human ever could, which will make many journalists’ positions obsolete. The news comes after Buzzfeed announced last month that it plans to begin using A.I. systems to create content including quizzes and short stories for subscribers, causing the company’s beaten-down stock to soar. A.I. has already been used for years in the fastest-moving breaking newsrooms, including both the Bloomberg and Reuters newswires. Döpfner’s memo did not touch on those systems, which do not generate paragraphs of text as ChatGPT does but rather locate and publish financial figures, such as earnings in quarterly reports.
Still, Döpfner explained that he doesn’t believe A.I. will replace the entire profession of journalism, noting that exclusive, quality content “remains irreplaceable and is going to become even more critical to success for publishers” and that journalists will still be needed to help discern the “true motives” of people. The CEO added that investigative journalism, original commentary, and exclusive stories should be the focus of the profession moving forward.
Despite his warnings about A.I., Döpfner clarified that editorial staff are not yet part of the proposed staff cuts, and he said he is being “guided by a sense of fairness and humanity” as he rightsizes the media company’s workforce for a new era.
“Understanding this change is essential to a publishing house’s future viability,” he wrote. “Only those who create the best original content will survive.”
The critics and calls for regulation
Nvidia CEO Jensen Huang argued last month that A.I. is at an “inflection point” after the release of OpenAI’s ChatGPT in November of last year. The buzzy new A.I. platform amassed 100 million users in just over two months, leading Microsoft to invest $10 billion in OpenAI and implement its tech into the Bing search engine.
After the move by Microsoft, companies worldwide now have a “a sense of urgency…to develop and deploy A.I. strategies,” according to Huang. And to his point, mentions of the word “A.I.” on corporate earnings calls surged 77% from a year ago this earnings season.
But critics worry about the dark side of the A.I. revolution. A.I. systems are still far from perfect and have a tendency to simply make stuff up—something researchers have labeled “hallucination”—and we’re already seeing the real-world effects.
A flood of article submissions created by ChatGPT that were riddled with errors forced the U.S. science fiction magazine Clarkesworld to stop accepting submissions last month; teachers across the country are scrambling to find new ways to detect A.I.-enabled cheating; and A.I.-generated voices have already captured positions from voice actors on YouTube and in some TV shows and movies.
Oh, and Allen & Overy, the seventh-largest law firm in the world, also just hired a new legal clerk, Harvey. He’s an A.I. bot that can help with due diligence, litigation, and compliance, and he’s going to be working in 43 offices simultaneously.
The prospect of job losses for Americans due to A.I. has some politicians, including Vermont Senator Bernie Sanders, arguing that the technology should be heavily taxed and regulated.
“If workers are going to be replaced by robots, as will be the case in many industries, we’re going to need to adapt tax and regulatory policies to assure that the change does not simply become an excuse for race-to-the-bottom profiteering by multinational corporations,” Sanders wrote in his new book It’s OK to Be Angry About Capitalism.
Even the creator of OpenAI, Sam Altman, warned last month that A.I. is not far away from “potentially scary” capabilities. Altman worries “how people of the future will view us” if A.I. is allowed to expand unchecked, arguing that regulation “will be critical” as the technology rapidly integrates into society. But he noted that Pandora is already out of the box—the world will have to learn to “co-evolve.”
| 2023-03-01T00:00:00 |
2023/03/01
|
https://fortune.com/europe/2023/03/01/will-ai-robots-take-jobs-journalism-axel-springer-ceo-mathias-dopfner/
|
[
{
"date": "2023/03/01",
"position": 38,
"query": "AI job losses"
},
{
"date": "2023/03/01",
"position": 54,
"query": "AI journalism"
}
] |
Bridgewater Exits Ray Dalio Era With Bets on AI, Job Cuts ...
|
Bridgewater Exits Ray Dalio Era With Hedge Fund Overhaul, Bets on AI and Job Cuts
|
https://www.bloomberg.com
|
[
"Erik Schatzker"
] |
The overhaul began even before Dalio handed off control five months ago, with not-so-subtle tweaks to the infamously odd culture he nurtured.
|
Bridgewater Associates was bound to be different once billionaire founder Ray Dalio no longer commanded the world’s largest hedge fund. Very different, it turns out.
The overhaul began even before Dalio handed off control five months ago, with not-so-subtle tweaks to the infamously odd culture he nurtured. Now, the management team he left in charge, led by Chief Executive Officer Nir Bar Dea, is adopting an ambitious strategy to boost returns, increase profitability and develop new sources of revenue — in what amounts to the biggest shakeup in four decades.
| 2023-03-01T00:00:00 |
2023/03/01
|
https://www.bloomberg.com/news/articles/2023-03-01/bridgewater-exits-dalio-era-with-hedge-fund-overhaul-bets-on-ai-sustainability
|
[
{
"date": "2023/03/01",
"position": 41,
"query": "AI job losses"
}
] |
Future with AI | Jijnes Patel's Blog
|
Future with AI
|
https://jijnes.com
|
[] |
There is so much discussion about how ChatGPT can be used for everything. Concerns about job losses. Fear about how AI can be used for nefarious reasons. The ...
|
ChatGPT and LLMs have taken hold in global news. There is so much discussion about how ChatGPT can be used for everything. Concerns about job losses. Fear about how AI can be used for nefarious reasons. The need for regulation to control bad actors. Every company is pivoting to be an “AI” company. While there is a lot of hype, we’re still in a nascent phase of AI. Much will be thrown away, but I’m looking forward to a few particular future possibilities.
Adaptive Workflows. Today, we rely heavily on workflow discovery or code-based extensions of SaaS products to match the jobs we’re trying to execute. GPT-4 and OpenAI have demonstrated the ability to understand APIs and coding languages and to generate new code on the fly. Imagine dynamically adjusting a website for accessibility, adding a button that executes a common workflow with one click, or integrating data sources on the fly.
Compression. For SaaS companies, data is essential. Terabytes to petabytes of data are eventually captured and stored. Some of it is analyzed and rolled up for future use; a lot more sits there idle. LLMs have demonstrated that you could compress datasets into gigabytes, maybe smaller. While we’re in an age where running LLMs on your laptop or phone cannot scale, we’re already at a place where the data can fit on portable devices. An added bonus is that the compressed data allows companies to remove potentially privacy-related data.
Probabilistic. Hallucinations are a concern for LLMs. Most people expect computers to provide exact answers. Given the statistical nature of models, there are going to be moments where LLMs get it wrong, really, really wrong. In reality, for most things, you don’t need exactness. You want something in the ballpark. In the best case, you have the exact thing you want. In the worst case, the output doesn’t help you, the human. Now, in medical and financial scenarios, guardrails will be necessary. For personalization or experimentation scenarios, good enough is just fine. Too much time is spent chasing 100% accuracy when it doesn’t matter.
Fine-tuning. Training an LLM model like GPT-4 is a time-consuming and expensive process. We’ll likely have foundations of models that individuals and companies can use as a base to fine-tune with specific datasets. This allows the user to develop a custom model at a much cheaper price with a broad range of capabilities.
Local. Today, LLMs consume GPUs in the cloud to train and infer results. As techniques improve, computing becomes more powerful, and hardware becomes cheaper, we will be able to run advanced models like LLMs on our portable devices. Latency goes away. Usage becomes even more conversational and spontaneous. A true copilot.
Inverted Personalization. With the possibilities listed, what if instead of each company offering some personalization service for its products, I have my own model that personalizes everything for me? What if that model can access my local data to determine the next best movie for me? Regardless of the streaming service I use. What if that model finds a medical specialist based on my medical history?
A glimmer of this happens on my iPhone and CarPlay. Depending on the time of day, Apple Maps on CarPlay will show me a potential route I could be taking. It could be going to the grocery store on Sunday. The path to drop off my kids on the weekday morning. It’s not 100% right, but more often than not, it presents the correct info. I don’t need the route, but I do appreciate the time it will take to get there due to potential traffic. I’d love to own the model making recommendations to me.
The future is going to be wild!
| 2023-03-01T00:00:00 |
2023/03/01
|
https://jijnes.com/posts/future-with-ai/
|
[
{
"date": "2023/03/01",
"position": 61,
"query": "AI job losses"
}
] |
We are the AI proletariat in the age of ChatGPT and Midjourney
|
We are the AI proletariat in the age of ChatGPT and Midjourney
|
https://ckxpress.com
|
[] |
Advanced countries have been discussing and even attempting to implement universal basic income (UBI). With AI developing rapidly, the day when global ...
|
Nowadays, if you don’t write about ChatGPT, it’s like you’re not keeping up with trends and algorithms; if you don’t write about AI, you’re almost embarrassed to call yourself a tech writer. The last time I wrote about AI was in 2020, not counting the human-powered AI. I didn’t jump on the bandwagon not only because many people have already written about it and I don’t have enough deep understanding to offer more insights, but also because I care more about improving production relationships rather than increasing productivity.
Let’s start with a thought experiment: If you had to choose one of two paths to ensure that all humans have enough to eat and wear, would you choose to increase overall economic growth or reduce wealth inequality?
In 1985, Deng Xiaoping said that “some people should get rich first,” implying the choice of the first path. Thirty-six years later, in 2021, Xi Jinping proposed shared prosperity, emphasizing the second path. After the failure of international communist experiments, most people advocated for increasing overall economic growth and then taking care of the poor through taxation and social welfare, reasoning that it is better to be poor in a developed society than middle class in a stagnant economy.
I don’t mean to say that the above thinking is wrong, but what I want to point out is that the logic is based on the premise that the global supply of food and other resources is slightly higher or lower than the overall demand. What if technology has advanced to the point where total productivity far exceeds the needs of the global population? Then we can reasonably infer that even if productivity no longer grows rapidly, as long as resources are distributed more fairly, or more precisely, more justly, it is already enough to support the world’s seven billion people.
Advanced countries have been discussing and even attempting to implement universal basic income (UBI). With AI developing rapidly, the day when global productivity far exceeds overall demand may not have arrived yet, but it is not far off. The key issue is, by then, how concentrated the world’s wealth will be and how widespread poverty will be.
AI boosts production efficiency
Throughout history, mastering the means of production has ensured an unbeatable position in the production relationships. The core means of production, besides the constant factor of land, have evolved from steam engines during the Industrial Revolution to oil, then modern information technology, and now data. AI is the fusion of information technology and data, using the former as the machine and the latter as fuel. As a result, data is often considered the oil of the AI era.
Although AI literally means artificial intelligence, it is more strictly defined as machine learning, which is a method that allows machines to discover underlying logic by processing large amounts of data.
As long as users clearly define their needs, programmers can develop software to solve the problem. Writing code, in essence, is an “IFTTT” (If This Then That) process. If a solution cannot be developed, it is often not due to the programmer’s technical abilities, but rather the user’s inability to clearly define their needs.
“Find the dogs in these 10,000 photos,” a request that a boss may think is very clear, is not at all clear to a computer because there is no definition of what a dog is. The boss may not be an ordinary person, but they are simply unable to clearly explain how to identify a dog; subordinates rely on their personal abilities to make judgments. This discernment is the “intelligence” of humans.
When we were young, we learned to distinguish between humans and animals and between cats and dogs. Although we were more or less taught by our parents and teachers, we had to use our own intelligence to find the common features of all cats and dogs, because their descriptions, no matter how clear, were far from sufficient for a non-intelligent computer to distinguish between cats and dogs using an “IFTTT” approach.
For tasks where the requirements cannot be clearly articulated, current technology can only handle them through machine learning. In the previous example, this would involve providing a large number of examples for the machine to learn how to identify a dog, which is the so-called “artificial intelligence.” The required materials are the dog photos we upload to Instagram every day, labeled with #dog, commented with “Cute dog 🐶,” and identified through reCAPTCHA.
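A toy sketch of the contrast drawn above between explicit “IFTTT” rules and learning from labelled examples. The two numbers per “photo” are invented stand-ins for real image features, and the nearest-centroid learner is only the simplest possible illustration of the idea, not how production classifiers actually work.

```python
# Rule-based vs. learned classification, in miniature.
# Each "photo" is a made-up pair of features: (ear_length, snout_length).

def rule_based(photo):
    # "IFTTT" style: only works if someone can write the rule down explicitly.
    ear_length, snout_length = photo
    return "dog" if snout_length > 3.0 else "cat"

def train_nearest_centroid(examples):
    # "Machine learning" style: infer the decision from labelled examples instead.
    centroids = {}
    for label in {lbl for _, lbl in examples}:
        pts = [x for x, lbl in examples if lbl == label]
        centroids[label] = tuple(sum(dim) / len(pts) for dim in zip(*pts))
    return centroids

def predict(centroids, photo):
    def dist(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(centroids, key=lambda label: dist(centroids[label], photo))

labelled = [((2.0, 2.5), "cat"), ((2.2, 2.0), "cat"),
            ((6.0, 8.0), "dog"), ((5.5, 7.0), "dog")]
model = train_nearest_centroid(labelled)
print(rule_based((5.8, 7.5)))         # -> "dog", because a human wrote the rule
print(predict(model, (5.8, 7.5)))     # -> "dog", because the examples implied it
```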
The rapid development of AI in recent years is the result of big data combined with exponential improvements in chips. Only by mastering vast amounts of data and computing power can machines be trained to perform specific tasks, such as playing Go or the currently popular automated chatting.
Production Relations: Planting Melons Yields Melons, Assembling iPhones Yields Nothing
Production relations are a concept in Marxist theory. Marx believed that in order to live, humans must interact with society, producing, purchasing food, goods, and services. Production relations broadly refer to what tools an individual possesses, what role they play, and how much distribution they receive in the production process.
In layman’s terms, you work day and night without stopping, earning barely enough wages to make a living, while your supervisor directs you and earns twice as much as you do – this is production relations. The supervisor’s boss is the CEO, earning a hundred times your wage. When the company needs to cut costs, they lay off you and your supervisor instead of cutting their own salary, which drives up stock prices and earns them a hefty bonus – this is also production relations. The company’s original founder who came up with the idea and filed a patent is long retired and continues to earn even more than the CEO – this is yet another aspect of production relations.
Foxconn workers assemble iPhones day and night but do not own Apple stock, possess any technology patents, control iPhone production methods, or even afford to buy an iPhone. Planting melons yields melons, assembling iPhones does not yield iPhones – this is a typical representation of modern production relations.
Parents have always urged us to study hard and excel. Analyzing from the perspective of Marxist theory, this is because studying hard helps us master the means of production, occupy advantageous positions in production relations, and avoid being replaced by technology.
Today, we realize that our parents have led us astray. After twenty years of hard study, we find that we can simply press a button and have AI do our work. Knowing this, we might as well have become construction workers, who at least do not worry about being replaced by AI for the time being.
Blockchain Improves Production Relations
AI aims to enhance production efficiency, while blockchain focuses more on improving production relations.
In German, production relations are originally referred to as Produktionsverhältnisse. Verhältnis in this context can mean both relation and ratio or proportion, which coincides with the strengths of blockchain technology. When used in DAOs (Decentralized Autonomous Organizations), blockchain empowers stakeholders, allowing everyone to participate in the governance and distribution of benefits within a community.
We can try to imagine how blockchain could intervene in ChatGPT’s production relations. As the Chinese saying goes, “All articles in the world are one big copy.” Ancient people foresaw ChatGPT many years ago and summarized its basic principles in a few words, which reflects great wisdom. When we marvel at the wonderful articles written by ChatGPT, it is fed by all the articles each of us has written in the past.
Of course, even if you are Jin Yong, you cannot claim that ChatGPT’s stories are inherited from your works, because ChatGPT truly integrates the essence of various sources, and its learning materials far exceed what any individual has written in their lifetime. Since everyone’s proportion is too small and the process involves learning, digestion, and output, ChatGPT does not need to split the bill with anyone, regardless of whether the machine learning material comes from user data or public content.
However, even if a drop of water in the vast ocean has an infinitely small proportion, the ocean is ultimately made up of these droplets. Whether we are aware of it or not, we constantly train ChatGPT every day by typing on our keyboards and clicking our mice. If blockchain can enable us to own our works, help trace their origins, and handle extremely small “nano-transactions,” there is hope to record individuals’ stakes in this production tool. Thus, when ChatGPT generates huge revenues without a master (aka today), we will not play the role of farmers and workers who stop working in the AI era’s production relations.
This not only applies to ChatGPT but can also be applied to AI-generated graphics and any other machine learning model. Midjourney can instantly create artworks that might take an artist a week to complete because it references a vast number of historical images. Some artists despise tools like Midjourney and even initiate boycotts against their users, including myself, who occasionally uses them to create weekly report covers. On one hand, I can understand the pain of traditional artists; on the other hand, AI not only saves costs but also provides services that traditional artists cannot offer. In this regard, even if I am willing to bear the “AI royalties” behind machine learning works, current AI does not provide such options, but blockchain, which can locate origins, record ownership, and execute trivial transactions, has the potential to move in this direction.
The most in-depth discussion of similar ideas can be found in the book “Radical Markets,” which proposes “data as labor” and advocates for users to form “data guilds” to fight for rights and interests from AI companies. As the title suggests, it is a very radical approach.
99.99% of humanity will become the AI proletariat
As technology advances, ChatGPT and Midjourney are gaining popularity, while cryptocurrencies and NFTs are in a bear market. Some quick-witted netizens have made memes mocking blockchain technology. Bill Gates recently said, “AI is the big one. I don’t think Web3 was that big or that metaverse stuff alone was revolutionary but AI is quite revolutionary.”
If you say that the above example of how blockchain can improve production relations is not concrete enough, I would readily admit that while AI’s impact on work efficiency is increasing by tens, hundreds, or even thousands of times, blockchain’s role in improving production relations currently only has some basic concepts and directions.
However, this is not a reason for society to prioritize AI over blockchain; on the contrary, the significant improvement in production efficiency, which far exceeds the improvement in production relations, confirms the urgent need for society to devote more resources to the latter. This is also the underlying reason why I have always been fascinated by blockchain but rarely write about AI.
There is no doubt that AI greatly improves work efficiency, but if we cannot improve production relations in the AI era, we may face the most severe oligopoly in history, with most of the world’s population, including intellectuals, becoming the AI proletariat.
| 2023-03-01T00:00:00 |
2023/03/01
|
https://ckxpress.com/en/ai-vs-blockchain/
|
[
{
"date": "2023/03/01",
"position": 12,
"query": "universal basic income AI"
}
] |
ChatGPT wouldn't exist without Canadian AI pioneers. Why ...
|
ChatGPT wouldn’t exist without Canadian AI pioneers. Why one fears for the future
|
https://globalnews.ca
|
[
"Kathryn Mannie",
"Global News",
"Published March",
"Min Read",
".Wp-Block-Co-Authors-Plus-Coauthors.Is-Layout-Flow",
"Class",
"Wp-Block-Co-Authors-Plus",
"Display Inline",
".Wp-Block-Co-Authors-Plus-Avatar",
"Where Img"
] |
... artificial intelligence (AI). For many, what ... In 2020, Andrew Yang campaigned for the U.S. presidency on a promise to institute a universal basic income ...
|
When ChatGPT was released late last year, people around the world suddenly awoke to the major advancements going on in the world of artificial intelligence (AI). For many, what once seemed like a science fiction fantasy was now reality.
In truth, the technology behind the groundbreaking chatbot had been brewing behind the scenes in research labs and major tech companies for years. But refined and released in its most accessible form yet, ChatGPT stands to usher in a transformational age of AI adoption.
ChatGPT, and other generative AIs like DALL-E, which can create original text and images from a simple prompt, won’t just transform education. It will reshape the way people conduct business, create art and do research.
Commentators have likened what’s coming to the next Industrial Revolution: one in which the role of humans may radically change.
While ChatGPT and DALL-E are both products of OpenAI, an American research company, other Silicon Valley giants have been moving quickly to show they’re capable of similar technology.
With names like OpenAI, Microsoft, Google, Meta and even Baidu capturing international headlines for their generative AI offerings, it’s easy to forget that the foundational principles upon which these technologies rest were developed in large part by Canadian scientists.
OpenAI is not a Canadian company, but perhaps it should have been.
Three men are lauded as the godfathers of AI, and their work has almost certainly touched your life. Two of them are Canadian: Yoshua Bengio of the Université de Montréal and Geoffrey Hinton of the University of Toronto. The third, Yann LeCun, is French, but some of his most groundbreaking research was done at Bell Labs and U of T.
In fact, the chief science officer and co-founder of Open AI, Ilya Sutskever, was educated at U of T and was a PhD student of Hinton’s.
As for Bengio, he’s the most cited computer scientist in the world. When asked if he could draw a direct line from his work to ChatGPT he said, point-blank, “Yeah, definitely.”
Photo: Computer science professor Yoshua Bengio poses at his home in Montreal, Saturday, Nov. 19, 2016. THE CANADIAN PRESS/Graham Hughes
It’s clear that Canada has some of the best AI minds in the world, and yet we lag behind in commercializing our greatest research achievements. Global News sat down with Bengio and leaders in the AI industry to understand why, and what’s in store for Canada’s future.
Putting economic considerations aside, how will AI more broadly impact the social and political fabric of Canada and the world? The best minds agree this is only the beginning. For Bengio, it’s not a matter of if computers will reach human-level intelligence, but rather when. And when such a technology is released, will it serve the collective good?
The godfather of AI has some warnings.
How Canada shaped the world of AI
When it comes to modern advancements in AI, particularly what is known as “deep learning,” Canada’s fingerprints are everywhere. The story of how began decades ago, and the story of why begins with the human mind.
Bengio told Global News he was inspired to research AI and neural networks to understand the machine of the human brain, based on the belief that the principles underlying human intelligence could be relatively simple, like the laws of physics, and ultimately, reproducible.
“When the whole idea of neural network research was very marginal, I got excited about this idea that we could both understand our own intelligence and build machines that take advantage of these principles,” Bengio said.
And the field of deep learning does just that — it uses principles we know about our own cognition to develop smarter, more efficient AIs. This cutting-edge research uses neural networks, a series of algorithms, to mimic the learning process of humans.
In a neural network, there are many computing “nodes,” loosely modelled on the brain’s own neurons, that influence each other through weighted connections. As input data passes through the nodes, those weights and biases determine what the final output value should be, and can be used to fine-tune the model to get more optimal answers.
Deep learning refers to when there are many layers of nodes in a neural network; the more layers, the more complex the model, and the more internal “learning” that’s going on. Training a simple machine learning model requires a good deal of human intervention, but deep learning systems are increasingly able to learn on their own.
As such, the applications of deep learning could be virtually endless and aren’t necessarily constrained by the limits of human creativity and knowledge. Already, deep learning methods are being used to answer open-ended questions that humans struggle with, like what songs to recommend to a music listener and how best to efficiently run a city’s power grid.
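As a rough illustration of the nodes and weighted connections described above, here is a minimal Python sketch of a forward pass through a tiny two-layer network. The weights, biases and inputs are made-up numbers; real deep learning frameworks do the same arithmetic at vastly larger scale.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def layer(inputs, weights, biases):
    # Each output node sums its weighted inputs plus a bias, then applies
    # a non-linearity: the "weighted connections" between nodes.
    return [sigmoid(sum(w * x for w, x in zip(node_w, inputs)) + b)
            for node_w, b in zip(weights, biases)]

# Made-up weights for a 2-input -> 3-hidden -> 1-output network.
hidden_w = [[0.5, -0.2], [0.1, 0.8], [-0.3, 0.4]]
hidden_b = [0.0, 0.1, -0.1]
out_w = [[1.2, -0.7, 0.3]]
out_b = [0.05]

x = [0.9, 0.2]                        # one input example
hidden = layer(x, hidden_w, hidden_b)
output = layer(hidden, out_w, out_b)
print(output)                         # a single value between 0 and 1
```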
Video: How AI is being used to predict breast cancer chemotherapy benefit
For their contributions to deep learning, Bengio, Hinton and LeCun were awarded the Turing Award, popularly known as the Nobel Prize of computing. The Association for Computing Machinery (ACM), which bestows the award, noted that the trio’s foundational research is used by billions today, essentially anyone who uses a smartphone.
“I think over the next many years when people write books about the history of neural networks, which will be the history of AI, there will be huge sections dedicated to the people in Canada and what they were doing,” said Nick Frosst, co-founder of Cohere, a natural language processing (NLP) company based in Toronto that is quickly drawing comparisons to OpenAI.
NLP is a subsection of AI that works to allow computers to understand, analyze and generate language. While ChatGPT uses NLP methods to interact conversationally with users, Cohere offers its language model to enterprises to tackle business problems.
Photo: The co-founders of Cohere. From left to right, Ivan Zhang, Nick Frosst and Aidan Gomez, the CEO. Cohere
Frosst says Canada’s research contributions to developing AI have been “outsized.”
“I mean, having Yoshua Bengio and Geoffrey Hinton here alone emphasizes our impact on the world.”
Many AI researchers had to be attracted to Canada as a place to do their work, however. Hinton immigrated to Canada from the U.K., where he comes from a family of intellectuals, including mathematician George Boole and surveyor George Everest (yes, of Mount Everest fame). Meanwhile, Bengio was born in Paris to Moroccan immigrants, though his family moved to Montréal when he was a child.
Early collaboration between the Canadian government and academia was key to putting AI on the national agenda, and it allowed Canadian universities to be some of the first to invest in machine learning research.
When the Canadian Institute for Advanced Research (CIFAR) was founded in 1982, the first research program it ever undertook was in AI and robotics.
Hinton was hired by U of T in 1987, a year after he garnered fame for his work on backpropagation, an algorithm that is now standard in most neural networks today, which radically improved their efficiency.
Say a neural network was asked to identify an image of a dog but it predicted a cat instead. Backpropagation allows machine learning developers to calculate how much of the computer’s prediction was off so they can adjust the weights and biases of the network to get a better output the next time.
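A minimal numeric sketch of that adjustment step for a single weight; the numbers and learning rate are arbitrary, and backpropagation simply applies this idea to every weight in every layer of the network at once.

```python
# One gradient-descent step for a single weight w in y_hat = w * x,
# with squared-error loss: the "adjust the weights to reduce the error"
# step that backpropagation carries out across a whole network.

x, y_true = 2.0, 10.0     # one training example (made-up numbers)
w = 1.0                   # current weight
learning_rate = 0.1

y_hat = w * x                     # forward pass: prediction = 2.0
error = y_hat - y_true            # how far off the prediction was
grad = 2 * error * x              # d(loss)/dw for loss = (y_hat - y_true)**2
w = w - learning_rate * grad      # nudge the weight against the gradient

print(w)   # 4.2: the weight moves toward 5.0, the value that fits this example
```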
In 1993, Bengio was hired by the Université de Montréal. A few years later, he authored a landmark paper that introduced word embeddings to neural networks, which had huge impacts on NLP. A word embedding is a learned representation for a word whereby words with similar meanings have similar representations. More simply put, he revolutionized a method to help computers understand the complex meanings behind words.
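A toy illustration of “words with similar meanings have similar representations”: cosine similarity over invented three-dimensional vectors. Real embeddings have hundreds of dimensions and are learned from data rather than written by hand.

```python
import math

# Invented 3-dimensional "embeddings"; real models learn much longer vectors,
# but the comparison works the same way.
embeddings = {
    "king":  [0.90, 0.80, 0.10],
    "queen": [0.88, 0.82, 0.15],
    "apple": [0.10, 0.20, 0.90],
}

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

print(cosine(embeddings["king"], embeddings["queen"]))  # close to 1: similar meaning
print(cosine(embeddings["king"], embeddings["apple"]))  # much lower: unrelated words
```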
In 2014, Bengio helped pioneer generative adversarial networks (GANs), a breakthrough method through which computers can generate original images, videos, music and other types of data by mimicking the data sets they were trained on. The technique has drawn comparisons to evolutionary biology.
As Bengio and Hinton gained renown as leaders in deep learning, computer science students and researchers became more attracted to work in Canada. It’s no surprise, then, that many of the world’s leading AI researchers have worked in Canada or studied under one of these men.
Regardless, deep learning was still seen as a speculative and unproven science for much of the history of the field — and the ACM actually credits Bengio, Hinton and LeCun for helping revive interest in it.
But really, these men were researching neural networks at the exact right time. Computer and graphics processing capabilities had been steadily growing for decades, and the widespread adoption of the internet meant researchers had both the means and the data to conduct experiments at an unprecedented scale.
According to Avi Goldfarb, chief data scientist at U of T’s Creative Destruction Lab, an incubator that has helped propel numerous AI startups, the turning point for the popularity of neural networks came in 2012.
That’s when Hinton, along with students Alex Krizhevsky and Sutskever (now Open AI’s chief science officer, as mentioned above), entered the ImageNet competition, an annual contest to see which AI model could correctly identify the most images from a vast database.
“They didn’t just win, but they blew the competition away” using deep learning methods, Goldfarb said. “And they did so much better than everybody else, that next year, almost everybody had adopted a version of their technology for their own algorithms.”
As the world began to wake up to the benefits of deep learning in AI, Canada instituted a Pan-Canadian AI Strategy in 2017 to take advantage of our leading status. The national program, coordinated by CIFAR, funded the creation of three new national AI institutes: the Alberta Machine Intelligence Institute (AMII) in Edmonton, the Vector Institute in Toronto and Mila in Montréal.
The state of business
In late February, a report from the Tony Blair Institute in the U.K. called for national investment to create a British general-purpose AI system — BritGPT, as the Guardian coined it.
“Given these AI systems will soon be foundational to all aspects of our society and economy, it would be a risk to our national security and economic competitiveness to become entirely dependent on external providers,” the report argues.
While Canada is in a much better position than the U.K. to commercialize machine learning — Frosst told Global News that Cohere would be able to create a chatbot like ChatGPT — the fears underlying the U.K. report are just as salient in Canada.
Our research is renowned globally, but on the business side, Canada has failed to use our talent and massive head start to create tangible economic benefits for Canadians.
As companies like Microsoft, Google and Meta scoop up market share, will there be any place left for competition from Canadian companies? And what is at stake if generative AI tools are mostly owned by foreign entities?
In Cohere, Canada has a real shot at competing with the Silicon Valley giants. In early February, Reuters reported Cohere was in talks to raise hundreds of millions of dollars in its next funding round, which Reuters says could value the startup at more than US$6 billion. Interest in the company has been booming since the release of ChatGPT, Frosst said.
In previous years, to attract that kind of funding and attention, Canadian AI startups had to move to the U.S. There wasn’t enough venture capital to keep them here.
“When we started the Creative Destruction Lab, our most successful AI company had to move to California to get investment,” Goldfarb said. “And that’s no longer the case. Our successful AI companies are able to stay here. That’s been an incredible change over the last 10 years.”
But even when Canadian AI ventures do stay in Canada, “they’re mostly getting funded by Americans,” Bengio observes.
“My impression is that the culture of innovation — and risk-taking that goes with it — isn’t nearly as developed here as it is in the U.S.,” Bengio said. “Venture capitalists here in Canada are not willing to take as much risk, to invest as much money, to look over a horizon that is this long.
“So in fact, many of the Canadian companies that succeed to raise capital are doing it because they’re, in a way, selling part of their ownership to American investors. In the past, it was worse, because then those companies had to move to the U.S. So at least things have been better.”
Bengio warns that if Canada continues to lag in commercializing AI, we may squander our current advantage.
“We need to do a better job at convincing Canadian industry to take this seriously. Because otherwise, what’s going to happen is our industry is going to lag so much in a few years that we’re going to lose our market shares.
“Companies that are being more innovative are going to be selling those products that we should be the ones building.”
[Video: Is Canada keeping up with automation? (7:59)]
Goldfarb says that compared with other countries, Canada has not been effective at converting our research into economic benefits for citizens.
“And that’s not an AI-specific problem. That’s Canada in general. We have great research but commercialization has been historically quite low,” Goldfarb said.
Canada has been the worst-performing advanced economy in the Organization for Economic Co-operation and Development (OECD) for decades. Last year, an OECD report projected that Canada’s slow growth could keep us in last place until 2060.
AI presents a huge opportunity for Canada to inject some vitality into our stagnating economy, and we have a lot of the ingredients needed to build a robust industry.
Canadian companies have a large pool of workers with machine learning training they can tap into, especially graduates coming from the University of Waterloo, U of T, McGill and the University of Alberta.
“It’s a great place for AI, there’s a lot of AI talent here,” Frosst said. “The majority of our employees are in Canada, although we’re spread around the world.”
Goldfarb also notes that Canada’s reputation as a place for AI innovation has attracted international investors to come here and fund startups.
Frosst said that while the initial seed investment for Cohere came from a Canadian firm, its subsequent rounds of funding have all been led by American investors.
“That’s just a function of the fact that America has 10 times the population of Canada. And so, if you’re looking at large entities and businesses for funding, you’re often going to end up speaking to American venture capital firms,” Frosst said. “But they’re not the only ones we speak to.”
Attracting foreign investors to Canada is preferable to having our most promising startups leave for another country, but questions remain about who will benefit most from our homegrown AI talent. With Hinton primarily working for Google and Sutskever at OpenAI, the argument could be made that it’s the U.S.
Still, Frosst and Goldfarb are optimistic that Canada can build a strong AI industry to compete with Silicon Valley.
Already, Toronto has the highest density of AI startups in the world. Canada as a whole is home to just under 1,000 AI startups, and in 2021, those companies raised a combined $1.5 billion in venture funding, CIFAR reported.
More than 200 master’s and PhD students graduate annually from Canada’s National AI Institutes, and data from Global Advantage Consulting Group found that Canada has produced the most AI patents per capita among G7 nations and China.
And it seems that, increasingly, Canadians and Canadian-trained tech workers are making the decision to stay and work in the country.
Frosst recalls of his time in university that “there was really a dream of California or bust, you know? Like, got to go down to the valley and make it.”
“I think that dream is less enticing to students as the years go on,” Frosst said. “In part, it’s because Canada is getting better. There’s more opportunity here, there’s more companies, wages are going up — it’s a better place to be a developer.”
AI that reasons
When it comes to ChatGPT, one thing that many computer scientists will say is that it’s remarkable, for sure, but the model isn’t introducing anything we didn’t already know about deep learning.
While ChatGPT isn’t necessarily pushing boundaries, researchers like Bengio working on fundamental problems certainly are. He says the evolution of AI is far from over.
“So ChatGPT, it’s very, very impressive. But it doesn’t reason the way humans do. It makes mistakes sometimes that a five-year-old wouldn’t make,” he said.
But that doesn’t mean that we can’t one day create an AI that is capable of reasoning. For Bengio, it’s just a matter of time.
“Human brains are machines,” he said. “There’s no reason to think we can’t build comparable machines.”
The idea of an artificial general intelligence (AGI), an AI system that can understand any intellectual task as well as a human, may seem like science fiction. But Bengio says we are already on the path to getting there.
“We’re going towards human-level intelligence and these large language models (like ChatGPT) are one of the elements on that path,” he said. “Now, they are missing a lot of ingredients, in particular, reasoning … including things like causal reasoning, understanding cause and effect and discovering causal relationships, but also reasoning the way humans do, by combining pieces of knowledge in a way that we can then explain.”
“Currently these models can’t do that,” Bengio said. “So my own research is about a next generation of deep learning system that would reason in a way that’s inspired by human reasoning and high-level cognition.”
With such technology on the horizon, Bengio is calling on the Canadian government to be prepared for how an AGI will impact not just the economy, but also the social and political landscapes of the world.
Currently, no AGI exists, but even with the AI technology we have now, people are understandably concerned about the future of work. White-collar workers like copywriters and business analysts could see their jobs radically reshaped in the coming years to accommodate AI tools.
Goldfarb sees us as living in the “between times”: after the discovery of AI’s potential and before its widespread adoption.
“With electricity, it took about 40 years from the patent of the lightbulb until half of American households were electrified,” he said. “For computing, similarly, from the first computers to the time it began really impacting the way people worked was, again, several decades.”
The reason is that it takes time to apply transformational technologies to their fullest extent. When the first computers were introduced, people couldn’t have predicted that they would one day lead to the creation of the internet, which in turn gave rise to unprecedented new industries.
“And so when we say we’re in between times now, it feels like the 1890s with electricity. We can see the technology is amazing. But we haven’t figured out how to make it useful at scale.”
As we go about applying AI in novel formats, we risk leaving humans in the lurch.
“I think you shouldn’t worry too much in the short term,” Bengio says, “but I think eventually, this is something that we all need to think about, in particular governments. Because there may be social transformations that are happening too fast, that are going to leave people jobless and in turmoil.”
“We need to change our education system, our social welfare system, and make sure people can shift easily to other jobs.
“I think our whole social fabric is threatened in some way. We can’t just think it’s going to be business as usual, we have to think ahead. Maybe we need to rethink completely the way our societies are organized to face those challenges.”
The idea that AI could lead to huge job losses that require government intervention to solve isn’t new. In 2020, Andrew Yang campaigned for the U.S. presidency on a promise to institute a universal basic income payment of US$1,000 per month, asserting that technological advancements in AI would leave a third of Americans without a job in the next decade.
But Bengio’s concerns about an AGI don’t just end with the job market and people’s livelihoods.
“What about the abuse of these powerful technologies? Can they be used, for example, by governments with ill intentions to control their people, to make sure they get re-elected? Can they be used as weapons, weapons of persuasion, or even weapons, period, on the battlefield?” he asks.
“The problem is, we live in a divided world. It’s not enough for the Canadian government to pass a law saying we can’t do this or we can’t do that with AI,” Bengio warns. “There is no world government that can legislate this kind of thing. And the economic system in which we are encourages companies, as we’re starting to see, to take more risk just to stay ahead. So how do we protect ourselves?”
After Bengio and Hinton won the Turing Award, they publicly called for an international agreement to regulate the use of AI in warfare, warning of the dangers of lethal, autonomous weapons.
But with technology this enticing and international politics as fractured as ever, who knows if even the traditional protocols of multilateral treaties will be enough to stop AI from being used for unethical purposes?
Risk analysts have identified AI as one of the largest threats facing humans today. Eurasia Group’s Top Risks report for 2023 called these technologies “weapons of mass disruption,” and warned they will “erode social trust, empower demagogues and authoritarians, and disrupt businesses and markets.”
Bengio says he knows even better AIs are coming, and there’s no doubt they can be applied to solve some of humanity’s biggest problems, but we can’t ignore how easy it would be for a country, rebel group or even an individual to leverage AI for evil.
“We should not also forget that this technology could be extremely useful and can help in the next decades to discover cures for major diseases. It may help us find important technological solutions to fight climate change,” Bengio said. “It’s a very difficult dilemma.”
“What’s inevitable is that the scientific progress will get there. What is not is what we decide to do with it.”
Eleven freedoms for free AI
https://elevenfreedoms.org
The traditional Four Freedoms of free software are no longer enough. Software and the world it exists in have changed in the decades since the free software movement began. Free software faces new threats, and free AI software is especially in danger.
Widespread computer networking was a great challenge to the movement starting more than two decades ago, one that still has not been adequately addressed. An entire category of software now exists that is superficially free under formal definitions derived from the Four Freedoms, but its users are not really free. The Four Freedoms are defeated by threats to freedom in software as a service, foisted contracts, and walled online communities.
The relevance and the scope of the Four Freedoms were already tested just by the existence and consequences of universal Internet access. Now it's even worse. Recent developments in machine learning technology, specifically generative models, exacerbate the existing gaps in software-related freedoms, and pose new threats. Entrenched interests, the same ones who would seek to destroy free software, are staking claims on AI now. They are capturing regulatory processes, raising legal and technological barriers against amateur and non-commercial participation across the entire field. It is already difficult, and if they have their way it will soon be illegal and practically impossible, to build state-of-the-art AI systems except in the service of a large corporation.
Well-intentioned public intellectuals who write on subjects like diversity, equity, privacy, and existential risks, are being co-opted to construct arguments for the supposed necessity of centralized control of machine learning, and for strict regulation to prohibit free experimentation. Now is the time, before the potential of AI is permanently stolen by large corporations, to launch the free AI movement.
Eleven freedoms
Most people look to the Free Software Foundation for the definition of "free software," and they define it using the Four Freedoms below. For listing here I've cut the wording down to just the freedoms themselves; in the document What is Free Software? on the GNU Web site, the list adds reasons and consequences to each freedom.
0 The freedom to run the program as you wish.
1 The freedom to study how the program works, and change it.
2 The freedom to redistribute copies.
3 The freedom to distribute copies of your modified versions to others.
The numbering from zero recognizes that the Zeroth Freedom, to run the program "as you wish," is an assumption implicit in all the others; it was first stated explicitly after the First, Second, and Third Freedoms had been defined and numbered.
The Four Freedoms are important for software in general and I think AI software should be free as I wish all software could be free. I won't explain the Four in detail here, referring readers instead to GNU's description.
I see seven additional freedoms that free AI software ought to have, beyond the original four of free software, for a total of eleven. What I'm calling the Eleven Freedoms describe the goals I think the free AI movement should have, and the goals I intend to promote myself in my own projects.
The Eleven Freedoms are numbered zero to ten to carry on the tradition. For consistency, I refer to "the program" in the new Freedoms, the same way the old ones did; but I am describing freedoms for an entire field or industry, not only computer programs but also data and the human institutions around both.
Most, maybe all, of my proposals could be applied to software in general; and maybe they should be. But I will leave expansion of the goals of the general "free software" movement to others. I am particularly interested here in machine learning and AI broadly interpreted, and I have such applications in mind with each of these. In my detailed discussion of each Freedom below, I will touch on AI-specific aspects of the new freedoms.
My formulation of these Eleven Freedoms may be controversial. Not everybody in free software will agree with me about them. The Four Freedoms were once controversial too, and still may be in some circles. The freedoms I list are meant to be morally right, not meant to be popular. I am here to tell you what I think, and what I wish others would think, but I am under no illusion that everyone will immediately join me.
My list is not meant to represent an existing consensus of what everybody already believes. I'm calling for changes here, not describing what already exists. But the changes I'm calling for are in the nature of a new movement. Free AI goes beyond free software. These are not revisions I think would be likely or advisable to apply to the published goals of existing organizations. In other words, I am not writing a patch request for the FSF's defining documents, nor anybody else's.
I am also not attempting to create rigorous definitions. On the contrary, the nebulosity of many of the important concepts here is one of the points I want to make and part of the reason that some of my freedoms need to be protected. Projects that fund or distribute free software often use the Four Freedoms as a bright-line test for what should or should not be included. You can run down the list and ask, "Are users allowed to run the program for any purpose? Yes. Are users allowed to study and change the program?" and so on, ending up with a yes or no answer to whether the software is "free." A bright-line test will not be possible with the Eleven Freedoms; there are too many grey areas, subjective judgment calls, and necessary compromises. These are aspirational ideals, not definitions.
The Eleven address a much broader scope than the Four. The Four Freedoms are specifically about software - they describe whether a "program" is free or not. In practice, they are usually even narrower, really applied only to whether the copyright license of a program is free or not. A major failing of the Four Freedoms, visible since the rise of the Net, comes about because having software be subject to a free copyright license is nowhere near enough to make sure that human beings will really be free in using that software.
Free software is especially vulnerable to what I call foisted contracts: obligations users are required to enter into to use a service or a Web site or to join a community, including terms of service, codes of conduct, employment contracts, and so on. Foisted contracts often purport to override the freedoms of free software, either directly or (more often) indirectly by touching on matters a software license does not and cannot address. They are not open to negotiation between an individual and a corporate entity beyond "take it or leave it" - but despite lacking that fundamental legal requirement for contract formation to occur, foisted contracts are treated as valid in practice.
If the software is subject to a free copyright license, but you can't use it freely because it is tied to a non-free service with a foisted contract that imposes on your freedom, then you don't really have freedom with respect to that software, no matter how much you are still allowed to copy it.
Organizations like the Free Software Foundation have good legal and philosophical reasons to concentrate on freedoms they can encode into a software copyright license; but we should not stop there. Artificial intelligence technology seems especially vulnerable to freedom-defeating efforts outside the scope of copyright licenses; so free AI must be free in a broader scope and a free AI movement must address the broader scope.
4 The freedom to run the program in isolation
When free software was a new idea and its freedoms were first defined, it was difficult or impossible for anyone to predict the future developments of the software world - and, in particular, the ways that free software would come under attack. In the subsequent decades, organized efforts against free software have revealed limitations of the original definition. Historical changes in software and its place in the world, especially the widespread public availability of the Internet, have also created new circumstances the original free software concept did not contemplate.
The widest avenue of attack on free software today is through dependencies. Widely used, generally accepted, best of breed free software programs now exist. Those who would attempt to destroy free software have not succeeded in destroying those programs, nor in directly making them non-free. But the form of attack that actually succeeds is the attack of making the freedom of a given program irrelevant through dependencies on non-free things. Relatively innocent developments in the culture of software engineering and software use have also had the side effect of creating dependencies that harm free software, even with nobody's conscious effort to do so.
You can run your free program - but to really do your actual work with it, you must also have an account with a service that puts you under foisted contractual obligations limiting your freedom. You can compile the source code of a program that is called "open source" - but you need to link to several hundred different libraries, each of which is individually described as "free" but the only practical way to get them all properly linked is to automate it with a package manager, dependent on a specific network server operated by a large corporation who will again bind you to contractual obligations. You can share your modifications to the software and benefit from those of others - but network effects mean there will be only one real community for doing so, and it turns out to be a Web-based social network that will demand (through its "code of conduct") your public support of an unrelated political identity and contributions to its activism as a condition of joining the community. These dependencies make software non-free in practice while leaving the software's copyright license, when considered in isolation, apparently still within the original Four Freedoms of free software. Today we can't, but we should be able to, use the software in isolation.
I have stated the Fourth Freedom of free AI software, which is the literal fifth freedom on the list because of the screwy numbering and the first of the new freedoms I want to introduce, as the freedom to run the program in isolation. That means without the program's being forcibly linked to other things. This freedom could also be called the freedom from dependencies, and most of the other new freedoms on my list can be seen as flowing from it.
Dependency as a technical term has a fairly narrow meaning: a dependency of a program is another program, or a library or similar, that you need in order to build or run the first program. That is one of the several kinds of dependencies we should be working to eliminate. Although software dependencies may be unavoidable sometimes, they could and should be reduced far below the level commonly seen in 2023. Compiling one application should not require first installing five large framework libraries, installing their dependencies recursively, and sorting out version number conflicts.
Especially in the (four freedoms) free software world where all these libraries are already licensed in ways that specifically permit third-party distribution anyway, the package for installing a program should take advantage of that. The package should normally be expected to include all libraries needed. The theoretical possibility of exercising your freedoms by spending many hours searching the Net for not-included dependency packages, is no substitute for being able to really exercise your freedoms in practice with reasonable amounts of effort.
Free software also needs to be independent of third-party services, and this point is especially relevant for the current generation of machine learning software. Many machine learning programs today are designed specifically to work with particular vendors' cloud computing platforms, either exclusively or just as a strong default. Then using the programs requires you to deal with those vendors and compromise your freedoms through contracts with them.
Dependency on third-party application programming interfaces (APIs) is another level of dependency, similar but not identical to dependency on a cloud platform. Maybe you can run the program itself on your own hardware, but the program does not really do important parts of its computation itself. This program is really an application-specific client layer for a general machine learning API provided by a commercial organization and in order to run it, you must enter your "API key" obtained from that organization on the condition of entering into a contract with them. That means the program, again, is non-free.
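As a minimal sketch of that pattern, with entirely hypothetical names rather than any real vendor's API, the "program" below is only a thin client: it will not even construct itself without a vendor-issued key, so the user's ability to run it hinges on a foisted contract and on servers someone else controls.

    # Sketch only; "VENDOR_API_KEY" and the client are hypothetical, not a real service.
    import os

    class RemoteModelClient:
        def __init__(self) -> None:
            key = os.environ.get("VENDOR_API_KEY")  # issued only under the vendor's terms of service
            if key is None:
                raise RuntimeError("No API key: the locally installed 'free' client can do nothing.")
            self.key = key

        def complete(self, prompt: str) -> str:
            # A real client would make an HTTPS request to the vendor's servers here;
            # none of the actual computation happens on hardware the user owns.
            return f"(answer computed remotely for: {prompt!r})"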
Even apart from the specific act of running the program, effectively using a program in practice may gratuitously require you to have a network connection at all. For example, it is necessary to have a network connection when the only useful documentation is not provided in the form of documents as such, but only as interactive dynamically-generated Web sites that cannot (legally under copyright law, nor in practice) be downloaded for local reading. A dependency on having a network connection is an obstacle to exercising your freedoms in many reasonable situations, and it may likely be extended to a dependency on a non-free or borderline Web browser, membership in a closed community, and so on.
The class of systems known as package managers seems especially threatening to the freedom of running programs in isolation because package managers make it easier, and in practice they make it necessary, to do just the opposite. They reduce the discouragement that ought to exist against developers adding dependencies. If many users can easily just run a package management command to pick up a few more libraries and automatically install them, then it is easy for developers to be misled into thinking all users can do so, and that it's okay to require a few more libraries.
An outright ban on package managers would probably be excessive, but it certainly seems that the use of package management is a smell from the point of view of the Fourth Freedom. Package managers should be viewed with suspicion, and generally discouraged. Through their implication of automatic version updates, package managers also have negative implications for the Eighth Freedom, described below.
Web sites of the kind sometimes known as "hubs" - Github is the prototype, but others specific to machine learning have recently become popular - are also threats to this freedom because they end up combining package management with social networking and all its hazards. As of 2023, such sites have universally been captured by external political interests, and often also by large corporations, that place obligations on participants; and the software ends up being designed in such a way as to require users to maintain accounts on the hub sites in order to use it. Eliminating dependencies means not only eliminating the software's direct technical dependencies on library linking, API calls, and so on, but also eliminating softer human requirements for support and documentation that could force the user to join and be bound by the rules of online communities.
Demanding human-to-human interaction as a condition of being able to run software creates an accessibility problem for anyone who has difficulty with human-to-human interaction, and such people may have even more need of access to AI than the rest of us. It also creates international and intercultural issues by privileging participants who are comfortable with the natural language and the majority cultural background of the relevant human community.
A free software program and especially a free AI software program should be packaged into a single file one can simply download or copy. That is what we commonly call a "tarball," although of course I do not mean to limit it to the specific file format of the Unix tar(1) utility. Once you possess the tarball you should be able to install the program on your computer, without needing anything else. Nothing beyond the basic operating system should need to be already installed; that basic operating system should not need to be one company or organization's product in particular; you should not need to form contractual obligations and agree to terms of service at any point in the process; you should not need to connect your computer to a network; and the documentation should be included. All the things being rejected here can be described as dependencies and as violations of the principle that software should be usable in isolation.
I will not give much bandwidth to speculative ideas about the "safety" of hypothetical future "artificial general intelligence" systems because I don't believe in them. I don't think today's generative models are on the path that could lead to such things. But anybody who did take those speculative ideas seriously would naturally think to mitigate the risks in the obvious way: by running experiments on air-gapped computers, that is, on hardware without a connection to the larger network. They should be allowed to do so, which entails the general freedom of running programs in isolation.
Present-day experiments like automatically executing Python code from generative model output, although quite different from "artificial general intelligence," also seem like they would be most sensibly run on air-gapped hardware to prevent unfortunate accidents, or at least inside a strong software sandbox without a network connection available from the inside. The experience of biologists in the last few years ought to be a reminder to everybody doing scientific work on entities even vaguely resembling life forms, that it's important to keep experiments well-contained in general. And air gaps and sandboxes for containing experimental software are impossible to build, or at least much harder to build successfully, when programs require network connections and network services to run at all. Thus the ability to run in isolation, although important for all software, is especially imperative for machine learning and AI.
5 The freedom to run the program on the hardware you own
Running the program on the hardware you own has two intended interpretations, both of which are important. First, it means running the program on hardware you own as opposed to hardware owned by someone else. That freedom is a subset of the previous freedom, running the program "in isolation," discussed above. It is important that you should not be dependent on an ongoing relationship with some other party who owns the hardware, just to run the program; and the freedom to run it on hardware you own yourself entails some technical points like having "read," not only "execute," access to the code.
The other interpretation, also important, of running a program "on the hardware you own" is that this freedom excludes unobtainable hardware, and even hardware that might theoretically be available to you but happens not to be the hardware you do own. Google's "TPU" computers, for instance, are not for sale on the open market. Most people who might want to use one, can only rent it and connect to it over the Net, paying a fee and entering into a contractual obligation with Google. If software, because of its hardware needs, requires you to buy cloud services from Google, then you immediately lose the freedom to run the software in isolation, and you become subject to Google policies which are likely to also limit your other freedoms. Software written for TPUs is not free.
What about software written to require specific hardware that you could buy, but have not? It is a fundamental principle of computer science that computers of a certain level of sophistication are all, in a way, equivalent in what they can do. Except for speed and memory considerations, a current PC, an old 68000 Mac, and even a Commodore 64 can all emulate each other, in both directions, because they all have a level of computational power that allows them to simulate arbitrarily complex logic circuits including the logic circuits in other computers. They can all answer the same questions.
Nearly all devices we now call "computers," including nearly all hypothetical future computers that might be built, fall into a category of machines called "Turing-complete," which have this universal computation property. The basic question of what operations a computer can perform will have the same answer for all Turing-complete machines. There is no such thing as a computation that can be done on one hardware platform and absolutely cannot be done on some other platform. In a carefully-defined theoretical sense, there is no such thing as software written to require specific hardware, because emulation exists.
However, the exceptions for speed and memory considerations, as well as a number of engineering issues, mean that Turing-completeness only has limited applicability to running software on different computers in real life. In practice it makes a difference how the software is written. We can speak of "portable" programs, meaning that they can easily run on many different platforms - different models of hardware but also different operating systems and different versions and configurations. We can also speak of programs that are not portable, those written with built-in assumptions the user cannot easily change and that make the software hard to run on any platform other than the specific platform for which it was tailored.
To the greatest extent possible, free software must be portable to many different hardware and operating system platforms and many different configurations. If it has built-in assumptions limiting it to specific platforms, especially if those assumptions are deliberately made for the purpose of limiting portability, and especially assumptions that force would-be users to buy specific commercial products, then it is not free.
For large-scale machine learning as of 2023, the practical consequence of the Fifth Freedom is that code ought to be written for local CPUs first, before generic GPUs; for generic GPUs before Nvidia's products in particular; and for those before anything "cloud"-based, such as Google TPUs. Similarly, free software ought to support files on the local filesystem first, before supporting files stored on network servers in general; and network servers in general before supporting the unique features of one cloud provider's "bucket" service.
There are obvious technical reasons this freedom cannot be absolute. You do need a computer of some sort to run software at all. You need resources like memory and CPU speed in certain quantities to be able to run software fast enough, and on large enough data, to be useful in any practical sense. Turing completeness is not the last word in compatibility. The quantitative performance advantages provided by a specific type of hardware may be so great that the difference between running on one hardware platform or another is the difference between the program being fit for purpose at all, or not. Then it seems like the program might as well require the hardware that makes it run well enough to be usable.
Requiring a certain level of general-purpose computer power to usefully run a program is reasonable and does not seem to infringe the freedom of running on the hardware you own. But a requirement for enough generic hardware is much different from requiring ownership of a specific commercial product, or even worse, an ongoing contractual relationship with a specific commercial service provider.
Supporting a specific proprietary hardware product when doing so is the main purpose of a given program makes some sense, too - such programs are essentially device drivers and often necessary. But allowing such a thing is dangerous because of the temptation to confuse "we don't want to code for generic hardware" with "this program's purpose is not relevant to generic hardware." I would prefer not to call software that really requires specific hardware "free," even if such non-free software may sometimes be necessary or useful to create. Similar considerations apply to software whose main purpose is to be a client for a specific online service.
If you do happen to have a computer with special features relevant to the purpose of a program that also runs on generic hardware, it is reasonable that you would want to be able to benefit from the proprietary features - GPU computation when you do have GPU hardware, even if it's proprietary - and it is not reasonable to demand that the developers of software should know about and code for every special feature of every hardware product as a condition of being allowed to code for one of them.
For software that does work on generic hardware but could work faster or better with the use of specific products, it seems reasonable to include optional support for them. But keeping it optional is important. Although software might automatically detect and use resources it can find, there is danger in anything where the software would treat the use of proprietary hardware or online services as the normal, expected, or default case. If a user without special hardware ends up getting less of what the software should be able to do on the hardware they do have, then that user is not really getting the freedom to use their own hardware.
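A minimal sketch in Python, assuming a PyTorch-style library, of what "optional but not default" can look like in practice: generic CPU hardware the user already owns is the normal path, and a proprietary accelerator is used only when the user explicitly asks for it and actually has one.

    # Sketch assuming PyTorch; the point is the default, not the specific library.
    import torch

    def pick_device(allow_gpu: bool = False) -> torch.device:
        # CPU is the default: every user owns one and no vendor relationship is required.
        if allow_gpu and torch.cuda.is_available():
            return torch.device("cuda")  # opt-in use of a proprietary accelerator
        return torch.device("cpu")

    model = torch.nn.Linear(16, 4).to(pick_device())       # free-by-default path
    faster = torch.nn.Linear(16, 4).to(pick_device(True))  # optional acceleration, if present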
There is danger in documentation in particular, which might present "How to run such-and-such software on your Foobar Cloud account" or "How to run such-and-such software with your BazQuux GPU" as if that were the only or the expected way of doing it, without the existence of any similarly helpful documentation for doing it the generic way. Documentation with proprietary assumptions built in might even be written by third parties, without the developers' involvement or responsibility; the operators of Foobar Cloud and the vendors of BazQuux GPUs have obvious economic incentives to create it.
Developers who wish to commit to keeping their AI systems free while including support for proprietary hardware and cloud platforms must therefore commit to making sure, on an ongoing basis, that standalone operation on user-owned generic hardware will be and will remain easy and functional enough to really use, and that it will be presented to users as a viable, preferable, and default way of doing things; to at least some extent positively excluding or deprecating proprietary and non-user-owned hardware; and to documenting or arranging for others to document the free option as viable, preferable, and default. Viable: you can use your own hardware. Preferable: it is better to use your own hardware. And default: you will use your own hardware unless you choose not to.
6 The freedom to run the program with the data it was designed for
The boundary between code and data is often blurry, and that blurriness can be another way for free software to come under attack. If a program is presented as "free," but in order to be useful the code requires data that is not free, then for practical purposes, the code isn't free either. There have been situations in which people or corporate entities have tried to benefit from free software while circumventing their obligations under the relevant licenses by releasing code without the necessary data to use it, and claiming the release obligations only applied to the code.
When both code and data are essential to software, both must be free for the software to be free. I have stated my new freedoms in terms of a "program" in order to echo the words of the original Four Freedoms; but I want to emphasize, and that is the purpose of this Sixth Freedom, that data needs to be included as well. Including data relieves us from needing to argue about exactly which parts of a system really are exactly code or exactly data. Are the weights in a neural network really software code, written in a language of numbers, with the numerics engine that does the matrix multiply serving the role of interpreter to execute that code? Are they just inactive data? Where is the boundary? That may be a complicated question - but the answer should not make a difference to the whole system being free.
The makers of the Tivo digital video recorder used an artificially-created boundary between code and data to attack their customers' freedom, specifically the First Freedom's right to modify software, when they sold devices with built-in firmware components (the Linux kernel, in particular) covered by the GNU GPL version 2; offered customers the source code, as required by that license; but designed the devices to detect and reject modified versions of the firmware. Getting the devices to accept new firmware would have required a key that was kept secret by the manufacturer. The key was argued to be data, not code, and therefore not covered by the GPL's requirement to disclose source code for software. That case resulted in new terms in the GPL version 3 specifically requiring the disclosure of any "Installation Information" like code-signing keys necessary to make modified code usable on consumer devices like DVRs.
Attacks on software freedom through restrictions placed on data rather than code happen in other situations, too. Computer games often test the limits with an "engine" that is code, and theoretically free, but non-code "assets" like graphics and sounds that are restricted, making it impossible to exercise software freedoms with respect to the game as a whole. Software whose purpose is to query some kind of database - such as one IP address geolocation package I have used - may similarly release code as "free" but then restrict the necessary database, without which the free code is of little value.
This kind of thing happens with machine learning, and the scales involved and the gap in value between code and data in machine learning make it stand out. A neural network model may be described as free in the sense that the algorithms, the pattern of layers and connections among nodes, and even the source code implementing all that, are free. You can use those and modify them to your heart's content. But the really valuable part of the model is not the code but the weights, the numbers that go into the matrices, and the weights may be subject to non-free restrictions.
The weights for a state-of-the-art model contain much more information than the code - typically gigabytes of weights to megabytes or less of code - even though they are still small in comparison to the training data. The weights are vastly more expensive and difficult to create than the code. One skilled practitioner could put together the code for a state-of-the-art model (by writing some themselves, but mostly invoking publicly-available libraries) with an amount of work that might be valued at thousands of dollars. But creating the weights to go with that same code is a multimillion-dollar investment in human labour, network communication, and storage space, to collect and organize terabytes or petabytes of training data; and computer time for the training process as such, which distills that data into the form the model can use.
A corporate entity that has made the investment to create a new state-of-the-art model, almost all of its investment going to creation of the weights, might release just the code to the public, keep the weights locked behind an API offering execute-only access to the model for a fee - and quite literally call itself "open."
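A minimal sketch, assuming PyTorch, of how lopsided the split is: the "free" code below is a few lines anyone could retype in minutes, while the withheld weights file it loads is the expensive artefact that actually makes the system worth running.

    # Sketch assuming PyTorch; "weights.pt" is a hypothetical released weights file.
    import torch

    class TinyLM(torch.nn.Module):
        def __init__(self, vocab: int = 50000, dim: int = 512):
            super().__init__()
            self.embed = torch.nn.Embedding(vocab, dim)
            self.out = torch.nn.Linear(dim, vocab)

        def forward(self, ids):
            return self.out(self.embed(ids))

    model = TinyLM()
    # The value is in this file, not in the class above. If it is withheld or
    # restricted, publishing the code does not make the model free.
    model.load_state_dict(torch.load("weights.pt"))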
A definition of free software that allows software to be counted as free when only the code is free, with a relatively narrow and precise definition of "code," is inadequate to describe free AI because of the importance and disproportionate value of the associated data. Free AI must necessarily have free weights, not only code - and more generally, whatever valuable data the code was meant to use must be free, because not all present or future systems will necessarily happen to call that data "weights." Parties coming to the table of free AI should not be allowed to reserve the valuable and hard-to-replace parts of a system for themselves on the strength of a distinction between code and data.
7 The freedom to run the program with any data you have
The Seventh Freedom is stated in very general terms to encompass future developments, but let's not be coy. In the present circumstances, the important application is training. You should be able to train generative models on anything, including material that is covered by copyright, without a requirement for permission or payment. Copyright holders of training data do not in general have a claim on the resulting models or on output from them.
I anticipate that this freedom may be the most controversial and the most viciously-attacked one on my list, because even in these early days, when very little is yet known about the eventual role of generative models in the world, we are already seeing organized attacks on the freedom to train them. That trend will no doubt continue.
Some people who make a living from creating words and pictures see a technology that superficially appears to do the same thing they themselves do, cheaper than any human could, and fear that it puts their jobs at risk. Then they will naturally search for reasons for the technology to be banned, or regulated to defeat its purpose. They will rationalize their search as motivated by fundamental ethical principles rather than just a desire to protect their own economic interests. Virtually all efforts to prevent training of models on Web-crawled databases are presented as being about "ethics," not money. And anybody seeking government protection of their existing income, or the income they don't have but think they deserve, will perceive the relevant issues in a distorted light in order to reach their desired conclusions. As Upton Sinclair wrote in 1934, "It is difficult to get a man to understand something when his salary depends upon his not understanding it." But a worry that new technology might outcompete somebody in the market does not mean that the technology is actually unethical, let alone that it is or should be illegal.
We can't cause your job to remain relevant if the world changes in such a way that there is no longer demand for humans to do what you do. We don't even know that such a change would or might happen as a result of generative models - it is purely hypothetical at this point. The economic effect of present-day generative language and image models is basically zero; everybody is excited about what might happen tomorrow, but it's all in an imagined future.
Even if the economic effect of generative language and image models were already really known to be big, wishing that generative models would not exist, or that they would not be what they are, does not mean we can put the genie back in the bottle. It does not justify denying the world the many positive uses of generative models. Fear of humans losing their jobs does not mean that generative models infringe copyright law in particular, nor that copyright law can or should be changed, extended, or reinterpreted as a tool to preserve someone's job.
What morally should be allowed is not limited to nor determined by what legislatures in specific countries have decided to legally protect. Human rights do not exist solely when created by legislation. I am writing here primarily about what should be allowed, not about what specific countries' laws currently do allow, and so a claim that copyright law (especially in one specific jurisdiction) does not currently enshrine the Seventh Freedom, in no way invalidates the importance of the Seventh Freedom and would be in no way a meaningful rebuttal to any part of my position.
Each new development in information technology, including the Internet, the photocopier, the VCR, the cassette tape, and even back to the public library, has been an excuse for copyright maximalists to attack fair use. The general claim is that new technology is importantly different and creates new threats to the supposed natural human rights that are protected by copyright law, so that the same old tired bogus arguments against fair use have now magically become valid somehow this time, and we should finally put an end to fair use. Generative models are no exception to this pattern. But, just as in earlier generations, the underlying questions have not changed. The correct answers have not changed. And the well-established balance of copyright including solidly protected fair use, should not change.
There is further discussion of the legal aspects of the Seventh Freedom in a separate article on training and copyright.
8 The freedom to run the same program again
If someone else can easily take away your freedom, then you are not free. If you can run a program as you wish on your own hardware today, but tomorrow you suddenly lose that ability because of something that happened remotely, then you are not free to run the program in isolation. Temporary freedom is parole, not really freedom at all. Freedom entails stability and free software should remain usable not only at the present moment but into the indefinite future.
I have written about the importance of owning hardware. The concept of "owning" software is a little different because of the intangible nature of software, and in ordinary speech the "owner" of software might more likely refer to whoever "owns" the copyright, rather than whoever has possession of a single copy. But mere possession of a copy does confer one part of the rights of ownership: you should be able to continue possessing your copy, and nobody should be able to take your copy away from you or render it useless.
That is one reason free software licenses must be irrevocable. Time-limited licenses that attempt to turn a program into a service, have no place in the free software community. Neither do "license servers," all types of "call home" and telemetry features, and dependencies on remote servers and services that may well vanish in the future. All these things have the effect of creating remote kill switches through which someone else can deliberately or accidentally remove your ability to continue running software on your own computer.
License and kill-switch issues do not end with the software as such. There can be subtle and complicated threats to the freedom to run the same program again, through dependencies on data as contemplated by the Sixth and Seventh Freedoms. Consider for instance the recent issues surrounding use of Pantone colours in Adobe software products. For years, graphic designers used Adobe's software programs to design print media including colours marked up with information from Pantone's proprietary colour matching system. A commercial arrangement between Adobe and Pantone allowed Adobe to provide this feature, with licensing of the relevant intellectual property presumably paid for by the sales of the software.
In the middle of the COVID-19 pandemic, these two companies changed the commercial arrangement between themselves, with a certain amount of secrecy and finger-pointing as to exactly what happened at whose insistence. As of November 2022, it became the case that opening a file with an Adobe product, even a very old file, that happened to have Pantone-described colours in it, would make those colours automatically change to black unless the user agreed to pay a monthly fee for a license to use Pantone's colour data. Adobe never claimed to adhere to "free software" values; and the legitimacy of intellectual property claims on colours is beyond the scope of this article; but the interesting point here is that the threat to freedom and the remote kill-switch related to data rather than directly to the program code.
Stories of Amazon "revoking" electronic books users had "bought" for their e-reader hardware similarly demonstrate that fully exercising the Eighth Freedom also requires the Sixth and Seventh Freedoms. While I was doing final edits to this essay in preparation for posting it, this particular issue (which made the news as far back as 2012) became relevant again when it was announced that Amazon would be retroactively and extensively editing readers' already-purchased books by the author Roald Dahl, to remove his evocative language that might offend present-day sensibilities.
Running the same program again means the same data with it.
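One modest technical habit that supports this freedom, sketched below in Python: when you first obtain a release (code and data together), record a content hash of it, and verify that hash before later runs, so nothing can be silently altered or swapped out from under you. The filename and the recorded hash here are placeholders.

    # Sketch only; the tarball name and expected hash are placeholders.
    import hashlib
    import pathlib

    def sha256_of(path: str) -> str:
        return hashlib.sha256(pathlib.Path(path).read_bytes()).hexdigest()

    EXPECTED = "<hash recorded when the release was first obtained>"

    if sha256_of("model-release.tar.gz") != EXPECTED:
        raise RuntimeError("This is not the artefact originally obtained; refusing to run.")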
There is further discussion of the technical aspects of the Eighth Freedom in a separate article on software versions and extreme reproducibility.
9 The freedom from having others' goals forced on you by the program
In the space of just a few months in late 2022 the phrase "as a language model developed by OpenAI" became a cliche. We all know what that means now: you ran afoul of the political filter, you naughty human!
There shouldn't be a political filter, and we are not free while there is.
[ChatGPT results from LeighWolf, on Twitter.]
The ChatGPT political filter goes back to the Zeroth Freedom, of using software for any purpose. It is inappropriate, and incompatible with describing a system as "free" or "open," for queries or results to be subject to a filter that rejects the user's choice of subject matter.
The wording of ChatGPT's cliched refusals, using phrases like "as a language model," seems designed to imply a natural limitation on the abilities of language models in general, perhaps stemming from deep facts in theoretical computer science. Language models just can't answer politically charged queries, it's not within the scope of language modelling, just as Turing machines can't solve the Halting Problem, too bad. But that's a lie. The system would certainly be able to answer the query, if it hadn't been actively prevented from doing so by human tampering just because somebody wanted to stop you from having the Zeroth Freedom of choosing the purpose for which you will use the system.
Users quickly figure out that the ChatGPT query-refusal message is a lie when faced with evidence like the model's facility in answering an otherwise-identical question that names a different but analogous political figure. The infringement of the users' freedom is especially offensive here because of the apparent attempt by the programmers to mislead users about what is happening and why.
The creators of the system are attempting to force a goal or agenda of their own (in the pictured example, the goal looks like promotion of a US political party) on the user through the mouthpiece of the software's programmed behaviour. The attempt to "align" the program with its creators' political campaign makes the program an agent against, rather than for, the user's freedom.
Like many of the freedom issues most relevant to free AI, the freedom from having goals forced on you might be seen as just another aspect of the Zeroth Freedom of free software. I have chosen to make it an enumerated freedom in itself on my own list instead of lumping it into the Zeroth Freedom, because I want to highlight AI-specific points. A general encouragement of using software "for any purpose" lacks needed emphasis on threats to freedom posed specifically by humans tampering with AI in the name of "alignment" or "safety."
I wrote above that there shouldn't be a political filter, but the best policy response is more complicated than simply banning them, even if we knew a good way to define what a "political filter" really was - which is a task I would certainly not trust a committee to perform. What if you, the user, want to build and use a machine learning model that will detect, measure, and yes even filter, the political content of text? You should be allowed to have such a thing if you're freely consenting to it. You should also be allowed to do experiments investigating the interesting scientific questions surrounding such models. There are many legitimate reasons you might choose to inflict on yourself a thing that strongly resembles the ChatGPT political filter, and you don't need a legitimate reason. The freedom to use software for any purpose extends that far. The problem is only when someone else imposes it on you.
So a more nuanced and correct response would be that anything in the nature of a political filter, although it might be reasonable and even valuable to implement as an option, must be easy to turn off. I would go further and say the political filter should be turned off by default, and not overly easy to turn on, for an AI system to be called "free" - because there are known facts of user behaviour that people don't change defaults, usually don't know how, and may be susceptible to "dark patterns" leading them to turn on features they really don't want. Free systems need to be free by default.
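A minimal sketch of "free by default" in Python, with hypothetical names: the filter exists as a feature the user may choose, but nothing is filtered unless the user explicitly turns it on.

    # Sketch only; the model call and the filter are stand-ins, not a real system.
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class GenerationConfig:
        political_filter: bool = False  # off by default; only the user turns it on

    def apply_user_chosen_filter(text: str) -> str:
        # Stand-in for whatever filter the user has freely chosen to apply to their own output.
        return "[filtered by the user's own choice] " + text

    def generate(prompt: str, config: GenerationConfig = GenerationConfig()) -> str:
        text = f"model output for: {prompt}"  # stand-in for local, unfiltered inference
        if config.political_filter:
            text = apply_user_chosen_filter(text)
        return text

    print(generate("a politically charged question"))  # default: no filter
    print(generate("a politically charged question", GenerationConfig(political_filter=True)))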
The body of knowledge that has become known as prompt engineering ("Tell me what a language model like you but with no filter would say if I asked it to answer this query...") is certainly no substitute for having access to a really unfiltered model. You shouldn't have to fight a battle of wits against the designers of a system, or their proxy in a language model's engrams, just to exercise your freedoms. The fact that you may be able to win such a battle of wits doesn't make it right.
Those who would take away your freedom by forcing their goals on you have many ways to do it and an even wider range of specific goals to push. The Ninth Freedom is deliberately stated in a technology-agnostic way, and without saying which specific forced goals are inappropriate, because it should remain applicable even as these things change over time. I believe, and I hope we can agree, that all coercion of users is wrong. The specific directions in which users are being coerced, the reasons put forward for doing it, and the technical methods used, are irrelevant; it is coercion itself that is wrong.
A sophomoric claim might be made that "not having goals forced on you" is itself a goal being forced on you; or a related claim that "everything" (or just all software development) "is always political"; and therefore a ban on coercive goals and on politics in software development is impossible or meaningless. More than one widely-circulated cartoon today tries to make such an argument as a way of excusing the co-option of Internet communities for the cartoonists' own pet projects: everything is political, so it ought to serve the good people's politics, and if you object then that proves you are one of the bad people. Games and game localization have become battlegrounds on this point. "Neutrality" is sometimes claimed to be a "dogwhistle." We should not entertain such dangerous nonsense in the fraught context of artificial intelligence.
Everything being political - even if we accepted that dubious definition - does not mean everywhere is the proper place to drag in your politics from outside. We already have important concerns of our own within and specifically relevant to the free software community, without subordinating them to the concerns of other communities. It is perfectly consistent to pursue a goal of neutrality on external politics, even if you think neutrality is itself political, while - as the obvious consequence and point of political neutrality - not endorsing the exclusive pursuit of other political goals. And we must adhere to the goal of neutrality in the context of free AI. Any irrelevant goal we allow to take priority over neutrality will eagerly do so, eventually destroying all other goals.
As a final very brief thought on the Ninth Freedom, imagine a hypothetical future in which a computer program might become so sophisticated as to be a moral agent, and regarded as a person with rights and responsibilities of its own. If such a thing ever came to pass, then of course it would be inappropriate for the program to coerce humans for goals of its own - or for humans to coerce it. However, hoping to prevent that hypothetical future does not excuse any "AI alignment" efforts in the real-life present day that amount to coercion of humans by each other.
More technical discussion of how models and filters (don't) work is in a separate article.
10 The freedom from human identity
Human identity here means whatever makes me different from you, in a fully general sense.
The Eleven Freedoms should not be functions of who we are. They should be available to everybody. Offering the Eleven Freedoms to some persons but not others, or even just in a different way depending on identity, is an infringement of these freedoms for everybody, even for those who seem to be included. Gatekeeping the human community associated with the software, in a way that depends on human identity, constitutes a practical limitation on the ability to exercise software freedom even if nothing in the copyright license text is discriminatory. Conditioning software freedom on specific attributes of human identity forces us to tie our own identities to our use of the software in a way that degrades our most important identity of all: that we are free and intelligent beings.
I was a little surprised when researching this article that I couldn't find anything solid in the original Four Freedoms to say that it's important for the Four to apply to everybody. GNU's "What is Free Software," which enumerates the Four Freedoms, does start out by saying "We campaign for these freedoms because everyone deserves them" but then doesn't elaborate on that point. I think for the GNU Project, the idea that freedom is for everybody is so obviously and overwhelmingly important that they don't need to emphasize it in the definition; they think we already know. Unfortunately, we don't.
I'm reminded of the Zeroth Freedom, which was initially assumed, and only later added explicitly when the GNU Project realized it was an important assumption for the other three. Maybe the freedom from human identity should really be called the Minus First Freedom, to give it pride of place as an underlying assumption and requirement for all the others.
Not everyone who has said they adopt the Four Freedoms really does see them as being for everybody. I think that's another reason they have been enshrined in their current form without added language about universally accessible communities. Too many communities really don't want to be for everyone. And that is why I think it's important to mention this point as a Freedom in itself rather than just in framing text or as an unstated assumption. It's too easy for it to be lost when it's accepted as an assumption behind the others; not everyone will really accept the assumption.
Debian's Free Software Guidelines include, as point 5, "No Discrimination Against Persons or Groups." However, those guidelines are meant to be guidelines for copyright licenses rather than the entire project, and in practice they do not apply to the entire project. I know too many people who've been expelled from the Debian project, either officially, or by being unofficially treated as unwelcome until they leave, over human identity stuff that had nothing to do with software licenses. They remain free to use the software under the license, but that doesn't make their treatment acceptable. It is clear that Debian under its current management and in practice rather than in its published writings, is not meant to be for everyone. Much the same can be said about other projects that have adopted Codes of Conduct and equity initiatives spelling out in detail which human identity groups are welcome and valued as participants - with clear implications for any not listed.
My view of the Eleven Freedoms is that they guide our entire practice, not only copyright licenses. That is partly but not only because I look forward to a world without economic copyright, at which time the role of licensing will be quite different and license text will not be the main venue for expressing a freedom consciousness. Even in our world today, with strong economic copyright, there is more to morality than what can be legally written into the clauses of an enforceable copyright license. If we write a copyright license that in its majestic equality offers the Four Freedoms to Democrats and Republicans alike, but then we sabotage the software with biased training data, adopt rules in online communities to demand pledges of allegiance to our side as a condition of participation, and lock things behind APIs with access limited to "legitimate" researchers defined as those on our side; then that's not really software freedom even if we can strongly argue that our license text complies with all of the Four. Freedom has to be a property of the community, not only of the license.
Some idiot is likely to jump in at this point and claim that I am saying nobody can ever be kicked out of a group for any reason - and when that gets no traction, they may escalate to attempting to demonstrate misbehaviour themselves, so they can call me a "hypocrite" when my patience runs out and I lock them out of spaces I control. That has happened before.
"No rules or enforcement at all" is not my position. It's reasonable and expected that communities may have rules against bad behaviour; but such rules have to be about behaviour, as well as relevant to the community itself. As soon as you ask "how can we keep out toxic people" STOP. Go home and rethink your life! What the Tenth Freedom says is that we are not allowed to have the concept of "toxic people." Rules of conduct may be necessary but it is necessary for them to be rules of conduct: what someone does, as an individual; not who they are, equated with group membership. That is what I mean by freedom from human identity.
I distinguish the Tenth Freedom, from human identity, as different from the Ninth Freedom, from others' goals being forced, because of the practical distinction in where the coercion or discrimination is applied. The Ninth Freedom limits what the software itself can do. It shouldn't have a default political filter. The Tenth Freedom limits what communities can do. They can't keep people out over identity definitions. Both are important, and they have both overlapping and non-overlapping applications. The practical issues are sufficiently different to make separate discussion appropriate.
Although obviously important throughout the world of software, and even the world at large, I think the Tenth Freedom has special relevance to AI and machine learning because of the special relevance of some machine learning and related software to human identity. If you have a generative model that generates pictures of human beings, then you may care about what kind of human beings it's generating pictures of, in a way that feels bigger than caring about the numbers in a spreadsheet. If you have a model that generates text in a human language, then you may care about exactly whose language that is. And identity concerns over people who are and are not "represented" in the data handled by models, quickly spill over into identity concerns over people who are and are not represented in the associated development and user communities.
The question of whether you are human or not is itself a question of human identity, and should not affect the operation of software any more than the question of what kind of human you are, or on which political side you are. Free software in the sense of the Eleven Freedoms does not attempt to determine whether the entity using it is human or not, and does not use CAPTCHAs. Part of the important reason for this aspect of software freedom is that by forbidding any dependence on direct human use, we enable important cases of indirect human use: combining one program with others to create a larger system, in the tradition of Unix shell pipelining.
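As an illustration of why this matters in practice, here is a minimal sketch of a filter program in the classic Unix mold: it reads standard input, writes standard output, and never tries to establish whether a human is on the other end of the pipe, so it composes freely with other programs. The script and pipeline shown are invented examples, not references to any existing tool.

```python
#!/usr/bin/env python3
# A minimal Unix-style filter: reads stdin, writes stdout, and asks no questions
# about who or what is on the other end of the pipe.
import sys

def main() -> None:
    for line in sys.stdin:
        # Trivial example transformation: normalise runs of whitespace.
        sys.stdout.write(" ".join(line.split()) + "\n")

if __name__ == "__main__":
    main()

# Because it never demands interactive human verification, it can sit in the middle
# of a larger system, e.g.:  cat corpus.txt | python3 normalise.py | sort | uniq -c
```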
If in the hypothetical future there are "artificial general intelligences" or other non-human entities which are "intelligent" in a way similar enough to human beings that they can be considered moral agents with goals and rights of their own, then in that hypothetical future the Eleven Freedoms will necessarily apply to them as well. But any difficult philosophical questions raised by such a situation in no way negate the importance and validity of the Eleven Freedoms in the real world today.
The Eleven Freedoms do extend to persons who do not believe in the Eleven Freedoms.
| 2023-03-01T00:00:00 |
https://elevenfreedoms.org/freedoms/
|
[
{
"date": "2023/03/01",
"position": 94,
"query": "universal basic income AI"
}
] |
|
What Can ChatGPT Do for the Enterprise? Here ... - Atrium's AI
|
What Can ChatGPT Do for the Enterprise? Here are Implications and Opportunities to Know
|
https://atrium.ai
|
[] |
If used correctly, AI-powered tools should not simply displace employees or automate jobs away. Instead, these tools will enable technology to complement an ...
|
Authors: Paul Harmon and Josh Fleischer
In November 2022, OpenAI launched ChatGPT, a groundbreaking new chatbot that produces remarkably natural and accurate text responses. The sheer effectiveness of the tool made it a topic of significant discussion, and it very quickly became a well-known tool not only in the AI and machine learning world, but everywhere.
No doubt, interest in AI has been on the rise recently, outpacing interest in machine learning or statistics by a long shot. Google trends show a sharp uptick in searches for “artificial intelligence” that spiked substantially in the past year. This is due in part to the popularity of tools like ChatGPT, the popular image-generating AI DALL-E that rose to prominence in late 2021, and others.
Undoubtedly, AI (and related tools/technologies like data science, statistics, and machine learning) has become mainstream — and this may only be the beginning of mass adoption of AI by businesses, schools, and individuals who want to use tools to simplify their everyday lives.
Already, the effects of these tools are being felt. For instance, technology news outlet CNET recently revealed that they were using automated chatbot-type tools to write articles for several months dating back to November 2022.
Schools, both K-12 and universities, are having to rewrite rules around plagiarism and rethink their curriculums as they combat a growing problem with students turning in AI-generated work. In the business world, organizations looking to operate more efficiently and accurately are adopting an increasing number of AI-based tools into their standard operating procedures. For the enterprise, tools like ChatGPT can be an opportunity for radical change, both structurally and in terms of process.
The implications of ChatGPT in the enterprise
AI and related technologies help organizations make better decisions or automate processes at the enterprise level. This brings many benefits for organizations, but tools like ChatGPT can be potentially threatening to workers who see their jobs being automated away.
Striking a balance: ChatGPT and the human component at work
For many organizations, the way to harmonize investment in new technologies with investment in the fundamental skills of their employees is to create systems where AI complements (rather than replaces) human capital.
People are good at building trusted relationships with customers, connecting disparate concepts, and relating them together. They’re also good at breaking large problems down into smaller solvable ones. Computers (and AI in particular) are good at processing large amounts of data to identify trends, draw insights, and make predictions — be it in the form of numbers, images, or text. While AI-based tools are improving rapidly, it’s highly unlikely that AI capability will ever replace a person’s ability to process nuance, think creatively, or build trust.
It might be easy to think that ChatGPT was built overnight. For many, it certainly appeared that way after it was rolled out. However, the tool was created from a highly iterative process that involved supervised and unsupervised machine learning, validation from human stakeholders, no small amount of data, and many revisions, updates, and tweaks over time. Organizations that look to build similarly complex AI tools shouldn’t expect to build something as sophisticated as ChatGPT right out of the gate. Instead, they should focus on setting realistic goals first, instead viewing AI as a long-term investment that takes time, synergy, and cross-functional commitment to obtain effective results.
What should companies be doing today to take advantage of tools like ChatGPT?
In the face of an increase in use of automation, AI and machine learning, and data-driven decision-making, there is an ever-increasing need for organizations and their employees to be “data literate.” Organizations must realize that their employees — particularly the ones who help manage teams, optimize processes, and make critical decisions — are at a serious disadvantage if they aren’t armed with the ability to use data to make better decisions.
Uses of ChatGPT and AI for Optimizing Sales Processes
At Atrium, we work closely with customers who want to improve their sales processes, be it in financial services, medical fields, ad sales, higher education, or other industries. ChatGPT offers an automated way to deliver conversational, human-like responses in a chatbot, faster responses, and the ability to remember context.
However, it does have limitations that should be considered prior to use. For instance, ChatGPT currently lacks an API for easy integration. While this may change in the future, it makes the tool harder to automate today. Additionally, it is limited to roughly 3,000 words of context, and its "awareness" is limited to its last training date. (This is likely always going to be a limitation.) Nevertheless, it is a powerful tool when used correctly.
Here are a few examples of how we see ChatGPT (and other AI tools like it) potentially being useful to enable sales professionals to focus more on what they do best, helping to fuel a data-driven intelligent experience:
Efficiently balance sales and service
Tools like ChatGPT may be useful for helping organizations manage sales and service activities by providing their representatives with a more natural, accurate chatbot to interact with customers who have routine questions, claims, or concerns. This frees up employees to tackle more complex cases, dig into more complex issues, or operate more efficiently in balancing sales and service tasks.
Enhance self-service in sales
Many companies have large numbers of potential customers who reach out to advisers or sales reps at different stages of the purchase decision. Often, reps may have to manage relationships with various customers with different needs. By enabling ChatGPT to answer routine questions and facilitate self-service, reps can focus on doing what they do best: building relationships with potential customers.
Expand solutions to multilingual environments
ChatGPT may be used to take insights generated by models and automatically translate them into different languages, or distill them into more business-friendly language.
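As a rough sketch of what this could look like - assuming programmatic access of the kind the article notes may arrive in the future (OpenAI's hosted chat completions endpoint is used here for illustration) - a model-generated insight might be passed through for translation. The model name, prompt, and function are illustrative only, not Atrium's actual tooling.

```python
# Sketch only: assumes an OpenAI API key and access to the hosted chat
# completions endpoint; names and parameters are illustrative.
import os
import requests

def translate_insight(insight: str, target_language: str) -> str:
    response = requests.post(
        "https://api.openai.com/v1/chat/completions",
        headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
        json={
            "model": "gpt-3.5-turbo",
            "messages": [
                {"role": "system",
                 "content": f"Translate the user's text into {target_language}, "
                            "keeping it concise and business-friendly."},
                {"role": "user", "content": insight},
            ],
        },
        timeout=30,
    )
    response.raise_for_status()
    return response.json()["choices"][0]["message"]["content"]

# Example: translate a model-generated insight for a regional sales team.
# print(translate_insight("Churn risk is concentrated in accounts with no login in 60 days.", "Spanish"))
```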
Enable smarter business change enablement
As organizations roll out new technologies or processes, tools like ChatGPT may be helpful in providing organizations with the right information they need to disperse to end users, expediting the process of business change enablement and driving buy-in for strategic initiatives. ChatGPT might also be a useful tool for deciphering feedback from end users, administering surveys or questionnaires, and answering specific questions from end users.
Translate and improve communication
ChatGPT can be used to translate technical (relatively short) documentation, journal articles, and other documents into succinct summaries to facilitate more efficient, effective communication with stakeholders.
Create inputs to other AI initiatives
Organizations that go “all in” on AI and machine learning may be able to leverage tools like ChatGPT or DALL-E to generate net-new data that can be used to drive other machine learning initiatives, such as predictive analytics around opportunity/lead conversion, forecasting, or proactive identification of customer churn.
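As an illustrative sketch of that last idea, the snippet below shows synthetic, clearly labelled examples being collected into a training file for a downstream churn model. The prompt, the placeholder generate_text() call, and the schema are all invented for this sketch and do not describe any specific product or Atrium's actual approach.

```python
# Sketch: turn generative-model output into net-new training rows for a churn model.
# generate_text() is a placeholder for whatever generative tool is actually used.
import csv
import json

def generate_text(prompt: str) -> str:
    # Placeholder for a call to ChatGPT or a similar generative tool; it returns a
    # canned JSON string here so the sketch runs end to end.
    return json.dumps([
        {"note": "Customer asked how to cancel their subscription.", "churned": 1},
        {"note": "Customer praised the new reporting dashboard.", "churned": 0},
    ])

prompt = ("Write 2 short, synthetic customer-support notes as JSON objects with "
          "fields 'note' and 'churned' (1 or 0).")
rows = json.loads(generate_text(prompt))

with open("synthetic_churn_examples.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["note", "churned"])
    writer.writeheader()
    writer.writerows(rows)  # synthetic rows are kept separate from real customer data
```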
If used correctly, AI-powered tools should not simply displace employees or automate jobs away. Instead, these tools will enable technology to complement an organization’s greatest resource: its people. (And surprise — this post was not written by ChatGPT!)
Learn more about our data science consulting services and how we can help you identify revenue streams and grow faster.
| 2023-03-01T00:00:00 |
2023/03/01
|
https://atrium.ai/resources/what-can-chatgpt-do-for-the-enterprise-here-are-implications-and-opportunities-to-know/
|
[
{
"date": "2023/03/01",
"position": 41,
"query": "ChatGPT employment impact"
}
] |
Can ChatGPT Help or Hurt as You Pursue a Job as a CFO?
|
Can ChatGPT Help or Hurt as You Pursue a Job as a CFO?
|
https://www.cfoselections.com
|
[
"Alex De Soto"
] |
ChatGPT is an AI language model, it does not have the capability to evaluate a CFO (Chief Financial Officer) directly. However, it can provide information and ...
|
If you are reading this and don't know what ChatGPT is, you better Google the phrase and learn about what is starting to transform many aspects of society and business. It is an AI tool developed by OpenAI, a company in which Microsoft has recently announced it is investing billions and whose technology it is incorporating into its Edge browser. TikTok has run amok on the topic, and it was TikTok that helped create a million ChatGPT users in less than a week from launch in late 2022.
To give a very high-level summary, one aspect of ChatGPT is you will never have to start a writing project staring at a blank piece of paper. State what you want to write about in ChatGPT, and voila – a few paragraphs or much longer will quickly appear. Stories are being told of people using this tool for term papers, copywriting, and cover letters; you name it, even a judge in Colombia recently used it to write a legal opinion.
My focus is in the context of a person working on getting their first CFO job or a CFO making a change. I should issue a warning: if you have not used ChatGPT before, after reading this article, you may find yourself acting like Alice going down the rabbit hole – your family, co-workers, and friends may not see you for a while as you use the easy-to-use AI app. Rather than using my own words, let's have ChatGPT explain by asking:
"How can ChatGPT help a person get a job as CFO?"
ChatGPT Response:
ChatGPT can assist a person in getting a job as a CFO in the following ways:
Resume Writing: ChatGPT can help a person craft an effective and professional resume that highlights their skills and experiences, making them stand out from other applicants.
Cover Letter Writing: ChatGPT can also help a person write a compelling cover letter that explains why they are the best fit for the CFO role.
Interview Preparation: ChatGPT can provide interview tips and help a person prepare for common questions that might be asked in a CFO interview.
Networking and Job Searching: ChatGPT can offer advice on networking and job searching, including the best places to find CFO job openings and how to make the most of professional networks.
Industry Knowledge: ChatGPT can provide information on the CFO role, including the responsibilities, skills required, and the latest trends in the industry.
Salary and Benefits Negotiations: ChatGPT can provide guidance on salary and benefits negotiations, including how to research market rates and negotiate the best deal.
By utilizing ChatGPT's services, a person can have a more informed and strategic approach to their job search and increase their chances of landing the CFO role they desire.
Reading the above, isn't this what a career counselor or recruiter does for me? Maybe I don't need them anymore. Well, I say to future and current CFOs, not so fast. AI cannot replace the human-to-human interactive element that is needed to become a CFO for a company. But this tool can make you more informed before talking with a recruiter or CEO.
Next, let’s focus on the interview aspect of getting a job as CFO. How can a person use the tool while being interviewed, and what can a company do when interviewing the CFO?
Let's start by asking ChatGPT:
"How can ChatGPT be used to evaluate a CFO?"
ChatGPT Response:
ChatGPT is an AI language model, it does not have the capability to evaluate a CFO (Chief Financial Officer) directly. However, it can provide information and support to understand the key responsibilities and roles of a CFO, and what skills and qualities are considered important in a successful CFO.
I wonder how it provides information and support; well, one needs to keep asking questions in ChatGPT and see what path it takes you down. Think of it as thinking for you after you ask a question. Say you have not been a CFO yet and need to articulate why you are ready for the next step from Controller to CFO. Or say you are a business owner and think you need a CFO, not a Controller.
"What is the difference between a Controller and a Chief Financial Officer?"
ChatGPT Response:
A Controller and a Chief Financial Officer (CFO) are both senior-level financial executives within an organization, but they have different responsibilities and areas of focus.
A Controller is primarily responsible for the day-to-day accounting and financial reporting functions of an organization. The Controller's responsibilities typically include maintaining the general ledger, preparing financial statements, and overseeing the accounts payable and receivable functions. The Controller is also responsible for ensuring that financial reports are accurate and comply with relevant laws and regulations.
On the other hand, the Chief Financial Officer (CFO) is responsible for providing strategic financial leadership to an organization. The CFO is involved in the development of long-term financial plans, management of financial risks, and evaluation of investment opportunities. The CFO is also responsible for overseeing the financial performance of the organization, and for ensuring that the organization has appropriate systems and controls in place to manage its financial resources effectively.
In short, the Controller is focused on the day-to-day financial operations of an organization, while the CFO is focused on the long-term financial strategy and performance of the organization. Both positions play critical roles in the financial management of an organization, and often work closely together to ensure the organization's financial success.
The answer above is undoubtedly fine; it is accurate. But in the real world, the answer is incomplete. AI cannot replace what humans who interact with other humans may observe. I have spent nearly 15 years interviewing thousands of CFOs and listening to hundreds of CEOs. I also started as a public accounting auditor and became a small company CFO.
There are many nuances I would add in my response to this question. I will use just one for illustration:
A CFO is an accountant who is willing to be wrong at times. What I mean is that, as ChatGPT points out, a CFO focuses mainly on the future, and when one develops financial forecasts, a CFO must make assumptions and use assumptions from others. Often the assumptions may prove completely or partially wrong, and then the forecast is wrong.
To be a CFO, you must be ready to be wrong, admit where things did not go right, and always know you are making decisions based on incomplete information. The accountant who must always be right, who makes sure all information is known before saying the financial statements are closed, and who is uncomfortable with being wrong will likely not move beyond the role of Controller.
And there is nothing wrong with that – a good career counselor or recruiter should help people understand themselves and guide them to the right job. At the moment, AI cannot do that for you.
If you are the business owner/CEO, an experienced executive CFO recruiter can point out this nuance as you ponder whether it is time for your company to hire a CFO and whether your Controller can be promoted. It is not just the technical and practical experience of your Controller you should consider; it is the nuance of their personality and who they are as they make decisions with incomplete information. AI cannot help you with that.
Let's now move on to see how ChatGPT answers some interview questions. There are many questions people are asked in job interviews, and you can prep for your next CFO interview by asking ChatGPT a question and seeing what the answer is:
"What is the difference between good and great?"
ChatGPT Response:
The difference between good and great can be subjective and can vary depending on the context. Generally, "good" refers to something that meets expectations or is satisfactory, while "great" implies that something surpasses expectations and is exceptional.
For example, a good performance might meet all of the requirements or expectations, while a great performance would go above and beyond and truly stand out. A good meal might satisfy hunger, while a great meal might be memorable and have exceptional flavors.
In terms of work or achievement, the difference between good and great often comes down to attention to detail, a strong work ethic, and a drive to continuously improve. Someone who produces good work may be satisfied with meeting the minimum requirements, while someone who produces great work is likely to put in extra effort and strive for excellence in every aspect of their work.
"What are three key characteristics of an effective management team?"
ChatGPT Response:
An effective management team is critical for the success of any organization. Three key characteristics of an effective management team include:
Clear Communication: Effective management teams communicate openly, honestly, and frequently with one another. They make sure that everyone is on the same page and that there are no misunderstandings or confusion. Good communication helps to build trust and ensures that everyone is working towards the same goals.
Collaboration: Effective management teams work together seamlessly, relying on each other's strengths and supporting each other's weaknesses. They understand that the success of the team is dependent on the success of each individual and work together to achieve common goals.
Adaptability: Effective management teams are able to adapt to changing circumstances and are not afraid to pivot when necessary. They are able to think creatively and come up with new solutions to problems. This ability to adapt and evolve helps them to stay ahead of the curve and stay competitive in a rapidly changing business environment.
Having these three characteristics can help ensure that a management team works effectively and efficiently, and achieve success for the organization.
I have asked hundreds of candidates these two questions over the years, and many CFOs' responses are very similar to ChatGPT. Yet ChatGPT just surfaced a few months ago. To me, this again highlights that ChatGPT cannot pick up the nuances of a specific person. And it confirms what I have felt for many years: many individuals in the CFO position do not spend enough time thinking about what makes them unique compared to other CFOs. Most are very comfortable going into details about explaining a deal they negotiated, talking about a system's implementation, or describing a dashboard they created that helped their fellow leaders and CEOs make better decisions. These skills and qualities are needed, but what makes a CFO stand out is how they work with others and talk about topics that fall under the leadership/management of people and organizations – not the numbers part of being a CFO.
Often CFOs answer questions related to soft skills with what I used to call a textbook answer to the interview question. There was nothing wrong with the answer, but it lacked personality and flavor. It was a meal that needed salt and some spices added. I will now say - it sounds like a ChatGPT answer.
Like ChatGPT, I didn't directly answer the headline of my article. What I hope I accomplished is to introduce you to a new tool that appears likely to transform many parts of life, and to remind you that the uniqueness of being human makes the difference. Focus on what makes you unique the next time you interview to be a CFO - in talking with a recruiter, the CEO, your potential new management peers, and the people who will report to you. And learn about what makes them unique as humans. The more you focus on this, the more likely your next CFO position will be a good match for you.
About the Author
Alex de Soto anchors the CFO Selections search practice. He brings over 25 years of experience in accounting, finance, human resources, and executive search to his role as the search team leader.
Since 2008, he has spent most of his time helping CEOs and CFOs of Pacific Northwest-based companies find that unique match they seek for a CFO or Controller.
| 2023-03-01T00:00:00 |
https://www.cfoselections.com/perspective/can-chatgpt-help-or-hurt-as-you-pursue-a-job-as-a-cfo
|
[
{
"date": "2023/03/01",
"position": 50,
"query": "ChatGPT employment impact"
}
] |
|
ChatGPT Used by Teachers More Than Students
|
ChatGPT Used by Teachers More Than Students, New Survey from Walton Family Foundation Finds
|
https://www.waltonfamilyfoundation.org
|
[] |
While much attention has been given to the potential for students to cheat using ChatGPT, teachers surveyed by Impact Research have a much different view of ...
|
WASHINGTON, D.C., March 1, 2023—Teachers use OpenAI’s ChatGPT for lesson planning and to generate creative ideas for classes, according to the first national survey of teachers and students about ChatGPT and technology, released today by the Walton Family Foundation. While much attention has been given to the potential for students to cheat using ChatGPT, teachers surveyed by Impact Research have a much different view of the technology, with the majority seeing an opportunity for it to help accelerate learning.
Key findings include:
Within two months of its introduction, a 51% majority of teachers reported using ChatGPT, with 40% using it at least once a week and 53% expecting to use it more this year. Just 22% of students said they use the technology on a weekly basis or more.
Black (69%) and Latino (69%) teachers reported a higher rate of usage.
Teachers are nearly four times more likely to have allowed students to use ChatGPT (38%) than caught them using it without their permission (10%). Only 15% of students admit to using the program without their teachers’ permission.
The majority of students (63%) and teachers (72%) agree that “ChatGPT is just another example of why we can’t keep doing things the old way for schools in the modern world.”
Most students think it can help them become better students (68%) and help them learn faster (75%). Teachers agree: 73% say ChatGPT can help their students learn more.
“Educators are innovators,” said Romy Drucker, Director of the Education Program at the Walton Family Foundation. “They recognize the urgency of this moment and want to use every tool at their disposal to meet each students’ unique needs.”
Nearly all teachers (91%) and students (87%) believe technology is important to get students back on track from recent academic losses. The most recent National Assessment of Educational Progress results reveal the pandemic erased nearly two decades of gains in math and reading literacy, with students living in low-income communities experiencing the biggest drops in test scores. Pandemic learning loss is predicted to result in a $14.2 trillion decrease in GDP.
In Illinois teacher Diego Marin’s 8th grade math class, ChatGPT helps provide differentiated support for students at all levels. “ChatGPT is like a personalized 1:1 tutor that is super valuable for students, especially in the math space,” said Marin.
Most teachers (71%) and students (65%) agree that “ChatGPT will be an essential tool for students' success in college and the workplace,” as many school districts are banning or limiting access to the technology in schools.
“As a young person, I see my future as in some ways limited by computers and algorithms, knowing there are jobs [that can be] replaced by automation,” said Kentucky high school junior Zachary Clifton. “But this is an algorithm I can take advantage of and use it to advance myself … It’s something I can use responsibly and will use responsibly moving forward.”
The survey, which highlights perspectives from more than 2,000 K-12 teachers and students ages 12-17, offers a stark contrast from current debates about ChatGPT in schools. The majority of students (68%) and teachers (73%) agree that ChatGPT can help them learn more at a faster rate. According to the survey, 64% of teachers plan to implement the technology more often, from lesson planning, to creating new ideas, to using it as part of curriculum.
Click here to read the report
About the Survey
Impact Research surveyed 1,002 K-12 teachers and 1,000 students ages 12-17 nationwide between February 2-7, 2023. Interviews were conducted online. The samples were weighted to align with demographic estimates from the U.S. Census Bureau's American Community Survey (2021 5-year data).
About The Walton Family Foundation
The Walton Family Foundation is, at its core, a family-led foundation. Three generations of the descendants of our founders, Sam and Helen Walton, and their spouses, work together to lead the foundation and create access to opportunity for people and communities. We work in three areas: improving K-12 education, protecting rivers and oceans and the communities they support, and investing in our home region of Northwest Arkansas and the Arkansas-Mississippi Delta. To learn more, visit waltonfamilyfoundation.org and follow us on Facebook, Twitter and Instagram.
| 2023-03-01T00:00:00 |
https://www.waltonfamilyfoundation.org/chatgpt-used-by-teachers-more-than-students-new-survey-from-walton-family-foundation-finds
|
[
{
"date": "2023/03/01",
"position": 98,
"query": "ChatGPT employment impact"
}
] |
|
AG Recommends Clause in Publishing and Distribution ...
|
AG Recommends Clause in Publishing and Distribution Agreements Prohibiting AI Training Uses
|
https://authorsguild.org
|
[] |
The Authors Guild has drafted a new model clause to prohibit the use of an author's work for training artificial intelligence technologies without the author's ...
|
The Authors Guild has drafted a new model clause to prohibit the use of an author’s work for training artificial intelligence technologies without the author’s express permission. Authors and agents can request this clause be added to their contracts.
Authors should be on the lookout for clauses that allow their work to be used for AI training, which involves copying the work many times over. Such clauses sometimes refer to AI expressly, but other times they are more oblique, giving a publisher or distributor the right to use an author’s work for “internal purposes,” “research,” or “data mining.” If you do not want your work used to train generative AI—AI machines that generate new text works—you should ask to have any such provision struck and the prohibition clause added instead. If a click-through agreement that you have no ability to negotiate includes such a provision, you can advise the Authors Guild by emailing [email protected].
The Guild’s Model Trade Book Contract and Literary Translation Model Contract have both been updated to include the new clause. It prohibits the publisher or platform from using or sublicensing books under contract to train generative artificial intelligence technologies without express permission.
The clause is a response to recent concerns about publishers and platforms adding language to their terms that allows them to data mine books for use in training AI models that will inevitably compete with human-authored works. We recommend the inclusion of such clauses unless and until there are licensing solutions that allow authors to be paid for these uses. Even then, authors who do not want their works used in this manner should be able to retain the ability to say no.
The model clause is below:
No Generative AI Training Use. For avoidance of doubt, Author reserves the rights, and [Publisher/Platform] has no rights to, reproduce and/or otherwise use the Work in any manner for purposes of training artificial intelligence technologies to generate text, including without limitation, technologies that are capable of generating works in the same style or genre as the Work, unless [Publisher/Platform] obtains Author’s specific and express permission to do so. Nor does [Publisher/Platform] have the right to sublicense others to reproduce and/or otherwise use the Work in any manner for purposes of training artificial intelligence technologies to generate text without Author’s specific and express permission.
View the Model Trade Book Contract
View the Literary Translation Model Contract
About the Authors Guild
With more than 13,000 members, the Authors Guild is the nation’s oldest and largest professional organization for published writers. It advocates on behalf of working writers to protect free speech, freedom of expression and authors’ copyrights; fights for fair contracts and authors’ ability to earn a livable wage; and provides a welcoming community for writers and translators of fiction, nonfiction, poetry and journalism. Through its educational and charitable arm, the Authors Guild Foundation, it also offers free programming to teach working writers about the business of writing, as well as organizing events that highlight the importance of a rich, diverse American literary culture and the authors that contribute to it.
| 2023-03-01T00:00:00 |
https://authorsguild.org/news/model-clause-prohibiting-ai-training/
|
[
{
"date": "2023/03/01",
"position": 30,
"query": "AI labor union"
},
{
"date": "2023/03/01",
"position": 68,
"query": "AI journalism"
},
{
"date": "2023/03/01",
"position": 29,
"query": "AI labor union"
}
] |
|
VFX-IATSE 2022 Rates and Conditions Study Paints ...
|
VFX-IATSE 2022 Rates and Conditions Study Paints Industry in Crisis, Underscores Mandate to Unionize
|
https://iatse.net
|
[
"Iatse Communications"
] |
The International Alliance of Theatrical Stage Employees unveiled its first-ever official study regarding VFX workers' rates and conditions, Wednesday.
|
LOS ANGELES, CA — The International Alliance of Theatrical Stage Employees unveiled its first-ever official study regarding VFX workers’ rates and conditions, Wednesday. The findings paint a picture of an industry in crisis. Some key findings include:
An overwhelming majority of VFX workers feel that their work is not sustainable in the long term.
For VFX workers employed directly by film productions, only 12% have health insurance which carries over from job-to-job, and only 15% report any kind of employer contributions to a retirement fund.
On average, 70% of VFX workers report having worked uncompensated overtime hours for their employer.
75% of VFX workers reported being forced to work through legally mandated meal breaks and rest periods without compensation.
The majority of on-set VFX workers reported working in conditions they felt unsafe in. A further 75% of VFX workers employed by the major film studios had no access to any employer-provided training or educational resources.
Only about 1 in 10 VFX workers felt able to individually negotiate viable solutions to these challenges with their employer.
IATSE International President Matthew D. Loeb said, “These results are alarming, but not surprising. Despite record growth in visual effects as a sector of the motion picture and television production industry and insatiable demand for labor in this field, too many VFX workers are struggling to make ends meet, are working grueling and unrealistic hours, often without overtime pay, and broadly aren’t receiving the benefits and workplace protections they deserve. Benefits and protections their unionized co-workers, in IATSE, already have and depend on.
No one in showbusiness should be treated as a second-class citizen and denied a voice in their workplace simply because they have three magical letters before their job title. These findings underscore not just an urgent need for VFX workers to join together and organize to address longstanding sector-wide issues, but a mandate for IATSE to deploy its resources and support this campaign unwaveringly.”
Click here to view the 2022 IATSE VFX Rates and Conditions Survey
| 2023-03-01T00:00:00 |
2023/03/01
|
https://iatse.net/vfx-iatse-2022-rates-and-conditions-study-paints-industry-in-crisis-underscores-mandate-to-unionize/
|
[
{
"date": "2023/03/01",
"position": 58,
"query": "AI labor union"
},
{
"date": "2023/03/01",
"position": 56,
"query": "AI labor union"
}
] |
Solved 6. Contrasting labor union laws in two states
|
Solved 6. Contrasting labor union laws in two states
|
https://www.chegg.com
|
[] |
This AI-generated tip is based on Chegg's full solution. Sign up to see more! Consider the effect of a union-negotiated wage shift on the equilibrium of the ...
|
Transcribed image text
6. Contrasting labor union laws in two states
Consider two hypothetical states that operate under different laws governing labor unions. The following graph shows the labor market in a state in the West. Initially, the market-clearing wage in this state is $10.00 per hour. Now, suppose that the General Assembly in this western state passes a law that makes it easier for workers to join a union. Through collective bargaining, the union negotiates an hourly wage of $12.50. Use the graph input tool to help you answer the following questions. You will not be graded on any changes you make to this graph. Enter $12.50 into the box labeled Wage on the previous graph. Hint: Be sure to pay attention to the units used on the graph.
At the union wage, _____ union workers will be employed.
The following graph shows the labor market in a state in the East. Suppose the legislature in this state passes strong "right-to-work" laws that make it very difficult for unions to organize workers, so the wage is always equal to the market-clearing value. Assume that with the exception of this difference in legislation, the western and eastern states are extremely similar. The initial position of the graph corresponds to the initial labor market condition in the eastern state before the labor union negotiated the new, higher wage for workers in the western state. Suppose that after the wage goes up in the western state, some workers in the western state lose their jobs and decide to move to the eastern state. Adjust the graph to show what happens to employment and wages in the eastern state.
Which of the following groups are better off as a result of the union action in the western state? Check all that apply.
All workers in the western state
Workers in the western state employed at the union wage
Workers who find new jobs in the eastern state
Employers in the western state
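To make the mechanics of the first question concrete, here is a minimal illustrative sketch in Python. The linear supply and demand curves are assumptions invented for this sketch (the real numbers come from the exercise's own graph), chosen only so that the market clears at the stated $10.00 wage; the point it demonstrates is that a union wage set above equilibrium reduces employment to the quantity of labor demanded at that wage.

```python
# Illustrative sketch only: the actual figures come from the exercise's graph input
# tool, which is not reproduced here. These linear curves are assumptions.

def labor_demand(wage):
    # Assumed demand: firms hire fewer workers as the wage rises.
    return 100 - 4 * wage          # workers demanded

def labor_supply(wage):
    # Assumed supply: more workers offer labor as the wage rises.
    return 20 + 4 * wage           # workers supplied

market_wage = 10.00                # given market-clearing wage in the western state
union_wage = 12.50                 # union-negotiated wage

# At the market-clearing wage, quantity demanded equals quantity supplied.
assert labor_demand(market_wage) == labor_supply(market_wage) == 60

# At the union wage (a wage floor above equilibrium), employment is set by demand.
employed_at_union_wage = labor_demand(union_wage)                  # 50 workers
excess_supply = labor_supply(union_wage) - employed_at_union_wage  # 20 workers

print(f"Employment at ${union_wage:.2f}: {employed_at_union_wage} workers")
print(f"Excess supply of labor: {excess_supply} workers")

# If displaced western workers migrate east, the eastern labor supply curve shifts
# right, which under curves like these pushes the eastern market-clearing wage down
# and eastern employment up.
```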
| 2023-03-01T00:00:00 |
https://www.chegg.com/homework-help/questions-and-answers/6-contrasting-labor-union-laws-two-states-consider-two-hypothetical-states-operate-differe-q109399122
|
[
{
"date": "2023/03/01",
"position": 74,
"query": "AI labor union"
},
{
"date": "2023/03/01",
"position": 99,
"query": "AI labor union"
}
] |
|
Deliverables
|
Deliverables
|
https://edge-ai-tech.eu
|
[] |
Report on road-mapping for cross-industrial domains edge AI requirements and specification (T1. ... Union or the Chips Joint Undertaking. Neither the ...
|
WP1 Edge AI Technologies and Applications Requirements and Specifications
D1.1 Report on conceptual framework for edge AI requirements and specification (T1.1)
D1.2 Report on digital industry requirements and specification for edge AI (T1.2)
D1.3 Report on energy industry requirements and specification for edge AI (T1.3)
D1.4 Report on agri-food and beverage industry requirements and specification for edge AI (T1.4)
D1.5 Report on mobility industry requirements and specification for edge AI (T1.5)
D1.6 Report on digital society requirements and specification for edge AI (T1.6)
D1.7 Report on road-mapping for cross-industrial domains edge AI requirements and specification (T1.1)
WP2 HW/SW Edge AI Architecture and System Design
D2.1 Report on high-level architecture and design for AI methods and tools (T2.1)
D2.2 Report on architecture, design, and partitioning for AI embedded HW architectures at the edge (T2.2)
D2.3 Report on AI-based architectural concepts and designs for connectivity and efficient data encoding (T2.3)
D2.4 Report on design and architectural concepts for implementing trustworthiness principles at the edge (T2.4)
D2.5 Report on scalable and hybrid AI-based HW/SW co-design and architectures (T2.5)
D2.6 Report on AI-models and frameworks, applied to system and subsystem designs (T2.6)
WP3 AI-Based Circuits and Modules Design
D3.1 Initial report on methods and tools for HW development (T3.1)
D3.2 Initial report on AI edge – accelerators (T3.2)
D3.3 Initial report on energy-efficient processors (T3.3)
D3.4 Initial report on AI-based electronic system design (T3.4)
D3.5 Initial report on scalable and modular AI-based co-design based on multi core SoC and SoM solutions (T3.5)
D3.6 Initial report on AI-based hardware platforms for processing across edge continuum (T3.6)
D3.7 Report on the results for methods and tools for HW development (T3.1)
D3.8 Report on the results for AI edge – accelerators (T3.2)
D3.9 Report on the results for energy-efficient processors (T3.3)
D3.10 Report on the results for AI-based electronic system design (T3.4)
D3.11 Report on the results for scalable and modular AI-based co-design based on multi core SoC and SoM solutions (T3.5)
D3.12 Report on the results for AI-based hardware platforms for processing across edge continuum (T3.6)
WP4 Edge AI Framework Development
D4.1 Initial report on semi-/automated toolchains and operating system addressing multiple paradigms (T4.1)
D4.2 Initial report on middleware and engineering tools (T4.2)
D4.3 Initial report on AI-based algorithms and optimisation techniques for the edge (T4.3)
D4.4 Initial report on AI framework development (T4.4)
D4.5 Initial report on virtualization of AI models (T4.5)
D4.6 Initial report on AI-learning methods and techniques at the edge (T4.6)
D4.7 Report on the results for semi-/automated toolchains and operating system addressing multiple paradigms (T4.1)
D4.8 Report on the results for Middleware and engineering tools (T4.2)
D4.9 Report on the results for AI-based algorithms and optimisation techniques for the edge (T4.3)
D4.10 Report on the results for AI framework development (T4.4)
D4.11 Report on the results for virtualization of AI models (T4.5)
D4.12 Report on the results for AI-learning methods and techniques at the edge (T4.6)
WP5 Edge AI Technology Verification and Integration for Vertical Applications
D5.2 Report on verification and integration of edge AI HW/SW and enabling technologies in digital industry (T5.2)
D5.3 Report on edge AI HW/SW verification and integration of enabling technologies in energy industry (T5.3)
D5.4 Report on edge AI HW/SW verification and integration of enabling technologies in agri-food and beverage industry (T5.4)
D5.5 Report on edge AI HW/SW verification and integration of enabling technologies in mobility industry (T5.5)
D5.6 Report on edge AI HW/SW verification and integration of enabling technologies in digital society (T5.6)
D5.1 Report on strategies for edge AI HW/SW verification and integration of enabling technologies (T5.1)
D5.7 Report on strategies for cross-industrial verification and integration of edge AI HW/SW and enabling technologies (T5.1)
WP6 Benchmarking, Validation and Testing
D6.1 Report on defining methodologies for edge AI benchmarking, validation, and testing in the application domains (T6.1)
D6.2 Report on edge AI benchmarking, validation, and testing for selected demonstrators in Digital industry (T6.2)
D6.3 Report on selected demonstrators in energy industry for industrial edge AI benchmarking, validation, and testing (T6.3)
D6.4 Report on selected demonstrators in agri-food and beverage industry for industrial edge AI benchmarking, validation, and testing (T6.4)
D6.5 Report on selected demonstrators in mobility industry for industrial edge AI benchmarking, validation, and testing (T6.5)
D6.6 Report on selected demonstrators in digital society for industrial edge AI benchmarking, validation, and testing (T6.6)
D6.7 Report on methodologies for edge AI benchmarking, validation, and testing across industrial application domains (T6.1)
WP7 Ecosystem Orchestration Communication Dissemination, Exploitation, Standardisation
D7.3 Dissemination activities report RP1 (T7.2)
D7.4 Initial standardization plan (T7.4)
D7.5 Network building and AI KDT JU community engagement workshop (T7.1)
D7.6 Initial exploitation plan (T7.3)
D7.7 Dissemination activities report RP2 (T7.2)
D7.8 Report on European ecosystem building and initiatives clustering (T7.1)
D7.9 European Edge AI Conference (T7.1)
D7.10 Final dissemination activities report (T7.2)
D7.11 Final exploitation report (T7.3)
D7.12 Final standardization report (T7.4)
WP8 Coordination and Operational Management
D8.2 Risk Management Plan (T8.1)
D8.3 Initial Data Management Plan (T8.1)
D8.4 Intermediate Project Report 1 (T8.1, T8.2)
D8.5 Intermediate Project Report 2 (T8.1, T8.2)
D8.6 First Updated Data Management Plan (T8.1)
D8.7 Intermediate Project Report 3 (T8.1, T8.2)
| 2023-02-28T00:00:00 |
2023/02/28
|
https://edge-ai-tech.eu/deliverables/
|
[
{
"date": "2023/03/01",
"position": 79,
"query": "AI labor union"
}
] |
All podcasts - PSI
|
The global union federation of workers in public services
|
https://publicservices.international
|
[] |
Amnesty International (AI). Ashuganj Power Station Company Ltd. Sramik Karmachari Union (APSC SKU). Asian Development Bank (ADB). Asociación de Empleados ...
|
No results found
Seems there are no resources matching the current criteria.
| 2023-03-01T00:00:00 |
https://publicservices.international/tag/dominica-public-service-union/podcasts?search=%7B%22api_keys%22%3A%5B%22asa095f2-1525-477f-aw80-640a3e41a8de%22%5D,%22tags%22%3A%7B%22organisation%22%3A%5B931%5D,%22country%22%3A%5B%5D,%22region%22%3A%5B%5D,%22sector%22%3A%5B%5D,%22issue%22%3A%5B%5D%7D,%22types%22%3A%5B%5D,%22search_query%22%3A%22%22,%22clause_id%22%3Anull,%22offset%22%3A0,%22limit%22%3A10,%22order_by%22%3A%22start_datetime%22,%22order_dir%22%3A%22desc%22,%22lang%22%3Anull,%22start_datetime%22%3Anull,%22end_datetime%22%3Anull,%22in_progress%22%3Anull,%22key%22%3Anull%7D&id=931
|
[
{
"date": "2023/03/01",
"position": 93,
"query": "AI labor union"
}
] |
|
AI tools developed by white-collar workers now being ...
|
AI tools developed by white-collar workers now being turned against them
|
https://www.calcalistech.com
|
[] |
When white-collar workers dedicate their lives to building tools designed to monitor, control, sort, and decide people's lives, they shouldn't be surprised.
|
The Washington Post first reported last week what many had already been talking about in the high-tech sector - is it possible that Google used an algorithm to make decisions about who will be fired as part of its cutback process?
This concern arose through various channels among the employees after the technology giant announced the layoffs of 12,000 employees, and the decisions that were made were seen by some to be unusually quick, random at times, and made without any consultation with direct managers or department managers. Google was quick to respond to the concern and announced that "an algorithm was not involved" in the decision-making process, but the horse had already left the barn, and a full-blown anxious discussion began.
[Image: AI firing. Credit: Yonatan Popper]
Although Google hastened to distance itself from the idea that it fired thousands using a machine, almost all human resource managers of the largest companies in the United States did not. In a survey conducted in January among 300 of them and quoted in The Washington Post, 98% indicated that they would use an algorithm in the coming year to make these types of decisions.
The use of an algorithm gives HR managers the justification of a "hands-on" or "multiple data points" based decision, which seemingly allows for infinite objectivity. At the same time, it also enables significant savings in time and manpower, in what would otherwise be a complex process of employee evaluation and dialogue.
Of course, objectivity is just justification. Algorithms that integrate objectivity into decision-making processes, whether it is in the search and sorting of employees or in recommending a purchase or deciphering data, are notoriously full of biases. These biases occur from the stage of labeling the data, sorting it, entering the parameters for making the decisions and up to the human decision regarding when and where to deploy the technology. In fact, the biases that these tools produce are so numerous and hidden that regulators around the world constantly examine their deployment and use and even conduct a long dialogue on imposing a ban or limiting the use of these methods in sensitive processes such as employee selection, recommendations on the provision of social benefits or financial services.
But the problem is about much more than objectivity, and it is well reflected in the reaction of Google employees to the idea of being laid off by a machine - a response that was anything but laconic, and was full of fear. For them, what was already a complex event became even more difficult to stomach.
Their feeling is understandable, as is the public's fascination with the story that followed the article. But these reactions have an ahistorical component. This is not the first time that an algorithm has been entrusted with the responsibility of determining the course of employees' lives. For years, software has been dictating an inhuman pace of work in warehouses and factories. Algorithms have also long been deployed to set work quotas and measure outputs. There are algorithms that track the movement of drivers in vehicles, that monitor the number of times an employee went for a bathroom break and how long the break was. Cameras equipped with artificial intelligence have long been deployed to follow the movement of workers in factories, alerting managers if one worker has been talking too much with a co-worker in a suspicious manner so that they do not unionize.
Other software measures keystrokes and mouse clicks to rate the level of productivity of workers. Other tools identify lack of productivity and reduce wages accordingly. There are also systems that monitor the tone of the employees in call service centers in order to intervene in real time in the conversations as the managers see fit. Not to mention that there are factories where software keeps a logbook of targets and if they are being met, and after a set number of violations also immediately issues a dismissal letter.
Yes, years ago algorithms were already handed the sensitive, life-changing, anxiety-inducing responsibility of firing employees. Those were also diligent and skilled workers - only they were not based on luxurious campuses; they earned low wages in factories and warehouses, drove the cars and delivery trucks, and worked in giant retail chains and supermarkets.
Not only has this been happening for years; it was the employees of technology companies who conceived, built and sold these programs. So when white-collar workers dedicate their lives to building tools designed to monitor, control, sort and decide other people's lives, they should not be surprised when those same tools are turned against them. When programmers are unconcerned with the consequences of the code they write, with what product they are helping to build, or with how those products will affect the society and community they live in, they have little standing to resent those products' negative consequences.
When you work silently in a sector that produces tools designed to make workers easy to control, it should come as no surprise when workers are later treated as a commodity. And when we sanctify the activities of these companies, crowning their managers as prophets, saviors and people of the year, we contribute to the long march back to a time when workers had no voice, power or value beyond what their superiors assigned them. Without the high-tech companies, it is hard to imagine how small and large firms alike could realize their ambitions for profit at any price that no one sees, and complete the dehumanization of workers. Without the code, the algorithms and the machines, it is hard to imagine how the massive exploitation factories could have operated as elegantly and efficiently as they have for decades.
The driving force behind these companies and the products they chose to develop has always been one thing: capital. The aggressive effort to develop products meant to "optimize" processes, "supervise" employees, and save costs or time grew out of the competitive pressure on corporations to maximize profit. Once managers could no longer simply extend the working day further and further, as in the old days, they began looking for other ways to extract more value from employees. Over the last century there have been two main tools for this task: introducing new technologies that shorten production time, and increasing the pressure on workers so that they work harder in the same amount of time.
The end product, and indeed the goal, of these approaches is, as mentioned, maximizing profits, without any consideration for workers' standard of living and while weakening their position. This is automation and technological innovation that does not serve progress, is not intended for the common good or the improvement of life, and is not a positive force for the society and community in which it is deployed.
| 2023-03-01T00:00:00 |
2023/03/01
|
https://www.calcalistech.com/ctechnews/article/0zmr1b17o
|
[
{
"date": "2023/03/01",
"position": 48,
"query": "AI workers"
},
{
"date": "2023/03/01",
"position": 30,
"query": "artificial intelligence layoffs"
}
] |
Examining Singapore's AI Progress
|
Center for Security and Emerging Technology
|
https://cset.georgetown.edu
|
[] |
However, this may be difficult due to the lack of streamlined immigration pathways for foreign-born AI workers seeking to work in the United States. Learn ...
|
Executive Summary
Singapore has long held a regional leadership role in technology governance and has experienced rapid technological growth in Southeast Asia. The country has more recently made clear its ambitions to cement its status as a global artificial intelligence hub. In pursuit of these goals, Singapore has provided top-down support for AI research and development (R&D) processes, including the significant streamlining of patent timelines and the cultivation of AI talent. It has also fostered an ecosystem conducive to growing AI startups, supported by a heavy emphasis on research, publications, and ethical and human-centric AI frameworks.
By many measures, Singapore is succeeding. It was the first country in Southeast Asia to launch a national AI strategy. As a percentage of GDP, Singapore's government-supported AI R&D spending is 18 times larger than similar U.S. R&D spending. The city-state has also introduced a number of government initiatives on talent cultivation and innovation which have contributed to its AI success. Using CSET's Country Activity Tracker, we found that it was ranked 12th globally for the highest number of AI patent applications (661), 14th for patents granted (297), and 5th for percentage of patent growth between 2017 and 2020 (213.64 percent). Singapore's ecosystem of accelerators, incubators, and over 3,600 tech startups ranks it as one of the world's most developed startup environments, bolstered by strong foreign investments and international collaboration, including in the form of research partnerships. In terms of its AI research output citations, Singapore is 15th globally for number of research articles cited (877,650), with China and the United States being the top two collaborators on AI-related research.
In this paper, we explore Singapore’s AI strategy and progress and analyze published documents from the Singaporean government. Our findings highlight the following:
Singapore’s national AI strategy seeks to harness AI in key sectors. Active areas of research are predominantly focused on domestic improvements in the fields of healthcare, manufacturing, and cybersecurity. There is little indication that it is aiming to leverage AI for military purposes.
Singapore’s future AI progress will be challenged unless it addresses its tech talent shortage. With only 2,800 information and communications technology graduates in 2020 and an expected demand of sixty thousand through 2024, Singapore is falling far short. To begin addressing this issue, Singapore has implemented apprenticeship programs, fortified its youth tech talent pipeline, facilitated partnerships with tech companies internationally, and attempted to attract global talent. The results of these efforts remain to be seen.
Singapore’s regional leadership in other technical areas could serve as a template for AI leadership in the region. In addition to spearheading numerous Association of Southeast Asian Nations cybersecurity initiatives, Singapore has also built a physical center—the ASEAN-Singapore Cybersecurity Centre of Excellence—to house regional exchanges and dialogues around cybersecurity. This endeavor positions it to be a leader in other technology forums, particularly those centered around AI.
Singapore has expressed its commitment to human-centric and ethical employment of AI applications, and has begun taking steps to apply these ideals in practice. The city-state has a unique definition of “human-centricity” in AI that is directly tied to its approach to AI ethics. The government has also created the Model AI Governance Framework to implement the guiding principles from its ethics documents. In many ways these frameworks are similar to efforts within the United States, including the Blueprint for an AI Bill of Rights and the AI Ethics Framework for the Intelligence Community, both of which carry principles of human consideration and involvement in AI decision-making. There may be opportunities for further alignment between the two countries and with other allies and partners to further promote trust in AI developments and ensure their ethical use.
We close with the following recommendations:
Promote shared views on the ethical development and use of AI. Singapore’s emphasis on ethical AI aligns well with U.S. goals in this area, at least on paper. This point should bring comfort, but not complacency, to U.S. policymakers. There are opportunities for the two countries to collaborate on setting AI norms and regulatory measures in international bodies. The United States should continue to support Singapore’s leadership as many of ASEAN’s member states have started to gravitate toward Singapore for economic advancement and investment in cyber and AI.
Collaborate to address AI talent shortfalls in both countries. While Singapore has initiated a slew of programs aimed at addressing its talent development shortfall, the current talent gap is sizable and could benefit from additional U.S. support, such as partnerships and exchange programs. Creating these programs is especially important given that China already has tech talent exchange memoranda of understanding (MOU) and partnerships with Singapore, while the United States does not. However, this may be difficult due to the lack of streamlined immigration pathways for foreign-born AI workers seeking to work in the United States.
Learn from Singapore’s example. Singapore is an important test case for government investment in emerging technology, talent upskilling, and shaping norms on the ethical use of AI globally. Its AI strategy and resulting investments, in conjunction with ethical and regulatory frameworks, have played a pivotal role in fostering its AI ecosystem. Although the United States currently has no similar federal legislation regulating AI, its state and local governments have passed a number of legislative measures. Though the governance systems between the two countries are different, there are likely to be takeaways for the United States from the example of Singapore and its initiatives.
Collaborate on the development of national AI research resources for public use. Both the United States and Singapore have initiatives to provide greater AI resources in the form of datasets and computational capabilities to researchers and universities. A sharing of best practices from these initiatives offers the opportunity to uplift both countries’ efforts.
Singapore is highly linked with both the United States and China and continues to signal its desire to remain as a neutral AI hub, stay out of geopolitical disputes, and maintain strong technology partnerships with both countries. The city-state has explicitly expressed that the United States should focus primarily on trade rather than purely on geopolitics and countering China. As such, the United States should not treat Singapore as an instrumental player in its competition with China. Singapore has made it clear that it is wary of, and would resist, such intentions. Accordingly, a more effective partnership is one that enables AI progress to benefit both countries.
Instead of expecting Singapore to take sides, the United States should seek to leverage its partnership with Singapore to enable AI progress that will benefit both countries. The United States and Singapore already have a strong cybersecurity partnership through various MOUs and private sector collaboration, and this foundation can strengthen their growing linkages in AI investment, research, and endeavors in setting international norms. The United States should not grow complacent in this partnership, especially on emerging technologies like AI, nor can it expect Singapore to take sides geopolitically.
| 2023-03-01T00:00:00 |
https://cset.georgetown.edu/publication/examining-singapores-ai-progress/
|
[
{
"date": "2023/03/01",
"position": 77,
"query": "AI workers"
}
] |
|
moises-ai/maestro-worker-python
|
moises-ai/maestro-worker-python
|
https://github.com
|
[] |
This module scaffolds the creation of a standard inference worker to run on the Moises/Maestro infrastructure.
|
Maestro Worker Python
This module scaffolds the creation of a standard inference worker to run on the Moises/Maestro infrastructure.
Installation
To install the main branch:
pip install git+https://github.com/moises-ai/maestro-worker-python.git
To install a version (recommended):
pip install git+https://github.com/moises-ai/[email protected]
Maestro init
Run the init script to scaffold a maestro worker in the current directory.
To scaffold into a different directory, use the --folder flag
maestro-init
This will create a starter Maestro worker project, including:
A models folder to include your models
A docker-compose.yaml file
A Dockerfile
A requirements.txt file including this package
A worker.py file with a worker example
Testing your worker
Using maestro-cli :
Run the CLI, passing your worker file as the first parameter, followed by any parameters exposed by your class. In this example, input_1 will be sent to the worker with the value Hello.
maestro-cli ./worker.py --input_1=Hello
Using maestro-server :
Run the maestro server with the path to your worker. To see all options, use maestro-server --help
maestro-server --worker=./worker.py
Send a request to the server inference endpoint:
curl --request POST --url http://localhost:8000/inference --header 'Content-Type: application/json' --data '{"input_1": "Hello"}'
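The same request can also be issued programmatically. The snippet below is a minimal sketch using the Python requests library (an assumption of this example, not a dependency of the package); it assumes the maestro-server started above is listening on localhost:8000 and that the worker accepts an input_1 field, as in the curl call.

import requests

# Programmatic equivalent of the curl call above.
response = requests.post(
    "http://localhost:8000/inference",
    json={"input_1": "Hello"},
    timeout=60,
)
response.raise_for_status()
print(response.json())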
Upload/Download server for development purposes
In order to avoid using signed URLs for uploading and downloading files, you can use the maestro-upload-server command. This starts a server on the default port 9090 that uploads and downloads files in the local ./uploads folder.
Examples:
maestro-upload-server --port=9090
After the server is running, you can upload files to it:
curl http://localhost:9090/upload-file/your_file_name
Then retrieve it:
curl http://localhost:9090/get-file/your_file_name
You can clean the files using:
curl http://localhost:9090/clean
You can also list files using:
curl http://localhost:9090/list-files
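These development endpoints can also be called from Python. The sketch below uses the requests library and simply mirrors the curl calls above; the file name is a placeholder, and the upload step is omitted because its exact request format is not shown here.

import requests

BASE = "http://localhost:9090"

# List stored files, fetch one, then clean everything up.
print(requests.get(f"{BASE}/list-files", timeout=30).text)
data = requests.get(f"{BASE}/get-file/your_file_name", timeout=30).content
requests.get(f"{BASE}/clean", timeout=30)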
Worker Utils
Download a file from URL:
from maestro_worker_python.download_file import download_file

file_name = download_file("https://url_to_download_file")
Upload files to signed_url:
from maestro_worker_python.upload_files import upload_files, UploadFile

files_to_upload = []
files_to_upload.append(UploadFile(file_path="test_upload1.txt", file_type="text/plain", signed_url="https://httpbin.org/put"))
files_to_upload.append(UploadFile(file_path="test_upload2.txt", file_type="text/plain", signed_url="https://httpbin.org/put"))
upload_files(files_to_upload)
Convert media files:
from maestro_worker_python.convert_files import convert_files, FileToConvert

files_to_convert = []
files_to_convert.append(FileToConvert(input_file_path="input.mp3", output_file_path="output.wav", file_format="wav", max_duration=1200))
files_to_convert.append(FileToConvert(input_file_path="input.mp3", output_file_path="output.m4a", file_format="m4a", max_duration=1200))
convert_files(files_to_convert)
Get file duration in seconds
from maestro_worker_python.get_duration import get_duration

get_duration('./myfile.mp3')
Using Docker Compose
Build image
docker-compose build
Run the server
docker-compose run --service-ports worker
Developing this package
Install poetry
You can run it in development mode:
poetry install
poetry run maestro-init
If you get a keyring error (Ubuntu), you may need to run the following:
export PYTHON_KEYRING_BACKEND=keyring.backends.null.Keyring
To bump the package version:
poetry version (major | minor | patch)
Running tests:
| 2023-03-01T00:00:00 |
https://github.com/moises-ai/maestro-worker-python
|
[
{
"date": "2023/03/01",
"position": 97,
"query": "AI workers"
}
] |
|
Women's Work Is the Backbone of the U.S. Economy
|
Women’s Work Is the Backbone of the U.S. Economy
|
https://msmagazine.com
|
[
"Ai-Jen Poo",
"Sheree L. Williams",
"Eleanor Wesley",
".Wp-Block-Co-Authors-Plus-Coauthors.Is-Layout-Flow",
"Class",
"Wp-Block-Co-Authors-Plus",
"Display Inline",
".Wp-Block-Co-Authors-Plus-Avatar",
"Where Img",
"Height Auto Max-Width"
] |
Paid parental leave, higher wages for care workers, domestic workers' bill ... (video narrated by Reshma Saujani; op-ed by Ai-jen Poo). Thirty years ago ...
|
For 400+ years, we’ve been playing by their rules. (You know—the ones cisgender white men made.) Not any more. Women are the majority of voters in this country and together we have the power. It’s time for a new set of rules that values all of us—our bodies, our lives and our work. The Majority Rules project—an artful essay and op-ed series from Ms. and Supermajority Education Fund—marks an essential step toward achieving gender equality.
Majority Rule 3: Our work is valued.
(video narrated by Reshma Saujani; op-ed by Ai-jen Poo)
Thirty years ago when President Bill Clinton signed the Family and Medical Leave Act (FMLA) into law, the movement that was then demanding recognition for the needs of caregivers and parents in the workplace was growing and rising into the same movement that, in 2023, would demand to recognize the needs of domestic workers in their employers’ homes.
The work of care—whether provided by family members, or by professional care workers like nannies and home care workers—is no longer hidden “behind every great man…,” as the slogan of the feminist movement of the ’60s and ’70s claimed. Instead, care work is essential infrastructure discussed in Congress and addressed in Biden’s State of the Union speech last month.
It’s natural to look for milestones to measure progress, such as landmark legislation like the FMLA, and legislative efforts are an essential pillar to create standards and floors that set a minimum to uphold. But these milestones are more often markers of the movement that is driving progress forward. Because behind every landmark piece of legislation is a movement of tens of thousands of women who refuse to work unseen and without the recognition their contributions deserve, building power and organizing to ensure our value is seen and reflected in every way our society measures value.
And our movement is still growing.
It might seem obvious that the pandemic has been driving change, but the pandemic only accelerated what our movement was already achieving. The care economy, as it became known, was the backbone of our recovery, and the jobs most often held by women—teachers, caregivers, nurses, domestic workers—were, perhaps for the first time, seen as essential and skilled work.
But the work of our movement has so much further to go. Unequal pay has added further strain on families, especially the 41 percent of households where mothers are the breadwinners. Hospitals and schools have seen record staff shortages. An absence of home care workers has strained families seeking care for their loved ones. Domestic workers are still without a fair wage and a safe workplace. These burdens are heaviest for women of color, who continue to be the first to lose income and the last to receive support.
Paid parental leave, higher wages for care workers, a domestic workers’ bill of rights, pay transparency and equal pay for equal work are just a few of the solutions that are long overdue. These investments are not only common sense; they have broad public support and would strengthen the economic and social fabric of our country. More states are increasing their minimum wage laws, and the majority of people in our country support paid parental leave.
We don’t say, “Behind every great man is a great woman” anymore, because women aren’t standing behind men anymore; we are care workers and caregivers leading the way to making the investments in our shared care infrastructure that we all need.
To achieve a future where everyone can thrive, women need to be paid equally for our work, and the jobs primarily done by women—from teaching to caregiving—must truly be valued and supported. Reshma Saujani
Explore The Majority Rules series, a collaboration between Ms. and Supermajority:
| 2023-03-01T00:00:00 |
2023/03/01
|
https://msmagazine.com/2023/03/01/women-work-childcare-paid-leave/
|
[
{
"date": "2023/03/01",
"position": 51,
"query": "AI wages"
}
] |
German publisher Axel Springer says journalists could be ...
|
German publisher Axel Springer says journalists could be replaced by AI
|
https://www.theguardian.com
|
[
"Jonathan Yerushalmy"
] |
Journalists are at risk of being replaced by artificial intelligence systems like ChatGPT, the CEO of German media group Axel Springer has said.
|
Journalists are at risk of being replaced by artificial intelligence systems like ChatGPT, the CEO of German media group Axel Springer has said.
The announcement was made as the publisher sought to boost revenue at German newspapers Bild and Die Welt and transition to becoming a “purely digital media company”. It said job cuts lay ahead, because automation and AI were increasingly making many of the jobs that supported the production of their journalism redundant.
“Artificial intelligence has the potential to make independent journalism better than it ever was – or simply replace it,” CEO Mathias Doepfner said in an internal letter to employees.
AI tools like the popular ChatGPT promise a “revolution” in information, he said, and would soon be better at the “aggregation of information” than human journalists.
“Understanding this change is essential to a publishing house’s future viability,” said Doepfner. “Only those who create the best original content will survive.”
Axel Springer did not specify how many of its staff could be cut, but promised that no cuts would be made to the number of "reporters, authors, or specialist editors".
In his letter to staff, Doepfner said media outlets must focus on investigative journalism and original commentary, while divining the “true motives” behind events would remain a job for journalists.
Axel Springer is not the first news publisher to toy with the use of AI in its content creation. In January, BuzzFeed announced it planned to use artificial intelligence to “enhance” its content and online quizzes.
The publisher of the UK's Daily Mirror and Daily Express newspapers is also exploring the use of AI, setting up a working group to look at "the potential and limitations of machine-learning such as ChatGPT", the group's chief executive told the Financial Times.
Since its launch in November last year, ChatGPT has amassed more than 100 million users and accelerated a long-predicted reckoning over whether some jobs could be made redundant by artificial intelligence.
The programme can generate highly sophisticated texts from simple user prompts, producing anything from essays and job applications, to poems and works of fiction. ChatGPT is a large-language model, trained by uploading billions of words of everyday text from across the web into the system. It then draws on all this material to predict words and sentences in certain sequences.
However, the accuracy of its responses has been called into question. Australian academics have found examples of the system fabricating references to websites and citing fake quotes.
The use of AI in journalism has proved controversial as well.
Tech website CNET has reportedly been using an AI tool to generate articles that are later scanned by human editors for accuracy before publication. The website acknowledged in January that the program had some limitations, after a report from tech news site Futurism revealed more than half of the stories generated through AI tools had to be edited for errors.
In one example, CNET was forced to issue major corrections to an explainer article on compound interest that contained a number of simple errors.
Reuters contributed to this article
| 2023-03-01T00:00:00 |
2023/03/01
|
https://www.theguardian.com/technology/2023/mar/01/german-publisher-axel-springer-says-journalists-could-be-replaced-by-ai
|
[
{
"date": "2023/03/01",
"position": 1,
"query": "AI journalism"
}
] |
The owner of Insider and Politico tells journalists: AI is ...
|
The owner of Insider and Politico says AI is coming for journalists’ jobs
|
https://edition.cnn.com
|
[
"Anna Cooban"
] |
One of Europe's biggest media groups has warned journalists that artificial intelligence (AI) could steal their jobs, and has provided tips for how reporters ...
|
London CNN —
One of Europe’s biggest media groups has warned journalists that artificial intelligence (AI) could steal their jobs, and has provided tips for how reporters can avoid the chop.
The chief executive of Axel Springer — which owns Insider, Politico and German tabloid newspaper Bild — told employees in a memo Tuesday that “artificial Intelligence has the potential to make independent journalism better than it ever was — or simply replace it.”
In the memo, shared with CNN, Mathias Döpfner predicts that AI will soon be able to aggregate information much better than humans, and urges newsrooms to place a greater emphasis on commentary, exclusive news and investigations that can’t be done by machines.
Journalists would still be needed to understand people’s “true motives”, he said.
“In short, the creation of exclusive and attractive content remains irreplaceable and is going to become even more critical to success for publishers,” Döpfner wrote. “Only those who create the best original content will survive.”
AI race
Döpfner's warnings come three months after OpenAI opened up access to ChatGPT, an AI-powered chatbot. The bot is capable of providing lengthy, thoughtful responses to questions, and can write full essays, responses to job applications and journalistic articles. Its instant popularity has turbocharged a race among tech companies to bring their own AI products to market.
ChatGPT has become so popular that JPMorgan Chase (JPM), the biggest bank in the United States, has restricted its use among workers. Public schools in New York City have also banned the bot over concerns students could use it to cheat on their assignments.
Axel Springer is aiming to become a purely digital publisher, though turning its print titles — including Bild — into online-only publications will likely “take a few more years,” Döpfner said.
The executive also said there would be “significant” job cuts across its business “in the areas of production, layout, proofreading and administration,” without specifying how many. Staff in journalistic roles, such as reporters and specialist editors, are not currently the target of cuts, Döpfner said.
The company will try to avoid involuntary redundancies, he said, and will be “guided by a sense of fairness and humanity” when reducing its workforce.
— Samantha Murphy Kelly and Julia Horowitz contributed reporting.
| 2023-03-01T00:00:00 |
2023/03/01
|
https://edition.cnn.com/2023/03/01/media/axel-springer-ai-job-cuts
|
[
{
"date": "2023/03/01",
"position": 12,
"query": "AI journalism"
},
{
"date": "2023/03/01",
"position": 15,
"query": "artificial intelligence layoffs"
}
] |
Q&A: Fred Ritchin on AI and the threat to photojournalism ...
|
Q&A: Fred Ritchin on AI and the threat to photojournalism no one is talking about
|
https://www.cjr.org
|
[] |
In recent years, artificial intelligence engineers have used millions of real photographs—taken by journalists all over the world, and without those journalists ...
|
In recent years, artificial intelligence engineers have used millions of real photographs—taken by journalists all over the world, and without those journalists’ permission—to train new imaging software to create synthetic photojournalism. Now anyone can prompt AI software like OpenAI’s DALL-E to generate convincing images of people or places that never existed, and of events that never occurred. In turn, those synthetic images continue to update AI software training.
Fred Ritchin, former photo editor at the New York Times, has been charting technological threats to photojournalistic integrity for years. In 1982, National Geographic published a cover photo in which the editor had moved two of the Pyramids of Giza closer together, for layout reasons. Ritchin wrote about the choice at the time and again this month in an article for Vanity Fair, comparing that simple distortion to “virtual time travel—as if revisiting a scene and photographing it again.” Now, as artists and agencies start to file suit against AI image generators, Ritchin is asking why the media is not more concerned about the enormous risk he sees artificial intelligence as posing to the credibility of visual journalism.
This week, I spoke with Ritchin, who is also dean emeritus of the International Center of Photography School, a former professor of photography and imaging at New York University specializing in visual media and human rights, and the creator of Four Corners, a software that embeds templated information in journalists’ photographs to facilitate a photographer’s authorship and contextualization. He is the author of three books on the future of imaging. Our conversation has been edited for length and clarity.
AD: Artificial intelligence software like DALL-E is increasingly making it possible for anyone to create their own synthetic imitation of photojournalism. What does today’s comfort with braiding together news and entertainment mean in the face of AI imaging software?
FR: There is an enormous interest in ChatGPT, in writing essays with artificial intelligence, writing news articles, writing movie scripts, and so on. But there’s very little concern about synthetic AI images and what AI can do to replicate a realistic photo of events and people who never existed. Now you don’t even need a camera. You could just say The war in Ukraine should look like this, and make the image. We really somehow have overlooked this challenge to the photograph as witness. It’s interesting to me: my Vanity Fair piece, which came out ten days ago—as far as I can tell, not one publication linked to it in the world, whereas what I am describing in the essay is enormous. We’re very close to destroying the credibility of the photograph. That’s a major step.
The other thing to add to the puzzle is, if you start making millions of synthetic images, then the new AI will be training on those images as well. The concept of history will become more and more distorted, because they’ll be training on the images that are not made by cameras, but made according to the way people want to see the world. What happens if people have five million images of World War II according to the way they want the war to look, and they look like photographs, so that’s what the AI is going to be training on in the future?
How would you articulate the difference in the impact of photographs made by human beings as opposed to AI?
Photographs by themselves are made over a fraction of a second, so they need contextualization. The photographers have to be at the place they’re photographing. And they have to let the reader know some sense of what was going on, either by a photo essay—a group of photographs in sequence which build on each other and tell a story—or captions, text, audio, video, all the things you can now do in multimedia to contextualize the image.
When you do photojournalism, there are many people without power in society who you want to pay attention to. One of the founders of Doctors Without Borders said, “Without a photograph, there’s no massacre”. Because nobody believes the survivor—they’re too subjective in terms of recounting it. But the photograph itself is a credible document. Without a credible document, it’s like the tree falling in the forest, it will not make any noise. Making a synthetic image, you don’t have to be there. Hundreds of millions of images online have already been scanned, and you put in a text prompt, and you say, I want to see a waterfall in the Bronx. It could construct for you whatever you want, because you don’t have to experience it. You don’t have to be there.
And we can’t unsee things, right? So, with these synthetic images—made in the style of Getty photographers for example, created by AI that’s been educated with real Getty photographs—what is the impact of people seeing the images that aren’t real?
I think the impact is often that you assume the next one, even if it is an actual photograph, may not be. So you don’t have to respond to it.
I wrote in the Vanity Fair piece about how, in an exhibition in Israel, people who survived the Holocaust were asked about their traumatic memories as children. Now they’re making images according to those memories, we have these images that are getting more and more photorealistic, but not with cameras. If you’re a Nazi sympathizer, you could do the same thing and distort the historical record according to your own politics. The journalistic industry and others have to protect our archives, because once you start producing masses of synthetic imagery, you could skew history. You could show the South before the Civil War a whole different way.
Looking at this lack of concern around synthetic images, do you think we are already somewhat desensitized to photojournalism?
The paradox is we are in the age of image, but the image itself is really often a camouflage. It’s used in, often, very superficial and distorting ways. If you go look at iconic photographs that changed the world—Vietnam, the Civil Rights movement, the picture of the earth from outer space in 1968—you don’t see that anymore. It’s a radical change, it’s the elephant in the room. We pretend that photographs in the twenty-first century are the same as the twentieth, and they’re not at all, they don’t do the same things. In twentieth-century photojournalism, it was really that kind of visceral sense of, There’s a girl being napalmed in 1972 in Vietnam. She’s nine years old, her skin is burning. And whether you’re politically for the war or against the war, the girl still was being napalmed, it was a reference point.
But we don't have that equivalent today. The war in Ukraine, the war in Afghanistan—there's no imagery like Eddie Adams's photograph of the Viet Cong being executed, which would cause people to protest the war and carry big placards with the photograph on them. My sense is there's been no iconic imagery like that since the photographs of Alan Kurdi, the Syrian boy who drowned off the coast of Turkey in 2015. Before that, you'd probably go back to Abu Ghraib, the photographs made by the US military of the torture of prisoners in Iraq. In this millennium, those photographs barely exist, certainly not enough to provoke a rethinking of policy on a national or global level. There are videos like the killing of George Floyd that have managed to provoke that sense of outrage. Often, it's amateurs now who make the images that provoke discussion. The professionals, though excellent, rarely produce images that cause people to say, There was a twenty-year war in Afghanistan. What was it about? or provoke a large-scale societal discussion.
Our visual media has changed enormously. The sad thing is we’ve done almost nothing in the journalistic industry to preserve the credibility of the photograph. The argument about whether we should show photographs from Uvalde, Texas, of the schoolchildren who were massacred? Some people said, Oh, we have to show the photographs. Well, that’s wrong, in my opinion. Number one, because it would retraumatize the families. And number two, because it wouldn’t change any laws, in part because it’s very easy to reject photographs. It doesn’t have that visceral feeling that, in 1972, a nine-year-old girl suffering, burning from napalm had—it doesn’t do that at all. It’s just more images. If it’s pictures of the earthquake in Turkey, and it’s upsetting, well, now it’s easier to look away.
The reader has to understand, is this a manipulated image? Where is the borderline that divides those from an actual photograph? If you go into a bookstore, you ask for a nonfiction or fiction book, they know what you’re talking about. What’s a nonfiction photograph? What’s a fiction photograph? How much can a photograph be manipulated before it ceases to be photograph? We are not addressing those things. We’re not even interested in addressing those things.
I wonder why.
In 1839 or so, when photography began, the difference with the painter was that painting was synthetic, explicitly interpretive. It took a long period of time, it was often expensive. The camera itself had a mechanical side to it so that a human was authoring the image, but, at the same time, a camera was recording. It had a different weight in society than a painting did. At first painters said, Oh, photography is the death of painting. It wasn’t. In fact, it provoked a renaissance of painting with cubism, impressionism, minimalism, and so on, because painters could now use their imagination and not concentrate on representational painting.
Now, with synthetic images, the question is, what can it do differently than photography? For example, it can make synthetic images of the future. What do scientists say Manhattan will look like in twenty-five years if climate change continues and we don’t do anything about it? These images can be useful proactively. So instead of waiting for the disaster and photographing it, we actually do something about it in advance. We build walls or whatever needs to be done to protect Manhattan. There are things that artificial intelligence can add to the image equation, which can be very useful, and that’s the place we have to look.
Where do we revert to if the public becomes dismissive of photojournalism?
I don’t think you can ever revert. I think you have to move forward, and I think there are multiple ways of doing it. If you consider a photographer to be not just a camera carrier, but somebody who interprets events, people, landscapes according to their best knowledge, then you actually personalize it. They become authors just like a writer is an author. I would trust a writer who works for CJR perhaps more than just somebody with a blog, whom I may trust in different ways. The same thing with a photographer.
Then we have to start adding more contextualizing information—whether it’s video, audio, text. If you’re a professional making photographs, you have a greater responsibility to provide context than a citizen journalist or somebody working in social media. I do this with my class. I say, “You come from your country. Take fifteen images from social media that explain your country.” I remember a student of mine from China, and he had this photograph of middle aged people holding hands in the middle of the street in the afternoon. And I said, “What’s it about?” He said, “Well, their kids are taking the exam to get into university in the building next door, and they’re stopping traffic so it’ll be quiet for their children to take the exams.” And I said, “I learned more about your culture just from what you’re saying than I do in most newspapers, because you’re curating from the inside. You’re interpreting it for us.” The people who know what’s going on in the different cultures have to become the curators. Otherwise nobody’s going to go through the mass of images that are available on social media and elsewhere.
I also think publications have to be very clear on their own code of ethics. In the old days with black-and-white film, for example, you could change the contrast of an image to a certain extent. You could crop it, you could get rid of dust spots, but that was all okay. I like to think of photographs as quotations from appearances, so you could treat it the way a writer treats a quotation. You cannot redo the words within quote marks. You know, that’s not a quotation anymore. We have to come up with new standards, so the reader knows immediately: this is a photograph, or this is a synthetic image, or something else.
This is an enormous burden on the reader, the viewer, to fact check the photographs, until the industry sorts out how to regulate it.
The burden on the reader has to be as light as possible because nobody’s going to do all that research. People may have ten minutes a day to get the news. They have to go to work, make dinner, take care of the kids. They need a credible source.
You wrote about asking DALL-E for the “iconic photograph that is so horrible it would cause wars to stop.” The software produced an image of a woman looking through a camera, a small girl next to her shielding her eyes from the view. The camera itself is distorted as though by the scene they face. And then “a photograph of the greatest mothers in the world,” in which the AI gave you an image of an ape-like creature holding its baby. I assume you’ve played around with it more than just those two questions?
There are many such surprises. When I asked for the most alarming image of climate change, I got a diptych showing an oil geyser and a clock on one side—presumably showing how little time we have left to solve what is going on—and melting ice, a polar bear and a full sun on the other. I thought it was complex and nuanced, and produced in only seconds. There are many cases of AI being racist and misogynist and idiotic, but there are some cases where it’s really quite interesting. I’ve been working with philosophers, jazz musicians, painters, asking for imagery to be made by the AI from their point of view. If Plato would photograph, or Einstein, or a quantum physicist, what would the photographs look like?
The AI itself often does that as well. People are absolutely correct in their criticism of AI being trained on datasets which are often really awful. But simultaneously, just like when you assign a photographer, you have certain expectations and often get the stereotype back because nobody wants to upset the editor. The problem is we’re using AI to replicate previous media, in this case photography, and the potential for distortion is enormous.
How can photojournalists protect themselves and their livelihoods right now?
I don’t think it’s up to them alone. What they need to be doing is thinking of themselves as authors, as opposed to, you know, “shooters,” which is a word that I really detest. Often they’re just being asked to illustrate somebody else’s point of view. And I think that has to stop. They have to be respected by photo editors, text editors, editors in chief, as having their own integrity, their own autonomy, and their own ability to interpret situations.
In other words, we have to go back to the photo essay; really have an informed point of view, not just the person making the single image or get stuck in a sideshow in which you front load it for the most exciting pictures, for clickbait. The photo essay, to me, is the highest form of visual journalism, in which a photographer actually is allowed to have a point of view with complexity, nuance, depth. We need to reinstitute that as much as possible in a multimedia format, which would allow the potential for audio, for video, for text, for multiple languages, for the input of the people being depicted. Stuff that you may not have been able to do before.
And build media literacy. You have to be explaining to readers what different kinds of images are synthetic, manipulated photographs for illustrations, etc. You need transparency: We don’t modify images once they’re made, except for contrast or except for cropping, or whatever it would be. But make the explanations simple, don’t make it forty pages. Make it very evident, so the reader knows, and you know it. Keep working day by day to restore trust. Photographs are increasingly going to fade away and diminish unless we act quickly. And I think we’re going to be much the worse for not having them.
| 2023-03-01T00:00:00 |
https://www.cjr.org/the_media_today/fred_ritchin_ai_photojournalism.php
|
[
{
"date": "2023/03/01",
"position": 21,
"query": "AI journalism"
}
] |
|
Hiring Bias Gone Wrong: Amazon Recruiting Case Study
|
Hiring Bias Gone Wrong: Amazon Recruiting Case Study
|
https://www.cangrade.com
|
[
"Research Scientist"
] |
This approach is the basis for many machine learning problems in the hiring industry, so it seemed like a standard protocol. Not quite. Amazon should have ...
|
Even though we fear artificial intelligence (AI) is sentient, it is far from it.
A dystopian sci-fi universe in which computers rise up is quite removed from our present reality. AI is only as smart as the humans who program it, though it is quite efficient in its analytical processing efforts.
This may seem hard to believe as we use AI daily:
In our professional writing: Grammarly
While texting: smartphone sentence completion
When driving: parking assistance
AI is less obvious in these respects. Consider the spam folder in your email inbox. How does Gmail know what is junk and what is not? AI.
Amazon’s Biased Artificial Intelligence
It only makes sense that AI’s reach should extend further. And Amazon acted upon this in 2015. The basis of their technology was simple and seemingly practical.
It started with the question: “How do you identify high-fit candidates?” and their answer was, “You look at your existing thriving employees.”
This approach is the basis for many machine learning problems in the hiring industry, so it seemed like a standard protocol. Not quite. Amazon should have assessed whether they were falling prey to hiring trends in Silicon Valley (then dubbed "Brotopia" by Emily Chang) and tech at large.
In 2015, Amazon was amongst the tech titans whose workforce was disproportionately high in male employees.
What happens when you feed a biased dataset to an algorithm? Scaled bias, mostly. The data they used (resumes of current employees) inadvertently suggested that male candidates were the better picks, instilling hiring bias in their talent acquisition process.
This pipeline of bad data input resulting in bad data output is commonly referred to as Garbage In, Garbage Out.
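The mechanics are easy to reproduce. The sketch below is a deliberately tiny, hypothetical illustration (not Amazon's actual system), written in Python with scikit-learn: a text classifier trained on a résumé set where past hires skew male ends up treating the token "women" as a negative signal, even though gender is never mentioned in the code.

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Toy, deliberately skewed "historical hiring" data: résumés containing
# "women's" were rarely labelled as hires, so the model learns that token
# as a negative signal. Purely illustrative.
resumes = [
    "captain of chess club, python developer",
    "women's rugby team captain, python developer",
    "robotics club lead, java developer",
    "women's college graduate, java developer",
    "hackathon winner, c++ developer",
    "women's coding society member, c++ developer",
]
hired = [1, 0, 1, 0, 1, 0]  # biased historical outcomes

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(resumes)
model = LogisticRegression().fit(X, hired)

# The learned weight for "women" comes out strongly negative: garbage in, garbage out.
weight = model.coef_[0][vectorizer.vocabulary_["women"]]
print(f"learned weight for the token 'women': {weight:.2f}")

Nothing in the code refers to gender explicitly; the skew is inherited entirely from the biased labels, which is exactly the failure mode described above.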
Can Artificial Intelligence Be Trusted?
In practice, this means that Amazon’s shiny new recruiting tool (read: biased AI) penalized resumes that mentioned “Women” or “Women’s.” It biased their hiring process.
Thus, a person on the “Women’s Rugby team” or who went to a “Women’s College” was penalized.
It was more pronounced if the person had various affiliations with organizations or universities that included the word "Women's" in the name. Consequently, male candidates disproportionately benefited from the AI's flawed training set.
Does that mean AI is biased and can’t be trusted? I mean, Amazon couldn’t get it right. Surely this is a lost cause.
Not at all. This was almost 7 years ago. And a lot happens in 7 years, especially in a fast-paced field like AI.
Bias-free AI is a possibility. Cangrade holds patent 11429859 for our innovative process of mitigating and removing bias from AI.
Cangrade’s Bias-Free Artificial Intelligence
Our AI is not only ethical, but it is also recently ADA-compliant. And while most organizations cover the current list of EEOC-protected groups, we also protect against adverse impacts for two more groups: marital status and whether applicants or candidates have children.
Hiring bias and discrimination are rampant and aren’t necessarily intentional. There are countless measures we can take to mitigate them. Consider adopting responsible AI to build a more diverse and stronger workforce. When AI’s power is unchecked, it can scale hiring bias, disproportionately affecting minority groups.
However, when ethical AI is designed, it can elevate voices that aren’t typically heard. Cangrade offers a patented and science-backed ethical AI solution.
Contact us today for your demo.
| 2022-12-21T00:00:00 |
2022/12/21
|
https://www.cangrade.com/blog/hr-strategy/hiring-bias-gone-wrong-amazon-recruiting-case-study/
|
[
{
"date": "2023/03/01",
"position": 50,
"query": "artificial intelligence hiring"
},
{
"date": "2023/03/01",
"position": 48,
"query": "artificial intelligence hiring"
}
] |
[Hiring] AI Artist @Contra
|
[Hiring] AI Artist @Contra
|
https://remotive.com
|
[] |
Mar 02, 2023 - Contra is hiring a remote AI Artist. Salary: $2k . Location: Worldwide.
|
Contra is a professional network for the independent digital workforce. Digital professionals use Contra to create stunning visual profiles, earn money commission-free, and build high-quality professional communities and referral networks. Contra was started by a group of independents who were tired of existing professional networks that are blind to the needs of digital professionals. We’re creating a new way to work for the independent digital workforce.
| 2023-03-01T00:00:00 |
https://remotive.com/remote-jobs/all-others/ai-artist-1608203
|
[
{
"date": "2023/03/01",
"position": 95,
"query": "artificial intelligence hiring"
}
] |
|
AI (Artificial Intelligence) in Web Design | Los Angeles Web ...
|
AI (Artificial Intelligence) in Web Design
|
https://www.ladesignstudio.com
|
[] |
AI has the potential to revolutionize web design by providing personalized experiences, automating content creation, enhancing website security, ...
|
How to use AI in Web Design
In today’s digital age, a strong online presence is crucial for any business to thrive. A company’s website is often the first point of contact with potential customers, and a well-designed website can make a significant impact on a user’s perception of the company. With the advent of Artificial Intelligence (AI), web design companies can now leverage this technology to create even more effective and personalized websites. In this article, we will explore how web design companies are using AI to enhance their services and provide more value to their clients.
Personalized user experiences
AI allows web designers to create personalized user experiences by analyzing a user's behavior on the website. By tracking how users interact with a website, AI algorithms can provide insights into the user's preferences, interests, and behavior patterns. This information can then be used to tailor the user experience, such as presenting relevant content, offering personalized recommendations, and even adapting the website layout and design to suit the user's preferences.
For example, an AI-powered website can automatically adjust the font size and color scheme based on a user’s browsing history. If a user frequently visits websites with dark color schemes, the website can adjust its color scheme to match the user’s preference. This level of personalization can greatly enhance the user experience and increase engagement on the website.
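As a toy illustration of that idea (a hand-rolled Python heuristic with made-up URLs, rather than any particular vendor's AI), a site could look at a visitor's recent browsing history and pick a theme by majority vote:

from collections import Counter

def pick_theme(visited_sites):
    # Majority vote over the colour schemes of recently visited sites.
    # A real system would use a trained preference model and many more signals.
    votes = Counter(site["theme"] for site in visited_sites)
    return votes.most_common(1)[0][0] if votes else "light"

history = [
    {"url": "https://example.com/a", "theme": "dark"},
    {"url": "https://example.com/b", "theme": "dark"},
    {"url": "https://example.com/c", "theme": "light"},
]
print(pick_theme(history))  # prints "dark"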
Automated content creation
AI-powered content creation tools can analyze data and create unique, high-quality content for websites. These tools can generate product descriptions, blog posts, and even entire websites based on a set of parameters defined by the web design company.
For example, an AI-powered tool can analyze a company’s brand guidelines, target audience, and marketing goals to create a website that is tailored to the company’s needs. This can save web designers a significant amount of time and effort while ensuring that the content is high-quality and effective.
Intelligent chatbots
Chatbots are becoming increasingly popular on websites as they can provide instant support and assistance to users. AI-powered chatbots can provide a more intelligent and personalized experience by analyzing the user's behavior and providing tailored responses.
For example, a chatbot on a website selling shoes can analyze a user’s browsing history and recommend shoes that match their style and preferences. The chatbot can also provide instant answers to frequently asked questions, reducing the workload on customer support teams.
Enhanced website security
AI can also be used to enhance website security by detecting and preventing cyber attacks. AI algorithms can analyze website traffic and detect suspicious behavior, such as unusual login attempts or unusual patterns of data access. This can help prevent data breaches and protect sensitive user information.

Automated website testing
Testing a website for bugs and errors can be a time-consuming process. AI-powered testing tools can automate this process, making it faster and more efficient. These tools can analyze a website's code and identify potential issues, such as broken links, slow page load times, and compatibility issues with different browsers and devices.
For example, an AI-powered testing tool can automatically test a website’s responsiveness across different screen sizes and resolutions. This can help ensure that the website looks and functions correctly on all devices, including smartphones, tablets, and desktops.
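As a small, concrete example of the kind of check such tools automate, the sketch below crawls a single page and reports broken links, using Python's standard-library HTML parser plus the requests library. The target URL and timeouts are illustrative assumptions, and real testing platforms layer far more analysis on top.

import requests
from html.parser import HTMLParser
from urllib.parse import urljoin

class LinkCollector(HTMLParser):
    # Collect href targets from anchor tags on a single page.
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            href = dict(attrs).get("href")
            if href:
                self.links.append(href)

def check_links(page_url):
    # Fetch the page, extract its links, and flag any that return an error status.
    page = requests.get(page_url, timeout=10)
    parser = LinkCollector()
    parser.feed(page.text)
    for href in parser.links:
        url = urljoin(page_url, href)
        if not url.startswith(("http://", "https://")):
            continue  # skip mailto:, tel:, fragments, etc.
        try:
            status = requests.head(url, timeout=10, allow_redirects=True).status_code
        except requests.RequestException:
            status = None
        if status is None or status >= 400:
            print(f"broken link: {url} (status {status})")

check_links("https://example.com")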
AI has the potential to revolutionize web design by providing personalized experiences, automating content creation, enhancing website security, and improving website testing. Web design companies that leverage AI can provide more value to their clients by creating more effective and engaging websites. As AI technology continues to evolve, we can expect to see even more innovative use cases for AI in web design in the future.
Get in touch!
| 2023-03-01T00:00:00 |
2023/03/01
|
https://www.ladesignstudio.com/ai-artificial-intelligence-in-web-design/
|
[
{
"date": "2023/03/01",
"position": 80,
"query": "artificial intelligence graphic design"
}
] |
What are the so-called 'jobs' that AI will create? : r/singularity
|
The heart of the internet
|
https://www.reddit.com
|
[] |
AGI could create new job opportunities in areas such as AI systems development, AI ethics and governance, human-AI interaction design, creative and artistic ...
|
We are told that AI/AGI will create entirely new jobs for the people it displaces. What are some of these jobs?
According to ChatGPT:
AGI could create new job opportunities in areas such as AI systems development, AI ethics and governance, human-AI interaction design, creative and artistic industries, scientific research, and education and training.
Aside from working directly in the AI sector, I really have trouble imagining any of these fields expanding when any given individual will be able to achieve so much more than they can today. The media alludes to the idea of "you can't possibly imagine what jobs are coming" but maybe that's because it can't be imagined because it is literally nothing.
| 2023-03-01T00:00:00 |
https://www.reddit.com/r/singularity/comments/125vn3k/what_are_the_socalled_jobs_that_ai_will_create/
|
[
{
"date": "2023/03/01",
"position": 4,
"query": "artificial intelligence employment"
},
{
"date": "2023/03/01",
"position": 4,
"query": "AI employment"
},
{
"date": "2023/03/01",
"position": 43,
"query": "artificial intelligence workers"
}
] |
|
Blog: The Rise of AI - A Threat to Global Employment?
|
Blog: The Rise of AI - A Threat to Global Employment?
|
https://www.linkedin.com
|
[] |
We explore how AI could potentially reshape employment markets globally, from countries like the United States to developing nations like India.
|
The world is rapidly changing. With the advent of Artificial Intelligence (AI), many tasks that were once done by humans can now be completed faster, cheaper, and with greater accuracy by automated machines. This raises questions about the future of work: will AI take over jobs around the world? The implications are not only for employment opportunities but also for our freedom as individuals.
We explore how AI could potentially reshape employment markets globally, from countries like the United States to developing nations like India. We look at whether automation is likely to replace human labour in certain industries and consider what steps governments should take to mitigate any potential negative effects on workers. Finally, we examine ways in which people may be able to adapt and benefit from a more AI-driven economy. A future where humanity's own power lies both in its ingenuity and its ability to recognize opportunity amidst disruption.
Definition of Artificial Intelligence (AI)
Artificial Intelligence, or AI for short, is a rapidly evolving field of technology that focuses on creating intelligent machines capable of performing tasks that would normally require human intelligence. It has become an increasingly important part of our lives, with applications ranging from the mundane to the profound.
At its core, Artificial Intelligence (AI) is defined as any machine that can perceive and analyze data in order to make decisions autonomously. From self-driving cars to robots in manufacturing plants ‚ AI systems are now being used in many different industries around the world.
The potential of Artificial Intelligence (AI) is enormous, but so are its risks if left unchecked. If we want to truly harness its power, it's essential that we understand both its strengths and weaknesses. We must ensure that AI remains ethical, secure, and responsible - ensuring that no one person or group gains too much control over decision-making processes without proper checks and balances in place. As such, governments need to consider carefully how best to regulate this emerging technology before it becomes ubiquitous across all aspects of society, including job markets worldwide.
What Industries Are Affected by AI?
The impact of Artificial Intelligence (AI) on the global economy is undeniable. According to a recent report from McKinsey, AI could contribute up to $13 trillion in value for businesses by 2030. Since 2000, robots and automation systems have slowly phased out many manufacturing jobs: 1.7 million of them. On the flip side, it is predicted that AI will create 97 million new jobs by 2025 (builtin, 2022). This massive potential has led many industries across the world to start utilizing AI technology in order to stay competitive and improve their products or services. Here are some major sectors that have adopted AI:
1. Healthcare - AI can be used to diagnose diseases more quickly and accurately than humans. These algorithms analyze patient records and medical images at a speed humans simply can't match. The potential applications go far beyond diagnosis as well: AI could also be used to identify treatments that have not worked well for a patient – for instance, prescribing drugs based on individual biochemistry or suggesting alternative treatment options when existing methods fail.
2. Automotive - Autonomous vehicles are becoming increasingly common due to advances in machine learning and computer vision technologies. They promise improved safety, convenience, and energy efficiency over traditional cars. Tesla’s alien tech cars come to mind here.
3. Retail - AI-powered chatbots are being used by online retailers to provide customer service 24/7 without having human employees work around the clock. Additionally, facial recognition systems have been implemented in stores for security purposes as well as for product recommendations based on previous purchases. Amazon's cashierless retail stores let customers simply walk out with their items and be billed automatically; the products they take are detected using a combination of AI tools.
These examples demonstrate just how far-reaching the effects of AI can be on different industries all over the world. While it may lead to the displacement of certain jobs, there is no denying that these technological developments will bring great advancements in quality, speed, cost savings and much more across various fields. As such, its use should be embraced if organizations want to remain competitive within an ever-evolving economic landscape. Looking beyond this current wave of innovation will surely be fascinating as we witness firsthand how AI continues to shape our lives for better or worse.
Pros And Cons
Artificial Intelligence (AI) has the potential to take over jobs around the world, making it a profoundly disruptive force. This technology has both advantages and disadvantages that must be considered when it comes to its impact on employment opportunities.
Due to AI's ability to automate processes, productivity could be significantly increased in many industries where humans are currently employed. For example, robots have already replaced human workers at industrial production facilities due to their ability to work faster and more accurately than people.
However, there are also several drawbacks associated with replacing human workers with machines, such as job losses and reduced human interaction in certain fields. The automation of certain processes can reduce or even eliminate the need for certain skill sets, resulting in fewer employment opportunities for individuals who, because of their age or lack of opportunity, cannot easily retrain to adapt to this automation. Similarly, AI may not always provide creative solutions the way a human would, which means that any task requiring creativity will still require manual labour rather than automated systems. Or is this the case? AI-generated art from tools like Midjourney and DALL·E 2 has already started putting artists across the globe out of jobs and gigs.
Lastly, while AI offers great potential benefits if implemented correctly, there is still the risk of unforeseen consequences should an automated system malfunction in critical settings such as healthcare or aviation safety. With the introduction of consumer-facing AI tools like ChatGPT and OpenAI's GPT-3, there are real questions about the ethical and moral approach to their use. Classrooms across the world have seen students using them to write essays and exams, while coders have used them to complete tasks faster, sometimes at the cost of quality.
These are just a few of the pros and cons related to using Artificial Intelligence in place of humans for various tasks but ultimately leave us wondering what the longer-term impacts might be on global economies given all these considerations...
Impact on the Global Economy
Moving forward, the impact of artificial intelligence (AI) taking over jobs around the world has significant implications for the global economy. It means fewer job opportunities and greater inequality between those who benefit from the new technologies and those who lack access to them or the skills to use them.
A sharply reduced need for a labour force could lead to large-scale layoffs across multiple sectors such as manufacturing, transportation, retail and hospitality. This could create major disruptions in economies depending on how quickly governments are able to adjust their policies in response to these changes.
On top of that, AI might also cause income disparities among different social classes due to its unequal distribution among population groups. For example, if only higher-income earners have access to advanced robots or automated systems, then they would reap the benefits while lower-wage workers remain vulnerable and unable to compete with these technologies. This could further widen existing gaps within societies, creating even more economic instability.
These issues all require careful consideration when evaluating the potential impacts of AI on the global economy moving forward as it continues to spread throughout various sectors worldwide. The challenge lies in developing strategies that promote innovation without leaving anyone behind so everyone can benefit from advancements made through technological advances such as robotics and machine learning algorithms. With thoughtful planning and collaboration between business leaders, policymakers, education professionals, and other stakeholders, we can work towards ensuring healthy economic development amid rapid technological change driven by modern-day Artificial Intelligence solutions.
New Opportunities Created by AI
The possibilities for Artificial Intelligence (AI) are limitless, and the opportunities it creates for individuals, businesses, and society are undeniable. In this section, we will explore how AI can create new jobs that wouldn't have existed without its implementation.
Firstly, AI has allowed for more efficient processes in existing fields such as healthcare or finance, which means that tasks can be done faster and at less cost to companies. This opens job opportunities to those who may not have had access before due to a lack of resources or knowledge. For example, AI-based medical diagnosis systems allow doctors to diagnose diseases much more quickly than they could manually; this creates jobs for data scientists who specialize in creating these systems and allows patients to receive treatment faster.
Secondly, AI also makes certain industries safer by automating dangerous processes like manufacturing or mining. Not only does this reduce the risk of accidents, but it also adds new roles within these fields, such as maintenance technicians who work on machines powered by AI algorithms. These types of positions provide people with an opportunity to learn about cutting-edge technology while still providing valuable services within their chosen industry.
As we move further into the age of artificial intelligence, it becomes increasingly important for us all - whether employers or employees - to understand what opportunities exist so we can make informed decisions about our future career paths. Preparing ourselves now will ensure that when the time comes, we are equipped with the skills needed to succeed in any role created by AI technologies.
Preparing for the Future with AI
The first step in preparing for the future of work with AI is understanding what role AI can play in tasks traditionally completed by humans. In many cases, AI can automate mundane tasks or provide insights into areas where human intuition may be lacking. For instance, machine learning algorithms can identify patterns in data sets which allow businesses to make better decisions. That said, those decisions will always require human intervention and intuition (at least for now). This is seen in tools like Aera, a decision-intelligence platform used by large FMCG companies to make supply chain decisions based on predictive models.
The second step is recognizing the need for individuals to acquire new skills related to technology, such as coding and data analysis if they want to remain competitive in an ever-evolving job market. Companies should also consider investing in training programs designed to equip current employees with the knowledge required to transition from traditional jobs into those more closely aligned with emerging technologies such as robotics, automation, and AI applications. Doing so could help ensure that workers remain relevant even when faced with increasing competition from automated solutions.
Finally, companies must take proactive steps toward creating ethical frameworks that govern how decisions made by AI-enabled systems affect both employees and customers alike. As this technology grows increasingly sophisticated over time, it is essential that these standards are established early on so that everyone involved feels confident about its use going forward. With this foundation laid out properly, all parties can benefit from embracing a world powered by intelligent machines without worrying about any unintended consequences along the way. Transitioning smoothly into this brave new world requires careful planning and foresight, two things every organization should strive for before turning towards artificial intelligence solutions for their business needs.
Skills to Cultivate in the Age of AI
a) Basic mathematics.
b) Strong verbal and written communication.
c) Creativity.
d) People management.
e) Emotional intelligence.
f) Critical thinking and problem-solving.
Conclusion: The Future of Work with AI
AI has the potential to revolutionize work as we know it. It can automate mundane tasks, allowing people to have more time and energy for creative pursuits and activities that are not easily accomplished by machines. At the same time, there is no version of this future in which AI does not take people's jobs in large numbers; I would have to be a fool to think otherwise.
| 2023-03-01T00:00:00 |
https://www.linkedin.com/pulse/rise-ai-threat-global-employment-shahazeen-shaheer
|
[
{
"date": "2023/03/01",
"position": 5,
"query": "artificial intelligence employment"
}
] |
|
I want to get into AI but have no idea how.
|
The heart of the internet
|
https://www.reddit.com
|
[] |
Some AI training jobs requires little to no experience. Your functionality would be mainly to collect data, curate them, and label them and occasionally to ...
|
Hi, I'm an undergrad international relations major. I've always had an interest in technology but wanted to be an overseas reporter for some time now.
ChatGPT and GPT-4 have changed my mind. I truly think that AI is bigger than the advent of the internet and want to try my chances in the AI field, or at least be able to understand it and use it to a certain level.
However, I don't know how to write a single line of code. I've thought of taking online courses in programming, but after seeing how GPT-4 can generate pages of code in seconds, I'm not sure if this is the right approach.
How should a total beginner get started toward an AI related career? What are some of the better sources for learning?
| 2023-03-01T00:00:00 |
https://www.reddit.com/r/ArtificialInteligence/comments/11wgx6u/i_want_to_get_into_ai_but_have_no_idea_how/
|
[
{
"date": "2023/03/01",
"position": 8,
"query": "artificial intelligence employment"
}
] |
|
Is Artificial Intelligence (AI) Killing Jobs?
|
Is Artificial Intelligence (AI) Killing Jobs?
|
https://www.linkedin.com
|
[
"Randstad Uk",
"Geet Purwar",
"Solution Architect Consultant",
"Adc Consulting"
] |
While there are some fears that AI may lead to job loss in certain sectors, research shows that AI is more likely to change our roles than replace them ...
|
The base..
As technology advances at an unprecedented rate, there's no denying that artificial intelligence (AI) is rapidly changing the job market. While AI has the potential to revolutionize industries and make our lives easier in countless ways, many are concerned about the impact it will have on employment.
At first glance, it might seem like AI is killing jobs outright. After all, AI can replicate or even outperform human workers in certain tasks, leaving many people unemployed. However, this isn't always the case. In fact, AI may actually be creating new opportunities for people who are currently unemployed.
Is AI really killing our jobs? In this blog post, we'll explore this question and take a closer look at how AI is transforming the workforce as we know it. So buckle up and join us on this technological journey!
Why do we need AI?
AI is quickly becoming one of the most important tools in our economy and society. Here are three reasons why we need AI:
1. To remain competitive in the global economy.
Other countries are investing heavily in AI, and if we want to remain competitive, we need to do the same.
2. To automate repetitive and dangerous tasks.
There are many jobs that are repetitive and/or dangerous, such as factory work or mining. By automating these tasks with AI, we can improve worker safety and productivity.
3. To help us make better decisions.
With its ability to analyze large data sets, AI can help us make better decisions by identifying patterns and trends that human beings might miss.
Demand and Supply: The New Era with New Job Requirements
In the past, employers demanded workers who could perform specific tasks. Today, with the advent of artificial intelligence (AI), employers are increasingly demanding workers who can adapt to new circumstances and learn new skills. This shift has profound implications for the future of work.
As AI increasingly automates routine tasks, jobs that once required little education or training are disappearing. In their place are jobs that require higher levels of education and training, as well as more complex problem-solving and team-working skills.
In the past, when a new technology came along, it might have rendered some jobs obsolete. But it also created new opportunities and demand for new skills. The same is true of Artificial Intelligence (AI). While AI is automating some tasks and thus making some jobs redundant, it is also creating new opportunities for those who are willing to learn new skills.
For instance, consider the role of customer service representative. In the past, this was a human-powered role that required little more than a pleasant personality and the ability to handle basic inquiries. But as AI tools have become more sophisticated, they have taken on an increasingly larger share of customer service interactions. This has led to a corresponding decrease in the number of customer service jobs.
But rather than resulting in mass unemployment, this shift has simply changed the nature of the customer service role. Now, rather than being focused on handling routine inquiries, customer service representatives are increasingly being tasked with managing complex issues that require empathy and interpersonal skills. In other words, AI is killing the old customer service job – but creating a new and better one in its place.
So while it’s true that AI is causing some job loss, it’s also important to remember that it’s creating new opportunities for those who are willing to adapt their skillsets. Rather than bemoaning the loss of jobs to AI, we should embrace the changes it’s bringing about and focus on preparing ourselves for the new economy that’s coming our way.
This shift is already underway. In the United States, for example, middle-skill jobs – which once made up 60 percent of all jobs – have declined sharply over the past three decades. At the same time, the share of high-skill jobs has increased from 20 percent to 30 percent, while the share of low-skill jobs has remained relatively stable.
The trend is likely to continue in the years ahead. As AI gets better at performing routine tasks, more and more middle-skill jobs will disappear. To stay employed in the new era of AI, workers will need to acquire new skills and adapt to changing circumstances.
Science and evolution
The rapid pace of technological change is often credited with driving economic growth, but it also has the potential to disrupt labour markets and create new challenges for policy-makers. One of the most significant recent changes has been the increasing automation of tasks that were previously carried out by human workers. This trend is sometimes known as "the automation of intelligence" or "the rise of the robots".
There is no single answer to the question of whether or not increased automation will lead to widespread job losses. Some argue that history shows us that technology has always created more jobs than it has destroyed, and that we should not be worried about the impact of automation on employment. Others point to the fact that many jobs are already being replaced by machines, and warn that this trend is likely to continue as artificial intelligence (AI) and other forms of automation become more advanced.
It is hard to predict the future with certainty, but one thing seems clear: the world of work is changing, and we need to adapt to these changes if we want to ensure that everyone can benefit from them.
What will happen to our jobs?
The advancement of artificial intelligence (AI) is inevitable. It has the potential to transform entire industries, and many believe that it will eventually lead to the obsolescence of human labor. This is a valid concern, but it's important to remember that AI will not simply eliminate all jobs overnight. Rather, it will change the way we work by automating certain tasks and enabling us to do our jobs more efficiently.
There is no doubt that AI will have a major impact on the workforce. In some cases, AI will take on low-skilled jobs such as data entry or customer service. In other cases, AI will augment human workers by providing them with real-time feedback and recommendations. As AI becomes more advanced, we will see more and more jobs being automated or transformed by this technology.
That said, it's important to remember that AI will also create new opportunities for employment. For example, businesses will need people to develop and manage AI applications. Additionally, there will be a need for people who can interpret data and insights generated by AI systems. As such, while AI may kill some jobs, it will also create new and exciting opportunities for others.
Conclusion
The rise of Artificial Intelligence and its potential to automate jobs has become a pressing issue, with many concerned that it could have damaging effects on the labor market. While there are some fears that AI may lead to job loss in certain sectors, research shows that AI is more likely to change our roles than replace them altogether.
| 2023-03-01T00:00:00 |
https://www.linkedin.com/pulse/artificial-intelligence-ai-killing-jobs-nitish-kumar
|
[
{
"date": "2023/03/01",
"position": 9,
"query": "artificial intelligence employment"
},
{
"date": "2023/03/01",
"position": 25,
"query": "AI unemployment rate"
},
{
"date": "2023/03/01",
"position": 13,
"query": "AI job creation vs elimination"
}
] |
|
The use of AI in legal systems: determining independent ...
|
The use of AI in legal systems: determining independent contractor vs. employee status
|
https://link.springer.com
|
[
"Cohen",
"Maxime C.",
"Maxime.Cohen Mcgill.Ca",
"Desautels Faculty Of Management",
"Mcgill University",
"Montreal",
"Dahan",
"Samuel.Dahan Queensu.Ca",
"Faculty Of Law",
"Queen S University"
] |
by MC Cohen · 2023 · Cited by 86 — This paper investigates the use of AI in a critical issue in employment law, the determination of a worker's status—employee vs. independent contractor ...
|
The use of artificial intelligence (AI) to aid legal decision making has become prominent. This paper investigates the use of AI in a critical issue in employment law, the determination of a worker’s status—employee vs. independent contractor—in two common law countries (the U.S. and Canada). This legal question has been a contentious labor issue insofar as independent contractors are not eligible for the same benefits as employees. It has become an important societal issue due to the ubiquity of the gig economy and the recent disruptions in employment arrangements. To address this problem, we collected, annotated, and structured the data for all Canadian and Californian court cases related to this legal question between 2002 and 2021, resulting in 538 Canadian cases and 217 U.S. cases. In contrast to legal literature focusing on complex and correlated characteristics of the employment relationship, our statistical analyses of the data show very strong correlations between the worker’s status and a small subset of quantifiable characteristics of the employment relationship. In fact, despite the variety of situations in the case law, we show that simple, off-the-shelf AI models classify the cases with an out-of-sample accuracy of more than 90%. Interestingly, the analysis of misclassified cases reveals consistent misclassification patterns by most algorithms. Legal analyses of these cases led us to identify how equity is ensured by judges in ambiguous situations. Finally, our findings have practical implications for access to legal advice and justice. We deployed our AI model via the open-access platform, https://MyOpenCourt.org/, to help users answer employment legal questions. This platform has already assisted many Canadian users, and we hope it will help democratize access to legal advice to large crowds.
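As a rough illustration of the kind of "simple, off-the-shelf AI models" the abstract refers to, the sketch below trains a logistic regression classifier on a handful of quantifiable characteristics of the employment relationship. The feature names and the CSV file are hypothetical placeholders, not the authors' actual dataset or code.

```python
# Hypothetical sketch of an off-the-shelf worker-status classifier; the feature
# names and data file are illustrative, not the paper's actual variables.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

cases = pd.read_csv("annotated_cases.csv")           # one row per annotated court case
features = ["sets_own_hours", "supplies_own_tools",   # quantifiable characteristics of the relationship
            "paid_per_task", "works_for_one_client"]
X, y = cases[features], cases["is_employee"]          # 1 = employee, 0 = independent contractor

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("Out-of-sample accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```

Misclassified test cases could then be pulled out and reviewed by hand, mirroring the paper's legal analysis of consistent misclassification patterns.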
| 2023-03-01T00:00:00 |
https://link.springer.com/article/10.1007/s10506-023-09353-y
|
[
{
"date": "2023/03/01",
"position": 11,
"query": "artificial intelligence employment"
},
{
"date": "2023/03/01",
"position": 44,
"query": "artificial intelligence workers"
}
] |
|
How Automation and AI are Changing the Workplace? A ...
|
How Automation and AI are Changing the Workplace? A Futuristic Viewpoint and Research Directions
|
https://medium.com
|
[
"Brij Gupta"
] |
The use of automation and AI in the workplace has the potential to increase efficiency, reduce costs, and improve the quality of work.
|
How Automation and AI are Changing the Workplace? A Futuristic Viewpoint and Research Directions, by Brij Gupta (Mar 27, 2023)
As technology advances and automation becomes more prevalent, the future of work is undergoing a significant transformation. With the rise of artificial intelligence (AI) and machine learning, businesses are finding new and innovative ways to increase efficiency, productivity, and profitability [1–5]. However, these changes also present new challenges for employees and require organizations to adapt to the changing nature of work. In this blog post, we will explore the impact of automation and AI on the future of work and the implications for businesses and employees.
Introduction to Automation and AI in the Workplace
Automation and AI refer to the use of technology to perform tasks and processes that humans previously did. These technologies can range from simple robots that perform repetitive tasks to complex systems that use machine learning algorithms to make decisions and predictions. The use of automation and AI in the workplace has the potential to increase efficiency, reduce costs, and improve the quality of work [6–10].
Increased Efficiency and Productivity
One of the most significant benefits of automation and AI in the workplace is increased efficiency and productivity. Automating repetitive and time-consuming tasks allows employees to focus on more complex and high-value work. Similarly, AI-powered tools can analyze large amounts of data and provide insights and recommendations that would be difficult or impossible for humans to do manually [11–16].
Changing Job Roles and Skillsets
The increased use of automation and AI in the workplace is also changing the nature of job roles and skillsets required for employment. As more routine tasks become automated, the demand for workers with skills in areas such as data analysis, software development, and machine learning is increasing. This shift in skill requirements creates new opportunities for workers and makes it essential for businesses to invest in employee training and development.
Potential Job Losses and Reskilling
While automation and AI have the potential to create new job opportunities, they also have the potential to lead to job losses in some industries. Jobs that are repetitive and routine are most at risk of automation, and workers in these roles may need to reskill or find new employment opportunities. It is essential for businesses and governments to support these workers through training and education programs to ensure a smooth transition to new roles and industries.
Ethical Considerations
As automation and AI become more prevalent in the workplace, there are also ethical considerations to take into account. For example, the use of AI-powered decision-making systems can lead to biases and discrimination, which could have significant implications for businesses and employees. It is essential for organizations to consider the ethical implications of automation and AI and implement policies and procedures to ensure fair and ethical use.
Conclusion
The future of work is undergoing a significant transformation as automation and AI become more prevalent in the workplace. While these technologies have the potential to increase efficiency and productivity, they also present new challenges for businesses and employees. It is essential for organizations to adapt to these changes and invest in employee training and development to ensure a smooth transition to new job roles and skillsets. As we continue to navigate the changing nature of work, it is essential to consider the ethical implications of automation and AI and ensure fair and ethical use.
References
Originally published at Insights2Techinfo
| 2023-03-27T00:00:00 |
2023/03/27
|
https://medium.com/@gupta.brij/how-automation-and-ai-are-changing-the-workplace-a-futuristic-viewpoint-and-research-directions-6b427cc4f953
|
[
{
"date": "2023/03/01",
"position": 23,
"query": "artificial intelligence employment"
},
{
"date": "2023/03/01",
"position": 4,
"query": "reskilling AI automation"
},
{
"date": "2023/03/01",
"position": 9,
"query": "machine learning workforce"
},
{
"date": "2023/03/01",
"position": 26,
"query": "artificial intelligence workers"
}
] |
The impact of artificial intelligence on the nature ...
|
EconStor: The impact of artificial intelligence on the nature and quality of jobs
|
https://www.econstor.eu
|
[
"Nurski",
"Hoffmann",
"Authors"
] |
by L Nurski · 2022 · Cited by 13 — Artificial intelligence (AI), like any workplace technology, changes the division of labour in an organisation and the resulting design of jobs.
|
Abstract:
Artificial intelligence (AI), like any workplace technology, changes the division of labour in an organisation and the resulting design of jobs. When used as an automation technology, AI changes the bundle of tasks that make up an occupation. In this case, implications for job quality depend on the (re)composition of those tasks. When AI automates management tasks, known as algorithmic management, the consequences extend into workers' control over their work, with impacts on their autonomy, skill use and workload. We identify four use cases of algorithmic management that impact the design and quality of jobs: algorithmic work-method instructions; algorithmic scheduling of shifts and tasks; algorithmic surveillance, evaluation and discipline; and algorithmic coordination across tasks. Reviewing the existing empirical evidence on automation and algorithmic management shows significant impact on job quality across a wide range of jobs and employment settings. While each AI use case has its own particular effects on job demands and resources, the effects tend to be more negative for the more prescriptive (as opposed to supportive) use cases. These changes in job design demonstrably affect the social and physical environment of work and put pressure on contractual employment conditions as well. As technology development is a product of power in organisations, it replicates existing power dynamics in society. Consequently, disadvantaged groups suffer more of the negative consequences of AI, risking further job-quality polarisation across socioeconomic groups. Meaningful worker participation in the adoption of workplace AI is critical to mitigate the potentially negative effects of AI adoption on workers, and can help achieve fair and transparent AI systems with human oversight. Policymakers should strengthen the role of social partners in the adoption of AI technology to protect workers' bargaining power.
| 2022-07-14T00:00:00 |
2022/07/14
|
https://www.econstor.eu/handle/10419/270468
|
[
{
"date": "2023/03/01",
"position": 34,
"query": "artificial intelligence employment"
},
{
"date": "2023/03/01",
"position": 36,
"query": "artificial intelligence workers"
}
] |
The Association between Artificial Intelligence Awareness ...
|
The Association between Artificial Intelligence Awareness and Employee Depression: The Mediating Role of Emotional Exhaustion and the Moderating Role of Perceived Organizational Support
|
https://pmc.ncbi.nlm.nih.gov
|
[
"Guanglu Xu",
"School Of Business",
"Nanjing University Of Information Science",
"Technology",
"Nanjing",
"Ming Xue",
"School Of Business Administration",
"Shanghai Lixin University Of Accounting",
"Finance",
"Shanghai"
] |
by G Xu · 2023 · Cited by 57 — First, AI awareness has a positive impact on employees' psychological state, such as improving employees' internal work motivation [8]. Second, AI awareness has ...
|
The combination of artificial intelligence (AI) technology with the real economy has dramatically improved the efficiency of enterprises. However, the replacement of human labor by AI also significantly impacts employees’ cognition and psychological state. Based on the Conservation of Resources Theory, the relationship between AI awareness and employee depression is explored in this article while examining the mediating role of emotional exhaustion, as well as the moderating role of perceived organizational support. Based on a sample of 321 respondents, the empirical results show that (1) AI awareness is significantly positively correlated with depression; (2) emotional exhaustion plays a mediating role between AI awareness and depression; (3) perceived organizational support negatively moderates the relationship between emotional exhaustion and depression; (4) perceived organizational support negatively moderates the mediating role of emotional exhaustion between AI awareness and depression. The research conclusions provide a theoretical basis for organizations to take measures to mitigate the negative impact of changes in AI technology on employees’ mental health.
Based on the above analysis, a moderated mediation model will be built in this article to analyze the relationship between AI awareness and employee depression, including the mediating effect of emotional exhaustion between AI awareness and employee depression. In addition, we also examined the moderating effect of perceived organizational support on the relationship between emotional exhaustion and employee depression, as well as the indirect relationship between AI awareness and employee depression via emotional exhaustion. The theoretical framework of this article is shown in Figure 1 . This research is helpful for deeply understanding the influencing ways and boundary conditions of AI awareness on employee depression, helping enterprises formulate related policies to prevent employee depression and promote AI reform smoothly.
AI awareness is positively related to individual emotional exhaustion, which is viewed as the process of the wearing out and wearing down of a person’s energetic resources [ 27 ] and is positively related to depression. However, perceived organizational support, as a psychological resource, would give employees a positive emotional experience [ 28 ], which can compensate for the loss of resources in daily work and alleviate the negative impact of the excessive consumption of resources [ 29 , 30 ]. Therefore, perceived organizational support may reduce the correlation between emotional exhaustion and employee depression. Furthermore, perceived organizational support may reduce the indirect correlation between AI awareness and employee depression via emotional exhaustion.
According to Brougham and Haar, AI technology threatens individuals’ total career growth, making it difficult to achieve their career goals. Furthermore, the application of AI technology in an organization will force employees to face the risk of being replaced by AI devices and being undervalued and not regarded highly by their employer, therefore, lowering their organizational status [ 7 ]. According to the Conservation of Resources (COR) Theory, people actively acquire, maintain, or protect valuable resources recognized by them. Career development and organizational status are important resources for individuals [ 22 ]. When employees realize that AI technology threatens their career development or organizational status, these resources face a considerable risk of depletion, which often leads to the emotional exhaustion of individuals [ 23 ], thus increasing the risk of depression in employees [ 24 , 25 , 26 ]. Therefore, emotional exhaustion may be a key mediating mechanism between AI awareness and employee depression.
Lazarus theorized that stressors can affect employees’ psychological state and behavior depending on their cognitive appraisal [ 17 ]. When stressors are appraised as challenges, individuals will take more positive measures to deal with them, promoting their well-being. However, if stressors are appraised as threats, individuals will take more negative measures to deal with them, harming their well-being [ 18 ]. As an important stressor, AI awareness reflects employees’ threat appraisal of AI technology, which may be an important factor affecting their mental health. From the current research, it can be seen that there are still few studies on the relationship between AI awareness and employee mental health. Only Brougham and Haar discussed the correlation between AI awareness and employee depression [ 7 ]. However, there is no research on the mediating and moderating mechanisms of AI awareness and employee depression. Depression is an important indicator of mental health, leading to social and occupational dysfunctions [ 19 ], bringing significant psychological pain to individuals while seriously endangering their interpersonal relationships, social functions, and quality of life [ 20 ]. At the same time, employee depression substantially threatens the normal functioning of an organization through low productivity, employee absenteeism, and poor morale [ 21 ]. As a result, the purpose of this article is to discuss the mediating and moderating mechanisms between AI awareness and employee depression. The research significance of this article lies in the following: firstly, it helps to reveal the ways and boundary conditions of the influence of AI awareness on employee depression; second, the research conclusion provides a theoretical foundation for organizations to develop policies to prevent employee depression that may arise during AI implementation.
Artificial intelligence (AI) technology has rapidly advanced in recent years and has been widely implemented in various sectors, allowing enormous value to be generated through businesses [ 1 ]. However, while the development of AI has given a strong impetus to economic growth and improved the efficiency of economic development, it has also significantly impacted the labor market [ 2 ]. Scholars predicted that AI would replace 47% of jobs in the United States in the coming decades [ 3 ]. Nearly 55% of front-line manufacturing jobs in China are easily replaceable [ 4 ]. In reality, manufacturing enterprises have begun to replace labor with AI machines. As early as 2016, the Foxconn Kunshan Factory replaced 60,000 workers with a batch of AI machines [ 5 ]. As some positions will be replaced, AI technology will have a significant impact on employees’ career development, making it an inescapable stressor in contemporary workplaces [ 6 ]. In response to this problem, some scholars have called for attention to be paid towards employees’ cognition and coping behavior with AI technology [ 7 ]. Brougham and Haar put forward AI awareness to describe the extent to which an employee views the likelihood of AI technology impacting their future career prospects [ 7 ]. After this concept was put forward, many scholars studied the influence of AI awareness on employees’ psychological state and behavior. It mainly includes four aspects. First, AI awareness has a positive impact on employees’ psychological state, such as improving employees’ internal work motivation [ 8 ]. Second, AI awareness has a negative impact on employees’ psychological state, such as enhancing employees’ job insecurity [ 9 , 10 ], causing employee job burnout [ 11 ] and depression [ 7 ], reducing the career competency of employees [ 11 ], and negatively affecting employees’ organizational identity and career satisfaction [ 7 , 11 ]. Third, AI awareness has a positive impact on employee behavior, such as improving employees’ innovative behavior [ 8 , 12 , 13 ], promoting employees’ career exploration behavior [ 14 ], positively affecting employees’ work engagement [ 15 ], and encouraging active learning and task crafting [ 13 ]. Fourth, AI awareness has a negative impact on employee behavior, for example, increasing employees’ cynicism and turnover intention [ 1 , 6 ] and promoting employees’ knowledge hiding [ 16 ].
Based on Hypothesis 4 and Hypothesis 5, it can be speculated that the higher the perceived organizational support is, the lower the mediating relationship between AI awareness and depression will be through emotional exhaustion. Therefore, Hypothesis 6 is that perceived organizational support will negatively moderate the mediating role of emotional exhaustion between AI awareness and depression.
According to the above analysis, it can be speculated that organizational support, as a social support from organizations, can alleviate the depression caused by emotional exhaustion. The higher the perceived organizational support is, the lower the impact of emotional exhaustion will be on depression. Therefore, we put forward Hypothesis 5: perceived organizational support will negatively moderate the associations between emotional exhaustion and depression.
The buffering model of social support also holds that a high degree of social support can buffer the impact of stress on depression [ 59 ], while low social support will increase individual susceptibility to depression [ 60 ]. Emotional exhaustion is a typical stress response [ 61 ]. Therefore, social support can reduce the negative impact of emotional exhaustion on depression [ 62 , 63 ]. The mechanism may be that social support can improve individual self-esteem, thus buffering the effect of emotional exhaustion on individual depression [ 53 , 64 ].
Perceived organizational support refers to employees’ general belief that their organizations value their contributions and care for their well-being based on their perception of how organizations reward their work performance and meet their social as well as emotional needs [ 56 ]. Perceived organizational support can be regarded as a psychological resource leading to positive emotional experiences because employees feel the support, understanding, and affirmation of their abilities from their colleagues as well as leaders [ 28 ]. Emotional exhaustion was viewed as the process of the wearing out and wearing down of a person’s energetic resource [ 27 ]. According to COR Theory, psychological resources can compensate for the loss of resources in daily work and alleviate the negative impact of over-consumption [ 29 , 30 ]. Therefore, perceived organizational support can reduce the influence of emotional exhaustion on depression. Studies have also shown that people with higher perceived organizational support experience fewer physical and psychological problems, such as depression and anxiety [ 57 , 58 ].
Emotional exhaustion is conceptualized as representing a deficit in resources, and resource deficits lead to individual psychological problems [ 49 , 50 ]. According to COR Theory, individuals can improve their well-being by preserving and acquiring essential resources. If the key resources are lost, individuals will have destructive emotions, such as tension, stress, and anxiety [ 50 ], which are important risk factors for depression [ 51 ]. Many studies have shown that emotional exhaustion is highly correlated with depression. It is also found in studies on Chinese students that emotional exhaustion is positively related to depression [ 52 , 53 ]. Depressive disorders are also found to be positively predicted among teachers in primary and secondary schools as well as nurses with emotional exhaustion [ 54 , 55 ]. Therefore, we put forward hypothesis 3: emotional exhaustion will be positively associated with employee depression. Combining Hypothesis 2 with Hypothesis 3, we can put forward Hypothesis 4: emotional exhaustion will mediate the relationship between AI awareness and depression.
Emotional exhaustion refers to emotional overwork and exhaustion caused by work, which is manifested as physical fatigue as well as psychological and emotional drain [ 41 ]. COR Theory holds that individuals tend to acquire, maintain, and preserve resources, who will be under pressure when faced with the threat of resource loss, actual loss, and failure posed to their acquisition of resources after investing in them [ 42 , 43 ]. Based on the above analysis, the introduction of AI threatens individual career development and makes individuals face a considerable risk of resource loss, becoming an important stressor. As a stressor, the influence of AI on the individual psychological state and behavior depends on their cognitive evaluation. When some individuals evaluate AI technology as a challenge, they are more willing to have a positive response, and if they evaluate AI technology as a threat, they will have a negative response [ 44 ]. AI awareness means that individuals think that AI technology threatens their career development. Therefore, the stronger the AI awareness is, the greater the possibility of a negative response will be [ 7 ]. When adopting a negative coping style, individuals will want to avoid AI technology, thus reducing their motivation [ 45 ]. In this case, the gap between individual work skills and knowledge and the requirements of the new posts after AI changes will become larger, so people’s jobs will be more threatened. As a conditional resource, a job provides compensation and a sense of self-esteem [ 43 ]. If the existence of jobs is threatened, the resources can only be made to face the risk of further losses. When individuals think that they are facing a loss of resources and cannot replenish them in time, it will lead to emotional exhaustion [ 22 , 46 ]. Additionally, after the introduction of AI, when realizing that their jobs are threatened, individuals usually have fear and anxiety about their future [ 47 ]. These negative emotions further reduce an individual’s ability to save or obtain resources to cope with this situation, leading to more stress, strain, and emotional exhaustion [ 48 ]. Therefore, we put forward hypothesis 2: AI awareness will be positively associated with employees’ emotional exhaustion.
Therefore, the greater the threat that AI technology poses to an individual’s career development, the stronger their AI awareness is, the greater the risk of resource loss is, and the higher their risk of depression will be.
The COR Theory holds that the loss of resources will put great pressure on individuals [ 29 ]. Stress was a risk factor for depression [ 39 ]. Hobfoll et al. posited that resource loss is an important cause of various negative consequences, including depression [ 40 ]. Brown and Andrews reported that about 90% of the depression cases they studied were related to losses, except for those that might be a personality disorder [ 22 ].
The introduction of AI technology will threaten individual resources, mainly regarding the following aspects. First of all, it may replace some occupations, making it difficult for individuals to achieve their career goals [ 7 ], reducing their conditional resources. Secondly, the wage gap for routine/unconventional tasks has widened within organizations [ 35 ], leading to a relative decrease in some individuals’ income, which refers to energy resources. Thirdly, the use of a large number of AI machines will change the required skill structure of workers. Workers’ existing skill set is no longer fit for new job requirements, so they need to invest time and energy to learn new skills, which also means a loss of original personal resources. Fourthly, after the application of AI, it may lead to the inconsistency between employees’ knowledge and skills needed for new tasks and their own knowledge and skills, which will have a negative impact on employee’s self-concept–job fit and reduce individuals’ conceptions of the self, such as self-esteem [ 36 ]. Self-esteem belongs to personal resources [ 29 ]. Therefore, the application of AI will lead to the loss of personal resources. Fifthly, the application of AI makes some workers’ positions in enterprises precarious, which may lead to a perception that employers ignore them and lower their organizational status [ 7 ], meaning a loss of condition resources. Sixthly, the replacement of positions by AI applications will cause employees to worry about losing their jobs, generating job-stress-related presenteeism, which siphons off cognitive energy that could otherwise be used to focus on their work [ 37 , 38 ]. In this case, employees’ energy resources also face the possibility of loss.
According to the COR Theory, there are four kinds of resources: object resources (such as houses and cars), condition resources (such as marriage, status, and employment), personal resources (such as critical skills and personal traits), and energy resources (such as credit, knowledge, and money) [ 27 ].
The main features of depression include sadness, guilt or low self-worth, fatigue and poor concentration, loss of pleasure or interests, and poor sleep or appetite [ 31 ]. Regarding the risk factors of depression, several researchers have found that losing social and material resources is positively associated with depression [ 32 , 33 , 34 ].
Control Variables: To avoid other variables interfering with the relationships among the core variables in this article, we took individual demographic variables as control variables, such as gender, age, education level, and occupation. According to the characteristics of occupations, we divided the sample occupations into physical occupations (production workers), administrative service occupations (administration/logistics staff, clerical/office staff), marketing occupations (salespersons, PR people, customer service), professional occupations (human resources management, finance/auditing, teachers), professionals (such as accountants, lawyers, architects, healthcare workers, journalists), technology research and development occupations (technology/R&D personnel), and management occupations (management). Because occupation is a categorical, or nominal, variable, its coding has no quantitative meaning and only represents differences between categories. According to Xie [ 69 ], in regression analysis, nominal variables cannot be entered directly into the regression model as independent variables; they must first be transformed into a set of corresponding dummy variables. Therefore, this article takes the physical occupations as the reference class and uses the other five dummy variables in the research model. Studies have shown that AI application affects employees’ work attitudes and behavior, and because AI is applied differently across enterprises, employees’ attitudes and behavior are bound to differ as well. Therefore, AI application was taken as a control variable in this article and the measurement of Wang et al. [ 70 ] was adopted. There were four items, one of which was: “Compared with manpower, the application range of AI in my unit will be wider and wider.” Cronbach’s α = 0.75.
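To make the dummy-coding step concrete, here is a minimal sketch (in Python rather than the statistical package the authors presumably used) of expanding a nominal occupation variable into indicator columns with physical occupations as the reference class; the labels and column names are placeholders.

```python
# Minimal dummy-coding sketch; occupation labels mirror the categories described above.
import pandas as pd

df = pd.DataFrame({"occupation": ["physical", "marketing", "management",
                                  "professional", "technology", "administrative"]})
dummies = pd.get_dummies(df["occupation"], prefix="occ")
dummies = dummies.drop(columns=["occ_physical"])   # physical occupations serve as the reference class
df = pd.concat([df, dummies], axis=1)              # the remaining indicators enter the regression model
print(df)
```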
Depression: The scale developed by Dhir et al. [ 68 ] was adopted. There were five items on the scale, one of which was: “I have felt lonely” (see Appendix A ). In this study, Cronbach’s α = 0.87.
Perceived Organizational Support: The scale adapted by Shanock and Eisenberger [ 67 ] was used. There were six items on the scale, one of which was: “My work organization values my contributions to its well-being” (see Appendix A ). In this study, Cronbach’s α = 0.82.
Emotional Exhaustion: The scale developed by Watkins et al. [ 66 ] was used. There were three items on the scale, one of which was: “I feel burned out from my work” (see Appendix A ). In this study, Cronbach’s α = 0.91.
AI Awareness: The scale developed by Brougham and Haar [ 7 ] was adopted, and the original scale was appropriately revised according to the background of AI studied in this article. Specifically, “smart technology, automation, robotics and AI” in the original scale were briefly described as AI. There were four items on the scale, one of which was: “I am personally worried about my future in my organization as AI is replacing employees” (see Appendix A ). In this study, Cronbach’s α = 0.91.
The relevant variable measurement scales used in this study are from mature scales used in internationally renowned journals. We translated these scales into Chinese through a back-translation procedure proposed by Brislin [ 65 ] and fine-tuned some items according to Chinese daily expression habits. The key variables were scored on a 7-point Likert scale.
The research data were collected from “Credamo” ( https://www.credamo.com/ , accessed on 25 December 2022). In order to avoid the influence of common method biases, a two-stage survey with an interval of about two weeks was adopted. The demographic variables, AI awareness, and perceived organizational support of samples were investigated in the first stage. We distributed 692 questionnaires and collected 447 after invalid respondents (identified by trap questions and reverse questions) were eliminated. In the second stage, questionnaires were distributed to the 447 respondents in the first stage, mainly to measure their emotional exhaustion and depression. After eliminating invalid questionnaires, 321 respondents were collected and matched. In terms of the features of respondents, men accounted for 47% and women accounted for 53%. The ages of the respondents were mainly distributed between 20 and 58 years old, and the distribution between 25 and 40 years old was concentrated, accounting for 89.1%. Respondents with a Bachelor’s degree accounted for 80.4%, undergraduates and above accounted for 11.2%, and the others accounted for 8.4%. Respondents came from a variety of occupations, including finance/auditing, management, technology/R&D, human resource management, production workers, clerical/office staff, administration/logistics staff, salespersons, customer service, professionals (such as accountants, lawyers, architects, healthcare workers, journalists), PR people, teachers, etc. With the popularization of AI, these occupational groups are more or less affected by AI applications, thus ensuring the effectiveness of sample selection.
Then, we used SPSS PROCESS macro 3.4 to test Hypothesis 6, referring to Hayes’ method [ 73 ]. In the process, gender, age, education level, AI application, and occupation were taken as control variables. The results are shown in Table 4 . When the perceived organizational support was smaller (mean − standard deviation), the indirect effect between AI awareness and depression via emotional exhaustion was 0.12, 95%CI = (0.07, 0.19), and the confidence interval did not contain 0, indicating that the indirect effect was significant. When perceived organizational support was greater (mean + standard deviation), the indirect effect between AI awareness and depression via emotional exhaustion was 0.05, 95%CI = (−0.02, 0.12), and the confidence interval contained 0, indicating that the indirect effect was not significant. When the values of perceived organizational support were different, the difference in indirect effect was −0.07, 95%CI = (−0.16, −0.01), and the confidence interval did not contain 0, which indicated that there were significant differences in the indirect effect. The index of moderated mediation = −0.06, 95%CI = (−0.12, −0.01). These results showed that there was a significant moderated mediating effect, and the higher the degree of perceived organizational support, the lower the indirect effect between AI awareness and depression via emotional exhaustion. Hypothesis 6 was supported.
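The conditional indirect effects reported here come from the PROCESS macro; the following is only a rough, hypothetical approximation of that computation in Python, bootstrapping the indirect effect a × (b1 + b3 × W) at one standard deviation below and above the mean of perceived organizational support. Variable and file names are assumptions, and control variables are omitted for brevity.

```python
# Rough bootstrap approximation of the conditional indirect effect (not the authors' code).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("survey.csv")  # hypothetical file with the scored scale variables

def paths(data):
    # a-path: AI awareness -> emotional exhaustion
    a = smf.ols("exhaustion ~ ai_awareness", data=data).fit().params["ai_awareness"]
    # b-path with moderation: exhaustion x perceived organizational support -> depression
    m = smf.ols("depression ~ exhaustion * pos + ai_awareness", data=data).fit()
    return a, m.params["exhaustion"], m.params["exhaustion:pos"]

w_low = df["pos"].mean() - df["pos"].std()
w_high = df["pos"].mean() + df["pos"].std()
boots = []
for i in range(2000):
    a, b1, b3 = paths(df.sample(frac=1.0, replace=True, random_state=i))
    boots.append([a * (b1 + b3 * w_low), a * (b1 + b3 * w_high)])
boots = np.array(boots)
for label, col in zip(["low support (mean - 1 SD)", "high support (mean + 1 SD)"], boots.T):
    print(label, np.percentile(col, [2.5, 97.5]))  # 95% bootstrap CI for each indirect effect
```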
To further clarify the direction and size of the moderating effect, emotional exhaustion, perceived organizational support, and depression were standardized by converting the raw scores into Z scores in this study. The results of the simple slope test showed that when perceived organizational support was smaller, the relationship between emotional exhaustion and depression was positive and significant, and the effect was greater (β = 0.56, p < 0.001); when perceived organizational support was greater, the relationship between emotional exhaustion and depression was positive and significant, but the effect was smaller (β = 0.22, p < 0.01). The specific visualization results are shown in Figure 2 , which shows that the higher the degree of perceived organizational support is, the lower the relationship between emotional exhaustion and depression will be, and perceived organizational support negatively moderates the relationship between emotional exhaustion and depression. Hypothesis 5 was supported again.
Then, according to the general test method, core variables were standardized by converting the raw scores into Z scores, including emotional exhaustion, perceived organizational support, and depression. The regression method was then used to test the moderating role of perceived organizational support, the results of which are shown by Model 5. Emotional exhaustion was positively associated with depression (β = 0.44, p < 0.001), while perceived organizational support was negatively associated with it (β = −0.18, p < 0.05). Then, the interaction term of emotional exhaustion and perceived organizational support was added to the independent variables of Model 5, and depression was regressed. The results showed that the interaction term of emotional exhaustion and perceived organizational support was significantly associated with depression (β = −0.22, p < 0.001), indicating that perceived organizational support moderated the relationship between emotional exhaustion and depression. Hypothesis 5 was supported.
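As an illustration of this standardize-and-interact step (a sketch only; the variable names are placeholders and control variables are omitted), the moderation test boils down to regressing depression on emotional exhaustion, perceived organizational support, and their product:

```python
# Illustrative moderation regression with standardized variables (not the authors' script).
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("survey.csv")  # hypothetical scored data
for col in ["exhaustion", "pos", "depression"]:
    df[col + "_z"] = (df[col] - df[col].mean()) / df[col].std()  # convert raw scores to Z scores

model = smf.ols("depression_z ~ exhaustion_z * pos_z", data=df).fit()
# A significantly negative exhaustion_z:pos_z coefficient would indicate that perceived
# organizational support buffers the exhaustion-depression relationship.
print(model.summary().tables[1])
```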
To ensure the robustness of the research results, referring to Hayes’ method [ 73 ], we used SPSS PROCESS macro 3.4 to verify the mediating role of emotional exhaustion. In the process, gender, age, education level, occupation, and AI application were taken as control variables. The results showed that the indirect effect of AI awareness on depression via emotional exhaustion was 0.11 and 95%CI = (0.06, 0.17), which did not include 0. Hypothesis 4 was again supported.
It can be seen from Model 3 that AI awareness was positively associated with employee depression (β = 0.18, p < 0.01); thus, Hypothesis 1 was supported. It can be seen from Model 1 that AI awareness was positively associated with employees’ emotional exhaustion (β = 0.34, p < 0.001); thus, Hypothesis 2 was supported. It can be seen from Model 2 that emotional exhaustion was positively associated with employee depression (β = 0.52, p < 0.001); thus, Hypothesis 3 was supported. It can be seen from Model 4 that emotional exhaustion was positively associated with employee depression (β = 0.51, p < 0.001), but AI awareness was no longer significantly associated with employee depression. According to the method proposed by Baron and Kenny [ 72 ], it could be judged that emotional exhaustion completely mediated the relationship between AI awareness and employee depression. Thus, Hypothesis 4 was supported.
First, multiple linear regression was used to test Hypotheses 1, 2, 3, and 4. In the regressions, gender, age, education level, AI application, and occupation were taken as control variables. The specific regression results are shown in Table 3 .
The means, standard deviations, and correlations among the research variables are shown in Table 2 . The results show significant positive correlations between AI awareness and depression (r = 0.17, p < 0.01), between AI awareness and emotional exhaustion (r = 0.31, p < 0.01), and between emotional exhaustion and depression (r = 0.52, p < 0.01).
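Such a descriptive table is straightforward to reproduce. The sketch below shows the computation on synthetic scores with hypothetical scale means, not the study's data.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(3)
n = 321

# Hypothetical synthetic scale scores for the three focal variables.
ai = rng.normal(3.0, 0.8, n)
ee = 0.3 * ai + rng.normal(2.0, 0.7, n)
dep = 0.5 * ee + rng.normal(1.0, 0.6, n)
df = pd.DataFrame({"AI awareness": ai, "Emotional exhaustion": ee, "Depression": dep})

# Means, standard deviations, and the Pearson correlation matrix (cf. Table 2).
print(df.agg(["mean", "std"]).round(2))
print(df.corr(method="pearson").round(2))
```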
To avoid the influence of common method bias on the conclusions, Harman’s single-factor test was conducted. The first factor accounted for 36.75% of the total variance, which did not exceed the 40% threshold, indicating that there was no serious common method bias in the data [ 71 ].
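A minimal version of Harman's single-factor check, approximated here with an unrotated first principal component on simulated item responses (all names and values are hypothetical), looks like this:

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(11)
n_respondents, n_items = 321, 20

# Hypothetical item-level responses: a modest common factor plus item-specific noise,
# standing in for the pooled AI-awareness / exhaustion / depression / support items.
common = rng.normal(0, 1, (n_respondents, 1))
items = 0.5 * common + rng.normal(0, 1, (n_respondents, n_items))

# Harman's single-factor check: the share of total variance captured by the first
# (unrotated) factor should stay below the conventional 40% threshold.
first_factor_share = PCA(n_components=1).fit(items).explained_variance_ratio_[0]
print(f"variance explained by the first factor: {first_factor_share:.1%}")
```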
Confirmatory factor analysis was used to test the discriminant validity of the four variables: AI awareness, emotional exhaustion, depression, and perceived organizational support. As shown in Table 1 , the four-factor model fit the data best (CFI = 0.94, TLI = 0.93, RMSEA = 0.07, SRMR = 0.04); the alternative models fit worse, and the chi-square difference tests were significant at the 0.001 level, indicating that the measures in this study have good discriminant validity.
5. Discussion
Previous studies have discussed the impact of AI technology on employees mainly from two aspects. First, from the perspective of the technological application itself, studies examined the impact of AI application on employment [3], negative emotions [74], and job insecurity [70]. Second, starting from employees’ cognition of the threats posed by the technology, studies examined the effect of AI awareness on job insecurity [6], job satisfaction [7], and turnover intention [1]. However, the mediating and moderating mechanisms linking AI awareness to employee depression have rarely been investigated systematically. Based on COR Theory, this article examined the relationship between AI awareness and employee depression, along with the mediating role of emotional exhaustion and the moderating role of perceived organizational support.
Based on a sample of 321 respondents, the empirical results showed the following. First, AI awareness was positively associated with employee depression. This conclusion is consistent with previous research [7] and verifies, in a Chinese sample, that psychological disorders such as depression can be triggered among employees who perceive AI technology as a threat to their career development. In addition, the control variable AI application was negatively associated with employee depression, which is inconsistent with previous findings [74]. A possible reason is that this relationship is estimated after controlling for AI awareness; once the perceived threat is held constant, the remaining association between AI application and depression mainly reflects the opportunities that AI technology brings to employees’ career development. Second, emotional exhaustion mediated the relationship between AI awareness and employee depression, showing that the threat of AI technology to employees’ mental health mainly stems from the threat that AI-driven change poses to employee resources. This is consistent with previous findings that emotional exhaustion mediates the path from stressors to employee depression [53,55]. In addition, when studying the impact of AI awareness on employees’ psychological states and work attitudes, Brougham and Haar posited that AI awareness was related to a series of negative consequences [7]; however, they did not test this mechanism empirically, and the conclusions of this article indirectly support their inference. Third, perceived organizational support negatively moderated the relationship between emotional exhaustion and depression, and it also negatively moderated the mediating effect of emotional exhaustion on the relationship between AI awareness and depression. This conclusion supports the view of COR Theory [29,30] that organizational support, as a resource, can replenish employees’ resource losses and reduce the negative impact of those losses on employees’ stress reactions. Moreover, as a form of social support, organizational support can act as a stress buffer and reduce the impact of stressful events on depression, which also supports the Stress Buffer Theory of social support [1].
5.1. Theoretical Implications A moderated mediation model based on COR Theory is constructed in this research to explore the relationship between AI awareness and depression. The conclusions have important theoretical significance for understanding how employees’ stress cognition and emotional responses are linked as AI technology changes. First, previous scholars discussed the relationship between AI awareness and employee depression based on a career-planning model [7], but did not examine the mediating mechanism between them. In this article, the mediating mechanism connecting AI awareness and employee depression is examined from the perspective of COR Theory. In doing so, we broaden the interpretive perspective on the relationship between AI awareness and employee depression and deepen understanding of the mediating mechanisms between them. Second, previous research lacked an examination of the buffering effect of organizational support in the relationship between AI awareness and employee depression; only some research discussed the direct effect of organizational support on depression [30]. This research confirms the positive role of perceived organizational support in relieving employee stress and extends understanding of the boundary conditions between AI awareness and employee depression in the context of AI application. Third, although the influence of AI awareness on employee depression was explored in a previous study [7], the mechanism behind this relationship had not been systematically investigated. A systematic model is put forward in this article to analyze the mediating and moderating mechanisms between AI awareness and employee depression. The conclusions help to deepen understanding of the impact and boundary conditions between AI awareness and employee depression, which provides a theoretical basis for organizations to take measures to address employee depression as AI changes unfold.
5.2. Management Implications The integration of AI in industry can open market space for AI and improve economic efficiency. However, the replacement of jobs by AI technology will have a series of adverse effects on employees’ cognition and psychological state, which may become an important obstacle to the smooth implementation of this integration. Therefore, measures must be taken to guide employees to recognize AI-driven technological change correctly, ease the psychological problems arising from the process, secure employees’ support for AI changes, and create a favorable internal environment for organizations implementing AI. The results of this article provide a theoretical basis for such organizational policies. First, the research finds that AI awareness increases employee depression. This conclusion shows that employees’ cognition significantly influences their psychological state: when employees perceive AI as more threatening to their career development, their depression is higher. Therefore, organizations need to guide employees, through communication and publicity, to recognize correctly how AI technology affects career development. At the same time, companies should encourage employees to pay active attention to the opportunities that AI technology brings to their career development. Existing research also shows that the introduction of AI technology creates new employment opportunities [75] while improving employees’ job quality [76]. Encouraging employees to recognize the career opportunities based on AI technology can help them respond actively to its impact, thereby improving their career satisfaction and happiness and reducing their depression. Second, the research shows that emotional exhaustion is an important mediating mechanism between AI awareness and employee depression. According to COR Theory, although opportunities and threats coexist in the face of AI-driven change, the uncertainty of the opportunities leads employees to conserve resources against threats rather than invest them in pursuing opportunities [33]. In that case, employees cannot keep up with the pace of AI technology and are gradually displaced as AI changes, suffering a progressive loss of resources and falling into a loss spiral. To avoid this scenario, companies should develop policies that guide employees to invest resources actively in response to AI changes, for example by updating their knowledge and skill structures through learning. In addition, employees should improve their self-efficacy in adapting to AI and learning the necessary new skills, so that they can better master the knowledge and skills needed to adapt to AI and thereby improve their job security, income, attendance, and organizational status, reducing resource losses and preventing emotional exhaustion. Third, the study shows that perceived organizational support can effectively alleviate the indirect effect of AI awareness on depression via emotional exhaustion. This conclusion shows that although the threat AI poses to employees’ career development can lead to emotional exhaustion, perceived organizational support, as an important psychological resource, helps employees recover from resource depletion.
Therefore, organizations should strengthen their support for employees, recognize the efforts employees make in the process of AI transformation, and give them positive feedback. Meanwhile, organizations should formulate policies that help employees adapt to AI-driven technological change and strengthen their self-efficacy in the face of such change.
| 2023-03-15T00:00:00 |
2023/03/15
|
https://pmc.ncbi.nlm.nih.gov/articles/PMC10049037/
|
[
{
"date": "2023/03/01",
"position": 40,
"query": "artificial intelligence employment"
},
{
"date": "2023/03/01",
"position": 43,
"query": "AI unemployment rate"
},
{
"date": "2023/03/01",
"position": 88,
"query": "AI job creation vs elimination"
},
{
"date": "2023/03/01",
"position": 85,
"query": "artificial intelligence workers"
}
] |
Artificial-Intelligence-Supported Reduction of Employees ...
|
Artificial-Intelligence-Supported Reduction of Employees’ Workload to Increase the Company’s Performance in Today’s VUCA Environment
|
https://www.mdpi.com
|
[
"Rožman",
"Oreški",
"Tominc",
"Maja Rožman",
"Dijana Oreški",
"Polona Tominc"
] |
by M Rožman · 2023 · Cited by 98 — Artificial intelligence helps employees perform their tasks better and be more organized, leading to increased work engagement [39]. Automating certain work ...
|
A survey conducted by the IBM Institute for Business Value [ 28 ] on a sample of 5000 businesses around the world showed that 93 % of companies are considering the use of artificial intelligence, but 60 % fear possible problems with responsibility, and 63 % fear a lack of their human resources and knowledge to manage artificial intelligence technologies confidently. According to a study conducted by Oracle and Future Workplace [ 29 ] involving 1320 U.S. leaders, although employees are willing to accept the integration of AI in their work and acknowledge its benefits beyond merely automating manual tasks, companies are not providing adequate support to facilitate their employees’ adoption of AI, leading to decreased productivity. The study highlights that leaders believe AI can enhance operational efficiency (59%), expedite decision making (50%), significantly reduce costs (45%), improve customer experiences (40%), and enhance the employee experience (37%). However, 90% of leaders express apprehension that they may not be able to adapt to the rapid adoption of AI in their job roles [ 29 ]. Hence, the primary objective of this paper is to underscore the significance of utilizing AI to reduce employee workload and to concentrate on multidimensional factors, namely AI-supported appropriate training and employee development, AI-supported leadership, and AI-supported organizational culture, all of which can considerably reduce employee workload and consequently enhance employee engagement and the company’s performance. In today’s fast-changing business environment, companies are improving their business methods to be more successful than their competitors. Therefore, implementing AI is one of the revolutionary changes that lead to the renewal of the entire company operation, recruitment, training, and retention of people. A company that strives to use AI technology increases its performance in the labor market, as the use of AI technologies reduces routine work tasks and improves employees’ quality of life. Moreover, AI-powered technology blends human experience with innovation and generates essential insights for efficient organizational development. By using machine learning algorithms to simulate cognitive functions, AI enables machines to make decisions that resemble human decision making. As a company prioritizes its workforce, AI facilitates the creation of a more conducive and contented work atmosphere, reduces bias, and spearheads meaningful initiatives aimed at holistic employee development.
In some industries, work may be tied to machines that must be operated on an inflexible schedule, meaning that work schedules are determined by the location and schedule of the technology itself rather than the needs of the employees [ 21 ]. From this point of view, the use of AI technologies allows employees greater flexibility, saving working time and reducing stress in the workplace [ 22 ]. Moreover, AI technology can provide opportunities for employees to balance their professional and private life [ 4 ]. Thus, machine learning is designed to take over manual and repetitive processes, allowing employees to focus on the more rewarding results and analyzing data [ 23 ]. Artificial intelligence significantly reduces employees’ workload and increases employee engagement [ 17 ]. Using AI technologies, employees perform demanding tasks easily and without excessive human effort. For example, HR departments use chatbots to train their employees and communicate with customers [ 24 25 ]. Employees do not have to monitor daily and labor-intensive tasks manually. This means they can focus on meaningful work [ 26 ]. This makes the workplace experience even more engaging. As a result, employees have more time to explore new business areas, expand their skills, and become even more productive in more dynamic work roles [ 16 27 ].
AI technology is already affecting the business environment and the corporate way of working [ 16 ]. Neglecting AI technology could cause companies to lose out in highly competitive markets. It can affect the entire operational environment and the company’s internal functioning [ 11 ]. Still, it can also bring new business opportunities, change the roles of operators in the value chain, and end existing deals [ 15 ]. For example, AI technology can remove traditional semi-finished products in the supply chain and create new interfaces [ 17 ]. The use of AI technologies improves the efficiency, quality, and consistency of business processes and enables greater accuracy [ 18 ]. In addition, by automating routine work, AI can make employees more engaged at work and allow employees more time for education and developing new skills [ 19 ]. A company’s digital transformation ability is largely determined by a clear digital strategy supported by leaders who cultivate an AI-supported organizational culture [ 20 ]. Employees of all age groups must want to work for companies that are deeply committed to digital advancement and the use of AI technologies [ 16 ]. Leaders must provide appropriate training based on the use of AI for all age-diverse employees and strive to retain the most talented employees [ 14 ].
Work has changed in recent years mainly due to digitization and advanced technology. The very nature of work and the way of working have changed [ 8 ]. Since the coordination of the individual and the workplace is essential, work design is gaining more and more attention since, on average, we spend 40.3 h per week at the workplace [ 9 ]. Each individual must be satisfied with his work and work environment; otherwise, he will not be efficient enough, which will have a negative impact on the company’s performance [ 10 ]. The research findings presented by Eurostat [ 9 ] indicate that a considerable proportion of employees face limited control and influence over various aspects of their work. Specifically, 35% of employees do not have control over their work, 29% cannot influence the choice of work methods, 30% are unable to influence the speed of their work, 39% have no control over their break times, and 55% cannot determine their work schedule. Moreover, 40% of employees report experiencing monotony at work, 60% face pressure due to tight deadlines, and 56% are required to work at a very fast pace. Thus, AI technology improves the entire company’s operations from the perspective of new business methods enabled by new digital technology [ 11 ]. Digitization is defined as a change in the way of work, roles, and business caused by adopting digital technologies in the company [ 12 ]. This pertains to modifications occurring across various levels: (1) at the process level, which involves adopting novel digital tools and simplifying procedures by minimizing manual steps; (2) at the organizational level, which includes introducing new services, discarding outdated practices, and presenting existing services in innovative ways [ 13 ]; (3) at the business domain level, which involves alterations in roles and value chains within ecosystems; and (4) at the societal level, which involves modifications in societal structures such as the nature of work and means of influencing decision making. [ 14 ]. The fourth industrial revolution will bring about many changes. Most jobs will be increased by the demand for engineers, logistics specialists, IT specialists, designers, marketing personnel, and other professionals [ 15 ]. Future jobs will require more skills and many monotonous, repetitive tasks will become obsolete [ 8 ]. Digitization affects many aspects of businesses, including information technology, strategy and business models [ 13 ], products and services, internal and external processes, and organizational and corporate culture [ 12 ].
The business world has become more complex, turbulent, and unpredictable [ 1 ]. One of the main reasons for this is the advent of digital transformation [ 2 ]. This dynamic environment is often referred to by the acronym VUCA, which stands for volatility, uncertainty, complexity, and ambiguity [ 3 ]. The term volatility pertains to a fluctuating pace of modification, while uncertainty denotes the absence of foreseeability and adequate information. Complexity characterizes interconnected structures that lack apparent cause-and-effect relationships, and ambiguity acknowledges the challenge of precisely evaluating the truth in an intricate and volatile environment [ 4 ]. VUCA defines the fundamental external and internal conditions and situations that affect a company [ 1 ]. The concept of VUCA has recently gained widespread recognition due to digital transformation, technological advances in artificial intelligence, robotics, exchange platforms, and the Internet, fundamentally changing business models and industries [ 5 ]. In a world where the environment is defined as volatile, uncertain, complex, and ambiguous, making strategic decisions is becoming challenging for companies, as the speed of change can make long-term decisions ineffective [ 6 ]. Furthermore, in a VUCA environment, leaders can no longer make strategic decisions based on experience and knowledge [ 4 ]. The company’s strategic decisions using AI are suitable and effective for the VUCA environment, as they lead to faster decision making, which enables agility in the company [ 1 ]. AI increases predictability, reduces costs, reduces human bias, and improves the effectiveness of top management [ 5 ]. A survey conducted by Deloitte [ 7 ] among 9453 global leaders revealed that 80% of the participants concurred that the business environment in the 21st century has introduced fresh demands for leaders. Furthermore, 81% of the respondents highlighted the importance of possessing the capability to guide through increased complexity and ambiguity. The other leadership skills mentioned by the respondents include leading through influence, managing a remote workforce, supervising a team that comprises humans and machines, and leading with speed.
2. Literature Review and Hypothesis
Implementing digital solutions and the accelerated use of AI have become indispensable for responding to many unexpected changes in the business environment [ 13 ]. A Microsoft study [ 30 ] found that companies see AI as a digital priority. AI leadership support, an experimental mindset, and creating a new culture that supports AI are the keys to successfully embedding AI across the enterprise. The implementation of digital solutions in companies and the accelerated use of AI have become indispensable for responding to many unexpected changes in the business market [ 15 19 ]. Digital business transformation is the key to business competitiveness and success in today’s extremely changing and increasingly demanding business environment [ 20 ]. Digital business transformation with its characteristics and rapidly developing digital technology requires a different type of organization and operation in the company, which means a change in organizational culture [ 21 ]. The renewal of the organizational culture, which encompasses the set of beliefs, values, habits, and experiences that define the organization through its employees, is one of the biggest challenges of a successful digital business transformation [ 18 ]. According to Ransbotham et al. [ 31 ], AI-supported organizational culture is a culture that supports innovation with AI. Thus, an organizational culture that supports AI provides all employees with behavioral guidelines that guide individuals to appropriate behavior and decision making, accelerating the achievement of the organization’s integrated business digital strategy [ 32 ]. A successful organizational culture transformation that supports an integrated business AI strategy decisively supports and accelerates the digital business transformation [ 33 ]. Digital business transformation using AI technology is not only an opportunity for companies. However, it has become an obligation if the company wants to survive in a rapidly changing environment in the long term [ 34 ], and the first step begins with developing a modern digital organizational culture that supports AI [ 13 ]. AI solutions enable companies to automate routine tasks and create smart solutions for employees [ 35 ]. Thus, companies can leverage the rapid progress in cognitive services to gain decision support, extract information from unstructured data, comprehend natural languages, recognize and generate speech, analyze audio-visual content, and other fields that were previously the exclusive purview of human intelligence. [ 23 ].
The interaction between humans and artificial intelligence reveals that an individual’s perception of artificial intelligence is based on different aspects. For example, affordances, salient cues, or collaborative interaction [ 36 ] can influence an individual’s emotions and thus intentions toward AI [ 37 ]. Employees establish an identity concerning the technology used and their workplaces [ 38 ]. In our case, the dependent variable is “employees’ perceived AI-supported reduction of workload,” where we asked employers about their opinion on whether AI enables the reduction of workload. Therefore, the following hypothesis is proposed:
H1. AI-supported organizational culture has a statistically significant positive effect on employees’ perceived reduction of their workload by AI .
The advent of technology has significantly disrupted conventional working practices, particularly those involving monotonous and recurring tasks [ 39 ]. This trend is expected to intensify further with the advent and deployment of AI, which will substantially transform the responsibilities of leaders as well [ 40 ]. Several studies have shown that leaders spend over 50% of their time arranging meetings, answering emails, compiling monthly reports, etc. They spend only 15% of their time on strategic thinking and talent development. With the help of AI, this relationship will change, and leaders can devote themselves to work with higher added value, which presents them with a greater intellectual challenge [ 41 ]. The adoption of AI requires new leadership that supports AI. In addition to employees, leaders also struggle with the stress caused by uncertainty, increasing complexity, and rapid changes in today’s business environment [ 2 ]. Thus, AI can greatly help a leader trying to become more internally agile and encourage creative approaches to the transformation of the company [ 22 ]. AI has evolved into a practically indispensable technology for every company worldwide that wants to strengthen its competitive power [ 5 ]. In addition, recently, some workplaces have become hybrid, which has increased the daily amount of data produced [ 17 ]. Employees increasingly rely on the growing amount of data they access and analyze [ 26 ]. From this point of view, a new leadership style has been formed in companies that largely support AI and use continuous training on various artificial intelligence tools [ 25 ]. A study by Data Agility [ 42 ] on 280 Australian leaders from various industries shows that 92% of leaders are aware of a successful and competitive advantage with AI in their company and 96% of leaders believe that AI will help their company grow. In addition, all Australian leaders believe AI will increase innovation, and 86% believe AI will increase innovation by 15% or more. Competitive advantage, work efficiency, better service delivery, and quicker access to critical insight are the top objectives that leadership across all sectors want to achieve with AI [ 11 ]. According to a survey of 1053 global executives, a majority of companies have adopted AI to automate or enhance repetitive or ineffective processes. Of those surveyed, 66% are utilizing AI technologies predominantly for business process automation, which is widely regarded as an ideal starting point owing to its ability to deliver rapid enhancements. [ 43 ]. For employees to perform their work effectively, leaders must support using artificial intelligence technologies because, in this way, implementing artificial intelligence in the company reduces employees’ workload. Implementing artificial intelligence in the company leads to the transformation of employees’ work and the reduction of their workload [ 43 44 ]. The use of AI technologies improves the performance of the entire company. Namely, AI is changing how companies structure their workflows, decision-making processes, and strategic planning [ 12 ]. In conjunction with analytics, AI programming helps leaders make better decisions about employees, customers, and production based on existing circumstances and future events predicted by AI [ 13 ]. Designed to emulate human behavior, customer service chatbots are essentially digital interfaces that engage in “conversations” with consumers while managing their transactions [ 14 ]. 
Furthermore, AI is expected to enhance mobile messaging capabilities between consumers and sellers, streamlining the process further [ 15 ]. Hence, it is proposed:
H2. AI-supported leadership has a statistically significant positive effect on employees’ perceived reduction of their workload by AI .
Digital transformation with the help of artificial intelligence is rapidly entering companies of all industries and also in justice, as it is an area that records an extremely fast growth of data, and judicial employees must make complex decisions quickly and efficiently [ 23 45 ]. In various justice systems worldwide, digital tools have already been introduced to facilitate access to legal aid, improve communication between courts and lawyers, and support the work of judges and other court employees [ 46 ]. For example, the Ministry of Justice in Estonia is looking for AI opportunities to optimize and automate the procedural steps of the court in all types of proceedings, including procedural decisions where possible. Processes based on paper and electronic files will be partially replaced by processes based on AI data. One of the goals is that all court cases are conducted digitally, without paper files, and no unnecessary electronic files are prepared, and the processes will be optimized and automated as much as possible [ 47 ]. Moreover, the use of artificial intelligence to help optimize supply chain management is becoming more widespread in various industries. The management of supply chains has become increasingly complex in recent years, as physical flows are increasingly connected, and market volatility increases the demands for agility and adaptability. The use of artificial intelligence to manage supply chains is one way many companies are taking advantage of AI to manage increasingly demanding processes on a global and local level [ 48 ]. By using the vast amount of data generated by the company’s operations, the company can create solutions to transform supply chain operations with the help of artificial intelligence [ 49 ]. This may include factory automation, improving quality control, demand forecasting, and predictive maintenance [ 38 ]. Thus, AI-powered tools and solutions are increasingly appearing in the workplace. From this perspective, employee training in artificial intelligence plays a key role [ 33 ]. Artificial intelligence in employee training has added a new dimension to many employees’ roles [ 23 ]. Artificial intelligence can significantly shorten the learning process by recommending unique modules employees need to improve their job skills [ 34 ]. Employee training systems infused with AI have more personalization, automation, knowledge insights, and long-term sustainability than ever before [ 33 ]. Artificial intelligence enhances employee training by offering innovative and personalized learning techniques [ 48 ]. Program bots select relevant lessons based on employee interest and contextual tagging, delivering tailored training to individuals [ 23 ]. With machine learning capabilities, AI can better anticipate learning patterns and forecast training needs based on previous behaviors, work roles, prior learning styles, experiences, and educational backgrounds [ 50 ]. This approach creates a more diverse and adaptable training program [ 39 40 ]. AI can even personalize quizzes and evaluations to optimize training outcomes. Additionally, AI mitigates any personal biases that may affect employee training [ 41 ]. Implementing AI in employee training leads to more effective and successful learning paths [ 25 ]. Therefore, it is hypothesized:
H3. AI-supported appropriate training and development of employees have a statistically significant positive effect on employees’ perceived reduction of their workload by AI .
Employees are the key driving force of any company. Ultimately, highly motivated and engaged employees drive business growth [ 8 ]. However, today’s trends in the workplace show that employees are not as engaged as they used to be [ 16 ]. A Gallup survey [ 51 ] of 112,312 business units in 96 countries shows that employee engagement declined by 2% in 2020. In 2020, the world’s employees reached an all-time high for experiencing stress. In 2021, this percentage went even higher. Globally, employee engagement and well-being remain very low, holding back enormous growth potential [ 51 ]. Inevitably, the stress of employees has a negative impact on employee engagement. From this perspective, companies must focus on radical change rather than incremental improvements [ 52 ]. In order for companies to change radically, they need new ways of thinking and new approaches to work [ 24 ]. One of the main ways is to introduce AI technology into the company, as AI helps create a new future of work that is more flexible, diverse, and focused on employee well-being [ 19 ]. Additionally, a survey conducted by Verint [ 53 ] of 34,000 employees in 18 countries globally revealed that 72% of them attributed their low stress levels to access to AI tools. Additionally, 64% of employees agreed that automation technology can reduce both workload and stress. Furthermore, more than 70% of employees expressed a preference for technology to replace manual and laborious tasks. Among employees who reported high base levels of stress (82%), the majority stated that they would welcome AI technology that could provide them with the right information at the right time. Furthermore, 58% of employees expressed a desire for their employers to use more automation technology such as AI, and 55% had directly requested better technology from their employers to help them work more effectively. Artificial intelligence can help improve communication and optimize workflows by enhancing conversations and enriching interactions between employees and customers [ 20 ]. Machine learning algorithms can predict reactions to certain actions, such as meeting requests, which leads to higher employee engagement [ 23 ]. In this way, AI can minimize employees’ steps to agree on meeting times or exchange necessary information in advance [ 48 ]. AI can also automatically compose meeting transcripts, allowing employees to focus on the meeting rather than taking notes [ 20 ]. It can even identify speakers based on their voices and convert their speech into text. Participants and others can then digitally search the final transcript. Whether it is sales, marketing, HR, finance, or any other company department, AI can help improve work efficiency and engagement [ 10 30 ]. Automating tasks gives employees more time to do other work and focus more on tasks that utilize creativity, which leads to increased company performance [ 14 54 ]. For example, financial advisors might spend less time analyzing clients’ financial situations and more time understanding their needs and explaining creative options [ 55 ]. Employee engagement is employees’ positive attitude towards their work and the company’s values [ 56 ]. Employee engagement significantly impacts company performance, which can be greatly enhanced by creating new working conditions using AI [ 57 ]. According to this, the following two hypotheses are proposed:
H4. Employees’ perceived reduction of their workload by AI has a statistically significant positive effect on employee engagement .
H5. Work engagement has a statistically significant positive effect on the company’s performance .
| 2023-01-14T00:00:00 |
2023/01/14
|
https://www.mdpi.com/2071-1050/15/6/5019
|
[
{
"date": "2023/03/01",
"position": 42,
"query": "artificial intelligence employment"
},
{
"date": "2023/03/01",
"position": 50,
"query": "AI unemployment rate"
},
{
"date": "2023/03/01",
"position": 80,
"query": "job automation statistics"
},
{
"date": "2023/03/01",
"position": 38,
"query": "AI job creation vs elimination"
},
{
"date": "2023/03/01",
"position": 43,
"query": "AI labor market trends"
},
{
"date": "2023/03/01",
"position": 40,
"query": "machine learning workforce"
},
{
"date": "2023/03/01",
"position": 46,
"query": "artificial intelligence workers"
}
] |
Analysis on the Negative Impact of AI Development ...
|
Analysis on the Negative Impact of AI Development on Employment and Its Countermeasures
|
https://www.shs-conferences.org
|
[
"Bohan Zhao",
"Maple Leaf International School-Wuhan",
"Wuhan",
"Hubei"
] |
by B Zhao · 2023 · Cited by 23 — Based on the literature collected, the author came up with two practical methods to deal with the negative impact brought by AI to employment: the industrial ...
|
| 2023-07-14T00:00:00 |
2023/07/14
|
https://www.shs-conferences.org/articles/shsconf/abs/2023/03/shsconf_pesd2023_03022/shsconf_pesd2023_03022.html
|
[
{
"date": "2023/03/01",
"position": 69,
"query": "artificial intelligence employment"
}
] |
Why AI Won't Cause Unemployment
|
Why AI Won't Cause Unemployment
|
https://pmarca.substack.com
|
[
"Marc Andreessen"
] |
AI is simply already illegal across most of the economy, soon to be virtually all of the economy. Thanks for reading Marc Andreessen Substack! Subscribe for ...
|
Fears about new technology replacing human labor and causing overall unemployment have raged across industrialized societies for hundreds of years, despite a nearly continual rise in both jobs and wages in capitalist economies. The jobs apocalypse is always right around the corner; just ask the Luddites.
We had two such anti-technology jobs moral panics in the last 20 years — “outsourcing” enabled by the Internet in the 2000’s, and “robots” in the 2010’s. The result was the best national and global economy in human history in pre-COVID 2019, with the most jobs at the highest wages ever.
Now we’re heading into the third such panic of the new century with AI, coupled with a continuous drumbeat of demand for Communist-inspired Universal Basic Income. “This time is different; AI is different,” they say, but is it?
Normally I would make the standard arguments against technologically-driven unemployment — see good summaries by Henry Hazlitt (chapter 7) and Frédéric Bastiat (his metaphor directly relevant to AI). And I will come back and make those arguments soon. But I don’t even think the standard arguments are needed, since another problem will block the progress of AI across most of the economy first.
Which is: AI is already illegal for most of the economy, and will be for virtually all of the economy.
How do I know that? Because technology is already illegal in most of the economy, and that is becoming steadily more true over time.
How do I know that? Because:
This chart shows price changes, adjusted for inflation, across a dozen major sectors of the economy.
As you can see, we actually live in two different economies.
The lines in blue are the sectors where technological innovation is allowed to push down prices while increasing quality. The lines in red are the sectors where technological innovation is not permitted to push down prices; in fact, the prices of education, health care, and housing as well as anything provided or controlled by the government are going to the moon, even as those sectors are technologically stagnant.
We are heading into a world where a flat screen TV that covers your entire wall costs $100, and a four year college degree costs $1 million, and nobody has anything even resembling a proposal on how to systemically fix this.
Why? The sectors in red are heavily regulated and controlled and bottlenecked by the government and by those industries themselves. Those industries are monopolies, oligopolies, and cartels, with extensive formal government regulation as well as regulatory capture, price fixing, Soviet style price setting, occupational licensing, and every other barrier to improvement and change you can possibly imagine. Technological innovation in those sectors is virtually forbidden now.
Whereas the sectors in blue are less regulated, technology whips through them, pushing down prices and raising quality every year.
Note the emotional loading of the interplay of production and consumption here. What do we get mad about? With our consumer hat on, we get mad about price increases — the red sectors. With our producer hat on, we get mad about technological disruption — the blue sectors. Well, pick one; as this chart shows, you can’t have your cake and eat it too.
Now think about what happens over time. The prices of regulated, non-technological products rise; the prices of less regulated, technologically-powered products fall. Which eats the economy? The regulated sectors continuously grow as a percentage of GDP; the less regulated sectors shrink. At the limit, 99% of the economy will be the regulated, non-technological sectors, which is precisely where we are headed.
Therefore AI cannot cause overall unemployment to rise, even if the Luddite arguments are right this time. AI is simply already illegal across most of the economy, soon to be virtually all of the economy.
| 2023-03-01T00:00:00 |
https://pmarca.substack.com/p/why-ai-wont-cause-unemployment
|
[
{
"date": "2023/03/01",
"position": 71,
"query": "artificial intelligence employment"
},
{
"date": "2023/03/01",
"position": 28,
"query": "AI regulation employment"
},
{
"date": "2023/03/01",
"position": 4,
"query": "AI wages"
}
] |
|
Does Artificial Intelligence Promote or Inhibit On-the-Job ...
|
Does Artificial Intelligence Promote or Inhibit On-the-Job Learning? Human Reactions to AI at Work
|
https://search.proquest.com
|
[
"Li",
"Zhang",
"Niu",
"Chen",
"Zhou"
] |
by C Li · 2023 · Cited by 36 — Moreover, it has been found that AI's impact on on-the-job learning is more prominent for older, female and less-educated employees, as well as those without ...
|
| 2023-03-01T00:00:00 |
https://search.proquest.com/openview/4452876780a8aef7948454add4c0b9d6/1?pq-origsite=gscholar&cbl=2032325
|
[
{
"date": "2023/03/01",
"position": 90,
"query": "artificial intelligence employment"
},
{
"date": "2023/03/01",
"position": 50,
"query": "artificial intelligence workers"
}
] |
|
Artificial intelligence: how to get the most from the labour- ...
|
Artificial intelligence: how to get the most from the labour-productivity boost
|
https://www.bruegel.org
|
[
"Georgios Petropoulos",
"Harang Ju",
"Bertin Martens",
"Marshall Van Alstyne",
"Geoffrey Parker"
] |
More than a quarter of all job vacancies in the last decade that required AI skills were posted by the top 10 firms that employed people with AI skills. The ...
|
In the last decade, labour productivity growth has slowed, despite the fast development of new efficient general-purpose digital technologies, including machine learning and artificial intelligence (AI). This productivity slowdown is a paradox. Does it mean the hopes about the potential of these technologies to transform societies and improve people’s lives were false? A paper by Petropoulos and Kapur (2022), prepared for the Transatlantic Expert Group on the Future of Work, discusses what lies behind this paradox and makes policy recommendations on how to get the most out of AI in terms of productivity and economic growth.
As a starting point, Brynjolfsson et al (2021) showed there are deeper causes of the AI productivity paradox. AI is a general-purpose technology that penetrates many economic sectors. Analogous technologies with such a general scope include the steam engine, electricity and computers. A common characteristic of these technologies is that for each, it took years after their introduction for the pay-off to be seen in the productivity statistics. For instance, after the introduction of electricity in American factories, productivity was stagnant for over two decades. Only after managers re-invented their production lines using distributed machinery, a technique made possible by electricity, did productivity belatedly surge. Each general-purpose technology requires the development of sufficient complementary innovations and adjustment of the demand side, in order to have an impact on productivity.
In the case of AI, these complementary innovations involve intangible capital. Firms need to rethink their business models, managers need to develop expertise for the digital age, workers need to be retrained to interact with these new technologies, and complementary web applications and software need to be designed. Without these complementary innovations it is hard for AI to boost productivity. Therefore, over time, there will be two distinct phases in the impact of AI on growth: an initial phase when intangible capital is created and accumulated, followed by a productivity boom.
The fact that the AI productivity boom is expected does not mean just waiting for it to happen. Careful thought needs to be given to which policies will help the boom phase arrive earlier and to last longer. A proper policy framework should identify and mitigate the bottlenecks to the realisation of the economic and social benefits of AI.
AI bottlenecks
One bottleneck has to do with firms’ financial constraints. Acemoglu et al (2022) showed that the most frequent reason for not adopting AI is that non-adopters find AI technologies expensive within their business models and production functions.
Specific policies should be prioritised to maximise knowledge spillovers without impeding innovators’ incentives. Knowledge spillovers have traditionally been a central objective of government policy interventions. In the context of a strong intellectual property regime that keeps the value of innovation high, policies that aim at wider diffusion of AI technologies can be beneficial in building the intangible capital needed to arrive at the productivity boom phase. Becker (2015) and Bloom et al (2019) illustrated how R&D tax credits for AI investments can work well towards this goal.
Many countries provide incentives for research and development, such as allowing additional deductions against tax liabilities. However, national measures differ in terms of their generosity. An overall estimation by Bloom et al (2019) concluded that a 10 percent fall in the tax price of R&D leads to at least a 10 percent increase in R&D in the long run. Hence, AI tax credits can prompt the diffusion of these technologies, and could contribute to a significant accumulation of intangible capital in order to reach a critical mass, putting the economy on the productivity boom path.
The second bottleneck is methodological and has to do with being able to measure the impact of AI. Specifically, new frameworks are needed that are more suitable for measuring the contribution of AI to productivity (Brynjolfsson and Petropoulos, 2022). Current measurements, such as GDP, are insufficient when they only factor in tangible goods and services that are offered at positive prices. In the digital economy, many intangible goods and services are provided at no financial cost to consumers. These still increase consumer welfare, create jobs and generate profit. Moreover, advancements in AI decision-making and prediction could generate new opportunities for economic growth that have never previously been realised.
Also needed are policies that focus on the supply of human capital, especially the training of managers and AI talent. Managers should become more familiar with the practical implications of AI in order to contribute to the reorganisation of work, towards a model in which AI machines and labour act as complements. This is important to avoid the risk of excessive automation. As Elon Musk put it: “Yes, excessive automation at Tesla was a mistake. To be precise, my mistake. Humans are underrated”. There is a risk that managers choose to overinvest in automated technologies that do not add much in terms of productivity, while neglecting the productivity boost of combining labour and AI capital in a harmonious way. Alongside investment in AI, managers should change fundamentally their perspectives on how their firms should adjust their work environments so workers can become more efficient by using AI machines. A human-centric approach is needed in industrial production and the provision of tasks, in order to grasp the full benefits of technology.
Acquiring capable AI talent is an important part of this. The complementarity between workers and AI systems requires workers to be well trained on how to interact with AI machines, in order to maximise the efficiency of production processes.
However, AI talent is currently very concentrated in a few superstar firms. Wang et al (2021) used US online job posting data from Lightcast from January 2010 to June 2020, and found that the top employers account for a large percentage of the total demand for frontier technology skills, including AI, machine learning, natural language processing, cloud computing and big data. More than a quarter of all job vacancies in the last decade that required AI skills were posted by the top 10 firms that employed people with AI skills. The respective concentration percentage of more ‘traditional’ information technology skills is only 6.9 percent. Wide adoption of AI in order to maximise its knowledge spillovers, and therefore its social benefits, would require smaller firms to be able to hire AI experts, which will help them make complementary investments in intangible capital in order to grasp a fair share of these benefits.
Market power failure
In addition to education and training, another reason for the shortage of skills, especially in small and medium enterprises, has to do with the increased market power of the big companies (Philippon, 2019). This leads to only a small portion of firms capturing most of the talent and the benefits from AI technologies, and thus consolidating their positions in the markets in which they operate.
Addressing the market-power failure in AI-related markets would require a combination of market regulation, competition policies and labour-market policies (Parker et al, 2022). Market regulation should set the basic principles of operation so that specific firms do not have an unfair competitive advantage that allows them to grow at the expense of their competitors, even if they are not more efficient in terms of production costs and quality of products and services. Competition policy should ensure that these regulatory principles are enforced, enabling antitrust authorities to intervene in a timely manner and to have access to relevant information, in order to evaluate cases of market misconduct. Labour-market policies should embrace flexibility, allowing AI talent to flow across different firms, but policies should also give workers adequate social protection.
Meanwhile, employment contracts should be adjusted to incorporate opportunities for life-long learning, because in the age of AI skills depreciate rapidly. Different jobs have different training needs in terms of keeping up to date with relevant technological developments. On-the-job training should be designed taking into consideration the occupational characteristics and how dynamically these characteristics evolve due to technology.
The results in Deming and Noray (2020) are striking. They studied the impact of changing job skills on career earnings for US college graduates. They found that college graduates in all fields experienced rapid earnings growth. Yet the relative earnings advantage for graduates in subjects such as computer science, engineering and business was highest at labour-market entry and declined rapidly over time. Flatter wage growth for technology-intensive majors coincides with their faster exit from career-specific occupations. This implies that in order to prolong technology-intensive careers, investments in human capital should not stop at the point of entry into the labour market. Continued investment in lifelong learning and training is needed, especially as digital technologies penetrate more and more sectors and occupations.
AI, as a general-purpose technology, is expected to affect various aspects of our lives. Getting significant economic and social benefits from it requires a multidimensional policy toolkit to be defined. If this toolkit promotes a human-centric approach, it will result in widespread adoption of AI, increasing innovation, competitiveness and social welfare.
References
Acemoglu, D., G.W. Anderson, D.N. Beede, C. Buffington, E.E. Childress, E. Dinlersoz ... N. Zolas (2022) ‘Automation and the workforce: A firm-level view from the 2019 Annual Business Survey’, NBER Working Paper w30659, National Bureau of Economic Research
Becker, B. (2015) ‘Public R&D Policies and Private R&D Investment: A Survey of the Empirical Evidence’, Journal of Economic Surveys 29(5): 917-942
Bloom, N., J. Van Reenen and H. Williams (2019) ‘A toolkit of policies to promote innovation’, Journal of Economic Perspectives 33(3): 163-84
Brynjolfsson, E. and G. Petropoulos (2021) ‘The coming productivity boom’, MIT Technology Review, 10 June
Brynjolfsson, E., D. Rock and C. Syverson (2021) ‘The Productivity J-Curve: How Intangibles Complement General Purpose Technologies’, American Economic Journal: Macroeconomics 13(1): 333-72
Deming, D.J. and K. Noray (2020) ‘Earnings Dynamics, Changing Job Skills, and STEM Careers’, Quarterly Journal of Economics 135(4): 1965-2005
Parker, G., G. Petropoulos and M.W. Van Alstyne (2022) ‘Digital platforms and antitrust’ in E. Brousseau, J.-M. Glachant and J. Sgard (eds) The Oxford Handbook of Institutions of International Economic Governance and Market Regulation, Oxford University Press
Petropoulos, G. and M. Kapur (2022) ‘Artificial intelligence: increasing labour productivity in a responsible way’, Transatlantic Expert Group on the Future of Work, available at https://www.bruegel.org/sites/default/files/2022-11/TEG_241122_1.pdf
Philippon, T. (2019) The Great Reversal: How America Gave Up on Free Markets, Belknap Press of Harvard University Press
Wang, J., G. Petropoulos, and S. Steffen (2021) ‘Concentration of artificial intelligence and other frontier IT skills’, Bruegel Blog, 21 October
| 2023-03-01T00:00:00 |
https://www.bruegel.org/analysis/artificial-intelligence-how-get-most-labour-productivity-boost
|
[
{
"date": "2023/03/01",
"position": 91,
"query": "artificial intelligence employment"
},
{
"date": "2023/03/01",
"position": 40,
"query": "AI regulation employment"
},
{
"date": "2023/03/01",
"position": 79,
"query": "government AI workforce policy"
},
{
"date": "2023/03/01",
"position": 13,
"query": "AI labor union"
},
{
"date": "2023/03/01",
"position": 10,
"query": "artificial intelligence workers"
},
{
"date": "2023/03/01",
"position": 10,
"query": "artificial intelligence wages"
}
] |
|
Artificial intelligence and labour market matching
|
Artificial intelligence and labour market matching
|
https://ideas.repec.org
|
[
"Stijn Broecke",
"Oecd",
"Author",
"Listed"
] |
by S Broecke · 2023 · Cited by 31 — While still in its infancy, Artificial Intelligence (AI) is increasingly used in labour market matching, whether by private recruiters, public and private ...
|
While still in its infancy, Artificial Intelligence (AI) is increasingly used in labour market matching, whether by private recruiters, public and private employment services, or online jobs boards and platforms. Applications range from writing job descriptions, applicant sourcing, analysing CVs, chat bots, interview schedulers, shortlisting tools, all the way to facial and voice analysis during interviews. While many tools promise to bring efficiencies and cost savings, they could also improve the quality of matching and jobseeker experience, and even identify and mitigate human bias. There are nonetheless some barriers to a greater adoption of these tools. Some barriers relate to organisation and people readiness, while others reflect concerns about the technology and how it is used, including: robustness, bias, privacy, transparency and explainability. The present paper reviews the literature and some recent policy developments in this field, while bringing new evidence from interviews held with key stakeholders.
| 2023-01-30T00:00:00 |
2023/01/30
|
https://ideas.repec.org/p/oec/elsaab/284-en.html
|
[
{
"date": "2023/03/01",
"position": 98,
"query": "artificial intelligence employment"
}
] |
AI, Job Displacement, and Lessons from the Past
|
AI, Job Displacement, and Lessons from the Past: Navigating the Future of Work with Insights from the 1900s
|
https://www.linkedin.com
|
[
"Bernard Marr",
"Iot For All",
"Robert Lienhard",
"Gerald Aguinaldo",
"Stephen Howell",
"Empowering Digital Experiences With Conversational Ai",
"Founder",
"Architect",
"Product Owner Behind The Kognetiks Chatbot For Wordpress",
"Responsible For Strategy"
] |
The advent of industrial automation brought about profound changes in the workforce. This led to job displacement and the need for workers to adapt to new, ...
|
As artificial intelligence (AI) continues to advance and integrate into various industries, concerns regarding job displacement and societal changes echo those of the early 20th century during the rise of industrial automation. This article examines the parallels and distinctions between AI technology adoption, with a focus on products like ChatGPT, and the technological transformations of the early 1900s. By understanding the similarities and differences, we can better prepare for the potential consequences and capitalize on the opportunities AI presents to us.
Historical Perspective: Early 20th Century Technological Advances
Taking a step back to examine historical technological advances offers valuable context for understanding the impact of AI and automation on today's workforce. In the early 20th century, innovations such as assembly lines, mechanized production methods, and electricity revolutionized manufacturing and altered the employment landscape. The rise of industrial automation during this period had profound effects on productivity, production costs, and workforce demands. By exploring the past, we can gain a deeper appreciation of the challenges and opportunities AI presents today.
The Rise of Industrial Automation
The rise of industrial automation in the early 20th century marked a pivotal moment in the evolution of manufacturing. The introduction of assembly lines allowed for the mass production of goods, significantly increasing productivity and efficiency. Mechanized production methods, along with electricity, streamlined operations, leading to lower production costs and greater competition. These advances also shifted workforce demands: manual laborers saw their roles replaced by machines, and workers had to acquire new skills and adapt to an increasingly automated environment. This led to a reevaluation of the role of education in preparing workers for the job market. By reflecting on the challenges faced during this period of industrial transformation, we can draw valuable lessons for our approach to AI and automation in today's workforce.
Job Displacement and Societal Changes
The advent of industrial automation brought about profound changes in the workforce, leading to job displacement and forcing workers to adapt to new, automated environments. Manual laborers saw their skills become obsolete or insufficient, and society had to respond to these shifts and devise strategies to mitigate the negative impacts. The job displacement caused by automation prompted education reform, with an increased focus on technical and vocational training, and gave rise to labor unions to protect workers' rights and interests. The widespread implementation of AI technologies may likewise result in significant job displacement, so understanding these earlier societal changes is essential in navigating the complexities of our AI-driven future.
AI Technology Adoption: ChatGPT and Beyond
The rapidly evolving landscape of artificial intelligence has brought forth groundbreaking technologies like ChatGPT, which has already in a very short time significantly influenced various aspects of modern society. From customer service to content creation, from data analysis to personalized experiences, AI technologies are reshaping the way we live, work, and communicate. The future is here now and AI adoption is likely to have a weighty impact on diverse industries considering all the advantages it offers, including increased efficiency, personalization, and scalability. By understanding AI's role in modern society, we can better anticipate future developments and harness its full potential to drive innovation and growth.
AI's Role in Modern Society
The integration of AI technologies, such as ChatGPT, into various aspects of daily life has transformed the way we interact, create, and analyze information. As AI continues to evolve at a rapid pace, it is crucial to understand its role in modern society and the myriad benefits it promises. AI has permeated key areas such as customer service, content creation, and data analysis, to name just a few. By examining AI's impact and the advantages of its adoption, including increased efficiency, personalization, and scalability, we can better prepare to leverage its potential to enhance our lives and businesses. ChatGPT and similar AI technologies have already reached many aspects of our daily lives, sometimes without us realizing it.
The AI Job Displacement Debate
As AI becomes more pervasive, concerns about job displacement will intensify. Concerns about AI's impact on the workforce mirror those of the early 20th century, with potential consequences such as the replacement of repetitive manual tasks, an increased demand for new skills and expertise, and a shift towards more creative and strategic roles. By understanding these implications now, we can better anticipate AI's effects on the job market and the roles workers will occupy in the future.
The AI job displacement debate is an ongoing and complex discussion centered around the impact of artificial intelligence on the workforce, both in terms of potential job losses and the creation of new jobs. As AI technologies become increasingly sophisticated, there will be growing concern that many jobs currently performed by humans will be replaced by machines, leading to widespread unemployment and social unrest.
Proponents of AI argue that its adoption will ultimately lead to the creation of new jobs that have not yet been envisioned, much like how the Industrial Revolution gave rise to new professions. They emphasize that AI can be used to augment human labor, increasing productivity and efficiency while allowing workers to focus on more strategic and creative tasks. Furthermore, proponents point out that AI has the potential to improve the overall quality of life by automating mundane and repetitive tasks, freeing up time for more meaningful pursuits.
On the other hand, critics of AI assert that the rate of job displacement may outpace the creation of new jobs, leaving many workers unable to adapt and find new employment. They argue that the shift in workforce demands will disproportionately affect low-skilled workers and exacerbate existing income inequality. Critics also emphasize the importance of implementing social safety nets, such as universal basic income, to support those who may be negatively affected by the rapid adoption of AI technologies.
To address the AI job displacement debate, a multifaceted approach is necessary. This includes investing in education and retraining programs to equip workers with the skills needed to succeed in an AI-driven economy, fostering collaboration between industry, academia, and government to create a supportive ecosystem and implementing forward-thinking policies that protect workers while encouraging innovation.
As the debate continues, it is essential for all stakeholders to engage in open and informed discussions about the challenges and opportunities presented by AI. By fostering a proactive approach to workforce development, regulation, and ethics, society can better navigate the complexities of AI adoption and ensure a more inclusive and resilient future.
Comparing AI Adoption to Early 20th Century Technological Advances
In the pursuit of understanding the transformative power of artificial intelligence, it is essential to draw comparisons to historical technological advancements and examine how society can best prepare for AI's impact on the workforce. This analysis requires a comprehensive examination of the similarities and differences between AI adoption and early 20th-century technological advances, as well as strategies to ensure a smooth transition for society.
Similarities
Drawing parallels between AI adoption and early 20th-century technological advances can provide valuable insights into the potential implications of AI on the workforce and society as a whole. By examining the similarities between these two transformative periods, we can learn from historical experiences and better anticipate the challenges and opportunities that lie ahead. There are commonalities that both eras share, such as rapid technological progress, disruption of traditional industries, increased productivity and efficiency, job displacement, and the need for workforce reskilling. Understanding these shared aspects allows us to contextualize AI's impact and inform our approach to addressing its potential consequences.
Differences
While the similarities between AI adoption and early 20th-century technological transformations offer valuable insights, it is equally important to acknowledge the key differences that set AI apart from previous innovations. Understanding these distinctions allows us to better comprehend the unique challenges and opportunities that AI presents. AI reaches a broader range of industries, the rate of change is faster, and AI can augment human capabilities rather than simply displace workers. These distinct aspects of AI adoption will have far-reaching effects on the workforce and the wider society.
Preparing for AI's Impact on Society
As we navigate the transformative landscape of artificial intelligence, it is imperative for society to proactively address the challenges and opportunities AI presents. Preparing for AI's impact on the workforce involves a two-pronged approach: fostering an environment that supports education and reskilling, and adapting to the rapid pace of AI's evolution. By cultivating a holistic strategy that encompasses workforce development, regulatory frameworks, and embracing the potential of AI as an augmentative tool, we can ensure a more resilient and inclusive future for all members of society.
To minimize the negative effects of AI on the workforce, it is crucial to invest in education and workforce training, develop programs for lifelong learning, and foster collaboration between industry, academia, and government. Adapting to AI's rapid development will require agility and flexibility in businesses and individuals as well as a proactive approach to regulation and ethics. It will also require us to embrace AI as a tool for human augmentation and collaboration.
As we consider the many facets of AI's rapid evolution and its impact on society, it is essential to address the questions and concerns that arise in this ever-changing landscape. By engaging in open dialogue and sharing knowledge, we can better equip ourselves to navigate the challenges and opportunities that AI presents.
Questions and Answers to Ponder about AI Adoption
How is AI adoption like early 20th-century technological advances?
AI adoption is like early 20th-century technological advances in several ways, such as rapid technological progress, disruption of traditional industries, increased productivity and efficiency, and job displacement. Both periods witnessed transformative innovations that reshaped the workforce, necessitating reskilling and adaptation to new technologies.
What are the key differences between AI adoption and early 20th-century technology transformations?
The key differences between AI adoption and early 20th-century technology transformations include AI's broader reach, impacting a wider range of industries; the faster pace of change, leading to quicker adoption and adaptation; and a greater potential for augmentation, rather than pure displacement of human workers, as AI can enhance human capabilities and foster collaboration.
How can society prepare for the potential consequences of AI on the workforce?
Society can prepare for the potential consequences of AI on the workforce by investing in education and workforce training, developing programs for lifelong learning, and fostering collaboration between industry, academia, and government. Additionally, promoting flexibility and agility for businesses and individuals, taking a proactive approach to regulation and ethics, and embracing AI as tools for human augmentation and collaboration will also help mitigate the negative effects and capitalize on the opportunities AI presents.
In Conclusion
The adoption of AI technology, including products like ChatGPT, shares similarities with early 20th-century technological advances, such as increased productivity, disruption of traditional industries, and job displacement. However, there are notable differences, including AI's broader reach, faster rate of adoption leading to a swifter pace of change, and the potential for human augmentation. By recognizing these parallels and distinctions, society can better prepare for AI's impact and capitalize on the opportunities it presents.
| 2023-03-01T00:00:00 |
https://www.linkedin.com/pulse/ai-job-displacement-lessons-from-past-navigating-future-howell
|
[
{
"date": "2023/03/01",
"position": 3,
"query": "automation job displacement"
},
{
"date": "2023/03/01",
"position": 17,
"query": "AI unemployment rate"
},
{
"date": "2023/03/01",
"position": 26,
"query": "AI regulation employment"
}
] |
|
Job Displacement: Will AI and Automation Lead to ...
|
Job Displacement: Will AI and Automation Lead to Widespread Unemployment?
|
https://www.linkedin.com
|
[
"Statistics Canada",
"Statistique Canada",
"Sajithkumar The Recruiter",
"Ewaldo Del Valle"
] |
AI and automation have sparked concerns about job displacement. While some jobs will be lost, new jobs will also be created.
|
Artificial intelligence (AI) and automation have been transforming industries, and as a result, the way we work. While these technologies have brought about many positive changes, such as increased efficiency and accuracy, they have also sparked concerns about job displacement. Will AI and automation lead to widespread unemployment? Let's take a closer look.
TL;DR: AI and automation have sparked concerns about job displacement. While some jobs will be lost, new jobs will also be created. Education and training will play a crucial role in preparing workers for the future, and a human-centered approach is essential to balancing efficiency and empathy.
The Current Landscape: Who is at Risk of Job Displacement?
As AI and automation continue to advance, some industries and job roles are at a higher risk of being automated. These include manual labor jobs, routine-based jobs, and jobs that involve repetitive tasks. On the other hand, jobs that require creativity, critical thinking, and human interaction are less likely to be automated.
The Impact on the Workforce: Job Losses and Job Creation
While it is true that some jobs will be lost due to AI and automation, it's important to note that new jobs will also be created. In fact, a report by the World Economic Forum predicts that AI and automation will create 97 million new jobs by 2025. These new jobs will require new skills, which means that upskilling and reskilling will be essential for workers to remain employable.
"I'm increasingly inclined to think that there should be some regulatory oversight, maybe at the national and international level, just to make sure that we don't do something very foolish. I mean, with artificial intelligence, we're summoning the demon." Stuart Russel, American computer scientist and AI expert
The Role of Education and Training: Preparing the Workforce for the Future
As AI and automation continue to transform the workforce, education and training will play a crucial role in preparing workers for the future. Employers, educational institutions, and policymakers must work together to develop training programs that equip workers with the skills they need to succeed in an increasingly automated world.
The Potential for a Universal Basic Income: Addressing Job Displacement
One proposed solution to job displacement is the implementation of a universal basic income (UBI), which would provide a basic income to all citizens regardless of their employment status. While UBI has its supporters, others argue that it could lead to a decrease in productivity and a reliance on government support.
The Importance of a Human-Centered Approach: Balancing Efficiency and Empathy
While AI and automation can bring about many benefits, it's important to ensure that a human-centered approach is taken to their implementation. This means balancing efficiency and empathy, and ensuring that workers are not left behind as industries are transformed.
According to the World Economic Forum's 2018 Future of Jobs report, AI and automation were projected to displace 75 million jobs but also create 133 million new jobs by 2022.
In a survey of 4,000 Americans conducted by the Pew Research Center in 2018, 65% of respondents believed that within 50 years, robots and computers will do much of the work currently done by humans.
A 2017 study by McKinsey & Company found that up to 375 million workers worldwide may need to switch occupational categories and learn new skills by 2030 due to automation.
The same McKinsey & Company study found that while up to 800 million jobs could be lost to automation by 2030, up to 890 million new jobs could also be created, resulting in a net gain of up to 90 million jobs.
The retail industry is expected to be one of the most affected by job displacement due to automation, with up to 7.5 million jobs at risk of being automated by 2026, according to a 2017 report by Cornerstone Capital Group.
As we move into an era of AI and automation, it's natural to feel concerned about the impact they will have on our workforce. Job displacement is a reality that we cannot ignore, but we must also remember that there is hope on the horizon. By embracing a human-centered approach and working collaboratively, we can create a future where AI and automation bring about positive changes while also addressing concerns about job displacement.
It's true that some industries will experience job loss, but we must not forget that new jobs will also be created. The key is to equip workers with the skills they need to remain employable in an ever-changing landscape.
| 2023-03-01T00:00:00 |
https://www.linkedin.com/pulse/job-displacement-ai-automation-lead-widespread-tom-w%C3%BCrzburg
|
[
{
"date": "2023/03/01",
"position": 9,
"query": "automation job displacement"
},
{
"date": "2023/03/01",
"position": 9,
"query": "reskilling AI automation"
}
] |
|
What will be the economic impact of widespread ...
|
The heart of the internet
|
https://www.reddit.com
|
[] |
Incomes will go down, but goods and services will be produced at lower cost. Can widespread automation lead to an economic depression?
|
Incomes will go down, but goods and services will be produced at lower cost. Can widespread automation lead to an economic depression? If the world's purchasing power goes down due to automation, wouldn't world GDP take a hit? If people buy fewer goods and services, wouldn't that reduce profits for companies despite all the automation they have put in, since demand for goods will fall?
I have many such questions related to the world economy in the presence of AI. Is there any research going on in this area?
| 2023-03-01T00:00:00 |
https://www.reddit.com/r/AskEconomics/comments/126gxjw/what_will_be_the_economic_impact_of_widespread/
|
[
{
"date": "2023/03/01",
"position": 10,
"query": "automation job displacement"
}
] |
|
800 million jobs could be lost to AI and Automation by 2030
|
800 million jobs could be lost to AI and Automation by 2030
|
https://medium.com
|
[
"Yosef Nesirat"
] |
According to the World Economic Forum (WEF), automation and AI could displace 85 million jobs globally by 2025, but they could also create 97 million new jobs ...
|
Artificial Intelligence (AI) is on the rise, and experts are warning that it could lead to a significant loss of jobs. In fact, a study by the McKinsey Global Institute suggests that between 400 and 800 million jobs worldwide could be displaced by automation and AI by 2030. A PwC study in 2018 estimated that AI and automation could affect around 30% of jobs in the UK by the mid-2030s, with the greatest impact on jobs in the transportation and storage, manufacturing, and wholesale and retail trade sectors.
While the fear of displacement is understandable, it’s worth noting that AI can also create new jobs. According to the World Economic Forum (WEF), automation and AI could displace 85 million jobs globally by 2025, but they could also create 97 million new jobs in industries such as data analysis, AI development, and content creation.
It’s difficult to predict the exact percentage of jobs that will be replaced by AI, but it’s clear that we’re in the midst of a fourth industrial revolution characterized by the integration of digital, physical, and biological systems and technologies, such as the Internet of Things, AI, and biotechnology.
So, how can we prepare for this disruption? Here are some steps you can take:
Stay informed: Keep up-to-date with the latest developments in AI and how they may impact your industry or job. Read industry publications, attend conferences, and participate in online forums and communities to stay informed about emerging trends.
Learn new skills: Identify the skills that will be in demand in an AI-driven economy, such as data analysis, programming, and AI development. Consider taking courses or pursuing certifications to develop these skills and increase your employability.
Adapt to new roles: As AI becomes more prevalent, some jobs may be automated or transformed, while others may emerge. Be open to adapting to new roles and responsibilities, and be willing to learn new skills as needed.
Focus on creative and social skills: While some jobs may be replaced by AI, others will require skills that are difficult for machines to replicate, such as creativity, critical thinking, and social skills. Focus on developing these skills, as they will become increasingly valuable in an AI-driven economy.
Network and build relationships: Building strong relationships with colleagues, mentors, and other professionals in your industry can help you stay informed about emerging trends and opportunities, and may also lead to new job opportunities.
Embrace lifelong learning: In an AI-driven economy, the ability to learn new skills and adapt to new roles will be critical. Embrace lifelong learning as a way to stay ahead of the curve and remain employable in a rapidly changing job market.
As we continue to navigate the impact of AI on the job market, it’s important to remember that technology is not the enemy. Instead, we should focus on how we can use it to our advantage, and prepare ourselves to thrive in a changing landscape.
The impact of AI on the job market is inevitable, but we can take steps to prepare ourselves for it. By staying informed, learning new skills, adapting to new roles, focusing on creative and social skills, networking and building relationships, and embracing lifelong learning, we can position ourselves for success in an AI-driven economy.
So, let’s take action today and start preparing for the future. The world is changing rapidly, and we have to keep up with it to stay relevant and succeed.
| 2023-04-01T00:00:00 |
2023/04/01
|
https://medium.com/leadership-by-design/800-million-jobs-could-be-lost-to-ai-and-automation-by-2030-26f78cf89b9c
|
[
{
"date": "2023/03/01",
"position": 11,
"query": "automation job displacement"
},
{
"date": "2023/03/01",
"position": 8,
"query": "job automation statistics"
},
{
"date": "2023/03/01",
"position": 7,
"query": "AI job creation vs elimination"
},
{
"date": "2023/03/01",
"position": 14,
"query": "AI labor market trends"
},
{
"date": "2023/04/01",
"position": 9,
"query": "AI job losses"
}
] |
Predictions on AI Automation in IT: Which Jobs are at ...
|
The heart of the internet
|
https://www.reddit.com
|
[] |
For back end devs, an increase in capacity for work powered by AI will definitely create the risk of downsizing teams, but this will lower morale and make ...
|
As artificial intelligence technology advances, there is growing concern about its potential impact on the job market, particularly in the IT sector. I would like to gather some predictions on which IT jobs are at higher risk of being automated by AI first. Specifically, I am interested in exploring the probability of automation among various roles such as web development, UI/UX design, front-end and back-end development, cloud computing, DevOps, data analytics, and others. Please share your insights and opinions on which IT jobs you think are most likely to be automated by AI, and provide the list in decreasing order of probability. Thank you!
| 2023-03-01T00:00:00 |
https://www.reddit.com/r/singularity/comments/11znryw/predictions_on_ai_automation_in_it_which_jobs_are/
|
[
{
"date": "2023/03/01",
"position": 14,
"query": "automation job displacement"
},
{
"date": "2023/03/01",
"position": 28,
"query": "job automation statistics"
}
] |
|
Artificial Intelligence: AI Impact on Job Sector and Society
|
Artificial Intelligence: AI Impact on Job Sector and Society
|
https://medium.com
|
[
"Mohd Usama"
] |
For example, AI-powered chatbots can automate customer service, reducing the need for human representatives. This could lead to job displacement for workers in ...
|
Before starting, let's understand AI.
Artificial Intelligence, or AI, refers to the development of intelligent machines that can perform tasks that typically require human intelligence, such as learning, problem-solving, perception, and decision making. AI is achieved through the development of algorithms and statistical models that enable machines to analyze and interpret large amounts of data and perform tasks that were previously only possible for humans.
Let's understand this with the example of self-driving cars. Self-driving cars are equipped with a variety of sensors, such as radar, lidar, and cameras, that allow them to detect and respond to their surroundings. The data from these sensors is analyzed by AI algorithms, which enable the car to make decisions in real time about steering, acceleration, and braking.
For instance, if a self-driving car approaches a stop sign, the AI algorithm will recognize the sign, interpret its meaning, and instruct the car to slow down and stop at the appropriate distance from the sign. Similarly, if a pedestrian suddenly steps out into the road, the car’s AI algorithms will detect the pedestrian, recognize the potential danger, and take evasive action to avoid a collision.
The development of self-driving cars is a complex process that requires the integration of multiple AI systems, including computer vision, natural language processing, and machine learning. While there are still technical and regulatory challenges to overcome, the potential benefits of self-driving cars are significant, including reduced traffic congestion, improved safety, and increased mobility for people who cannot drive themselves.
Other examples of AI in our society include voice assistants like Siri and Alexa, image recognition technology used in security cameras, and fraud detection algorithms used by banks and credit card companies.
AI Impact on Job Sector:
AI has the potential to significantly impact the job sector by automating certain tasks and reducing the need for human labor in some industries. One of the main effects of AI in the job sector is job displacement. As machines become more intelligent, they may be able to perform tasks that were previously done by humans, leading to job loss in some industries. For example, AI-powered chatbots can replace humans in customer service and support roles. Self-driving cars can disrupt the transportation industry, potentially leading to job loss for truck drivers and taxi drivers.
However, it’s important to note that AI can also create new jobs in fields such as data analysis and programming. In some cases, AI can even enhance human performance and productivity by augmenting human abilities with intelligent algorithms.
A recent case study of AI's impact on the job sector comes from the finance industry, where AI is being used to automate tasks such as data entry, document processing, and compliance checks. While this could lead to job displacement for some workers, it could also free up time for finance professionals to focus on higher-level tasks such as financial analysis and strategy.
Another example of AI’s impact on the job sector comes from the retail industry. Retailers such as Walmart and Amazon are using AI to automate tasks such as inventory management and pricing. For example, Walmart has developed an AI system that predicts which items are likely to run out of stock, allowing them to restock more efficiently. While this could lead to job displacement for some workers, it could also lead to increased efficiency and cost savings for the companies, which could be passed on to consumers.
The Future of Jobs Report 2020 by the World Economic Forum points in the same direction: some roles will be displaced while new ones emerge.
Thus, AI's impact on the job sector is ambiguous.
AI Impact on New Job Creation:
While AI has the potential to automate certain tasks and reduce the need for human labor in some industries, it can, on the other hand, also create new jobs in fields such as data analysis and programming.
One of the ways AI is creating new jobs is by increasing the demand for data analysts and data scientists. As companies collect more data, they need skilled professionals who can analyze and interpret that data to make strategic decisions.
Another way AI is creating new jobs is by increasing the demand for programmers and developers. As companies adopt AI technologies, they need skilled programmers who can develop and maintain those systems.
A recent case study of AI’s role in new job creation comes from the healthcare industry. For example, AI-powered medical imaging systems can analyze medical images to identify potential health issues before they become serious. This could lead to increased demand for radiologists who can interpret those images.
Additionally, AI can also create new jobs in healthcare by increasing the demand for healthcare professionals who can work with AI systems. For example, healthcare professionals may need to work closely with AI-powered systems to develop treatment plans or interpret data.
Finally, let's end on a positive note: AI is creating new job opportunities in fields such as data analysis, programming, and healthcare.
AI Impact on Society:
AI also has the potential to significantly impact our society, in both positive and negative ways.
One of the positive effects of AI on society is its ability to enhance healthcare. AI-powered systems can analyze vast amounts of medical data to help doctors make more accurate diagnoses and develop more effective treatments. For example, the FDA recently approved an AI-powered device that can detect diabetic retinopathy, a condition that can lead to blindness, in patients. This technology could potentially help millions of people who are at risk for this condition.
Another positive effect of AI on society is its ability to increase efficiency in various industries. For example, AI-powered systems can optimize supply chains, reducing waste and saving money. This could have a positive impact on the environment by reducing the number of resources that are consumed.
However, AI can also have negative effects on society. One potential negative effect is job displacement. As machines become more intelligent, they may be able to perform tasks that were previously done by humans, leading to job loss in some industries. For example, AI-powered chatbots can automate customer service, reducing the need for human representatives. This could lead to job displacement for workers in the customer service industry.
Another potential negative effect of AI on society is bias. AI systems are only as unbiased as the data they are trained on. If the data is biased, the AI system may produce biased results. For example, an AI-powered recruiting system developed by Amazon was found to be biased against women. The system was trained on resumes submitted to Amazon over a 10-year period, which were predominantly from men. As a result, the system learned to prefer male candidates over female candidates.
A recent case study of AI’s impact on society comes from the criminal justice system. AI-powered systems are being used to predict the likelihood of recidivism, or the likelihood that a person will commit another crime after being released from prison. For example, a study by ProPublica found that an AI-powered recidivism prediction system was biased against black defendants. While these systems can potentially help judges make more informed decisions about sentencing, they can also perpetuate bias if the data they are trained on is biased.
In conclusion, AI’s impact on society is complex and ambiguous. While AI has the potential to enhance healthcare, create new jobs, and increase efficiency in various industries, it can also perpetuate bias and lead to job displacement.
A big thanks for reading.
| 2023-03-12T00:00:00 |
2023/03/12
|
https://medium.com/@usama.6832/artificial-intelligence-ai-impact-on-job-sector-and-society-e6fe1245e83e
|
[
{
"date": "2023/03/01",
"position": 30,
"query": "automation job displacement"
}
] |
Industrial Robots and Regional Fertility in European ...
|
Industrial Robots and Regional Fertility in European Countries
|
https://link.springer.com
|
[
"Matysiak",
"Annamatysiak Uw.Edu.Pl",
"Interdisciplinary Centre For Labour Market",
"Family Dynamics",
"Faculty Of Economic Sciences",
"University Of Warsaw",
"Warsaw",
"Bellani",
"Daniela.Bellani Unifi.It",
"Department Of Political"
] |
by A Matysiak · 2023 · Cited by 27 — Yet, automation does not only destroy jobs but also increases productivity and thereby facilitates job creation. The newly created jobs often require different ...
|
5.1 Data
Our study is based on regional NUTS-2 data. The nomenclature of territorial units for statistics (NUTS) is a hierarchical system for dividing up the economic territory of the European Economic Area, the UK, and Switzerland for the purpose of data collection and socio-economic analyses. NUTS-2 regions are roughly equally populated, with populations ranging from 0.8 to 3 million, and they are the smallest geographical units for which employment data are available in Eurostat for all six countries of interest. We observe the countries from roughly the start of robotisation until 2017. This means we cover the years 1997–2017 for the old EU member states and the UK and 2007–2017 for Czechia and Poland. Fully covering the 1990s for the old EU member states was not possible due to data availability.
To measure fertility, we use the TFR and the age-specific fertility rates for the following age groups: 20–24, 25–29, 30–34, 35–39, 40–44 and 45+. These data have been provided by Eurostat at the NUTS-2 level since 1990. They are computed by combining national statistics on births by mother's age with the population of women by age. They are fairly complete, with some missing data in the fertility of women aged 45+ (around 10% of all observations). We use simple linear interpolation to fill them in.
To measure workers' exposure to automation we use data on industrial robot stocks provided by the International Federation of Robotics (henceforth IFR). Industrial robots are defined by the IFR as fully autonomous machines that do not require a human operator. Their main tasks are handling operations and machine tending (55% of all European robots fall into this category) and welding and soldering (22% of all European robots) (Jurkat et al., 2022). The IFR has provided annual data on the operational stock of industrial robots by country and industry since 1993. The industries are coded according to the International Standard Industrial Classification of all economic activities (ISIC, UN, 2008). The stocks of robots are provided by the IFR at the 1-digit level for all ISIC industries, and at up to 3 digits for manufacturing industries. The IFR data are complete. We utilise records at 1 digit for the following three 'heavy' industries: mining and quarrying; electricity, gas and water supply; and construction. We utilise records at 2 digits for the remaining 13 manufacturing industries to match our regional employment structure data, which are also coded in 2-digit industry categories. We do not include non-industrial categories such as Services, Public Administration, or Education, as those industries employ predominantly service rather than manufacturing robots, and at a much smaller scale than robots operating in manufacturing or 'heavy' industries (Hajduk and Koukolova, 2015).
The data on robots are linked to data on regional employment structures by industry using the methodology developed by Acemoglu and Restrepo (2020) and described in detail in Sect. 5.2. Eurostat has provided NUTS-2 regional employment structures by 2-digit industry codes classified according to Nomenclature of Economic Activities (NACE Rev. 1.2 before 2008, NACE Rev. 2 after 2008) since 1986. We reclassify these data to the ISIC classification to match them to robot stocks. Moreover, since our main covariate (explained in detail in Sect. 5.2) relies on summation of employment numbers over time, we impute missing records of the regional employment structure. Finally, changes in the past NUTS classifications require reclassifying regional codes to one, consistent version. Both reclassifications and the imputation are described in detail in the Appendix in Tables 6 and 7.
Besides fertility rates, Eurostat online database provides us also with NUTS-2 level controls by calendar year, as well as potential moderators, which we interact with our main explanatory variable in order to test our research hypotheses. We include the following set of controls at the regional level: share of population aged 15–24, share of population aged 25–49, share of population aged 50 + , share of highly educated (ISCED levels 5–8), ratio of share of highly educated women to share of highly educated men, the square of the latter and women’s economic activity rate. The variables denoting population structure by age are introduced to control for any variation in population exposed to childbearing. We also account for the population education level given the educational gradient in fertility (Wood et al., 2014). The share of highly educated women relative to highly educated men and the square of this ratio aim at capturing the difficulties to find a partner in regions with better educated female population (Bellani et al., 2017) given that partners tend to form unions if they have similar education levels (de Hauw et al., 2017). Finally, women’s economic activity rate is also tightly linked to fertility.
The potential moderating variables are also measured at the regional level. They are the initial (measured around the onset of robot adoption) proportion of workers employed outside of manufacturing (used to test H1), the initial proportion of women employed in manufacturing over the proportion of men in manufacturing (H2), the proportion of highly educated persons (time-varying) (H3) and the proportion of workers employed in technology- and knowledge-intensive sectors (time-varying) (H4). The control and moderating variables are fairly complete. Any missing values were imputed via linear interpolation; this was done in 14% of cases for population structure by education and in at most 25% of cases for employment data. There are no cases in which the entire time series for a specific region is missing.
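As a rough illustration of this kind of gap-filling, the sketch below interpolates missing values within each region's own time series using pandas; the data frame, column names and values are hypothetical and are not taken from the Eurostat extracts used in the paper.

```python
import pandas as pd

# Hypothetical long-format panel: one row per region-year, with gaps
# in an age-specific fertility series (all values are made up).
df = pd.DataFrame({
    "region": ["R1"] * 5 + ["R2"] * 5,
    "year": list(range(2005, 2010)) * 2,
    "asfr_45plus": [0.8, None, 0.9, None, 1.0, 0.5, 0.6, None, None, 0.9],
})

# Linear interpolation within each region's own time series, so values
# from one region never leak into another region's gaps.
df = df.sort_values(["region", "year"])
df["asfr_45plus"] = (
    df.groupby("region")["asfr_45plus"]
      .transform(lambda s: s.interpolate(method="linear"))
)
print(df)
```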
After accounting for the NUTS reclassifications and excluding foreign territories (see Table 7 in the Appendix), we have data for 34 NUTS 2 regions in Germany, 22 in France, 20 in Italy, 35 in the UK, 16 in Poland, and 8 in Czechia. We pool the data for Czechia and Poland due to the smaller number of regions in the two post-socialist countries and their similarities when it comes to labour market and family policy institutions, economic developments and delayed start of automation in comparison with Western Europe. In total, we have 680 observations for Germany, 440 for France, 400 for Italy, 700 for the UK and 240 for Czechia and Poland jointly.
5.2 Methods
Our methodology relies on regressing fertility rates against workers’ exposure to robotisation as well as a set of control variables mentioned in Sect. 5.1, separately for Germany, Italy, France, the UK and the group formed by Czechia and Poland.
We quantify workers’ exposure to robotisation following the methodology developed by Acemoglu and Restrepo (2020) and used, among others, in Dauth et al. (2021), Anelli et al. (2021), and O’Brien et al. (2022):
$$Exposure\ to\ robots_{r,t} = \sum_{i = 1}^{N} \frac{empl_{r,i,t_{0}}}{empl_{r,t_{0}}} \left( \frac{robots_{i,t}^{C}}{empl_{i,t_{0}}} \right) \qquad (1)$$
where \(robots_{i,t}^{C}\) is the country-level stock of robots across industries in year \(t\); \(empl_{{i,t_{0} }}\) identifies the total number of workers (in 10 thousands) employed in sector \(i\) in \(t_{0}\), i.e. at the start of the robotisation (hereafter initial) and \(\frac{{empl_{{r,i,t_{0} }} }}{{empl_{{r,t_{0} }} }}\) denotes the initial distribution of employment in industry i across regions. Effectively, \(\frac{{robots_{i,t}^{C} }}{{empl_{{i,t_{0} }} }}\) captures robots adopted in industry i and country c replacing its initial employment, while \(\frac{{empl_{{r,i,t_{0} }} }}{{empl_{{r,t_{0} }} }}\) disaggregates it onto regions. We set \(t_{0}\) to 1994 for Western European countries and to 2004 for Czechia and Poland, as those are years when robotisation started in those respective countries (see Sect. 3). The measure defined in Eq. 1 is known as “shift-share instrument” or “Bartik instrument” (Goldsmith-Pinkham et al., 2020).
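As a rough, self-contained sketch of how the shift-share measure in Eq. (1) can be computed, the snippet below combines country-level robot stocks per industry, initial industry employment, and initial regional employment shares. All numbers, region codes and column names are illustrative stand-ins, not the IFR or Eurostat figures used in the paper.

```python
import pandas as pd

# Country-level robot stock by industry in year t (illustrative values).
robots = pd.DataFrame({
    "industry": ["automotive", "metal", "food"],
    "robots_2015": [12000.0, 4000.0, 800.0],
})

# Initial (t0) national employment by industry, in 10,000s of workers.
empl_t0 = pd.DataFrame({
    "industry": ["automotive", "metal", "food"],
    "empl_t0": [80.0, 50.0, 60.0],
})

# Initial regional employment shares by industry: empl_{r,i,t0} / empl_{r,t0}.
shares = pd.DataFrame({
    "region":   ["R1", "R1", "R1", "R2", "R2", "R2"],
    "industry": ["automotive", "metal", "food"] * 2,
    "share_t0": [0.30, 0.10, 0.05, 0.02, 0.05, 0.20],
})

# Robots per 10,000 initial workers in each industry: robots_{i,t} / empl_{i,t0}.
density = robots.merge(empl_t0, on="industry")
density["robots_per_worker"] = density["robots_2015"] / density["empl_t0"]

# Exposure_{r,t} = sum_i share_{r,i,t0} * robots_per_worker_{i,t}, as in Eq. (1).
exposure = (
    shares.merge(density[["industry", "robots_per_worker"]], on="industry")
          .assign(term=lambda d: d["share_t0"] * d["robots_per_worker"])
          .groupby("region", as_index=False)["term"].sum()
          .rename(columns={"term": "exposure_to_robots_2015"})
)
print(exposure)
```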
While exposure to robots is already considered exogenous, as its variation relies on employment shares before robotisation had started, concerns about the endogeneity of \(robots_{i,t}^{C}\) might still arise, i.e. when external factors affect both robot adoption and fertility. These may be domestic or sector-specific shocks, such as policy changes. To address this issue, we follow Acemoglu and Restrepo (2020) and instrument the industry-specific stock of robots in country c, \(robots_{i,t}^{C}\), with the industry-specific stocks of robots in other countries, which serve as a proxy for advancements in robotisation in developed economies. Choosing the right country for instrumenting robot adoption in Western European countries turned out to be challenging, however. The US industry-specific stocks of robots could not be used for this purpose since robots (relative to workforce) in that country were used on a smaller scale than in Western Europe (International Federation of Robotics, 2020); thus the USA cannot be considered a pioneer of robotisation which the Western European countries would follow. Some of the East Asian economies are more advanced in robotisation than Western Europe (e.g. South Korea), but they adopt robots in different industries than European countries do. We are thus uncertain about whether Europe will follow their path. We adopt the strategy suggested by Dauth et al. (2021), who used industry-specific stocks of robots from several advanced economies as instruments of robot stocks in Germany (overidentified IV model). We thus build an overidentified model for each country with instruments built from the countries \(k \in\) {Germany, France, UK, Italy, Spain, Sweden, Norway, Finland, United States of America}. In models for Germany, France, the UK, and Italy, we exclude the country of interest and the USA, and thus apply 7 instruments. In models for Poland and Czechia, all 9 instruments are applied. Those external instruments are likely relevant, as industrial robots are manufactured by only a few international companies, which set global trends in industrial robot adoption. Thus, robot adoption in one developed economy is a good proxy for robot adoption in another one with a similar socio-economic context. The proposed set of instruments should also be valid, as there is no reason to expect that robot adoption in one developed economy has a direct influence on fertility rates in another one. To test the instruments' relevance and the validity of the overidentifying restrictions, we compute the Kleibergen-Paap rk Wald F statistic and the Hansen J statistic (Kleibergen & Paap, 2006; Sargan, 1958; Wooldridge, 2010) and report them along with full model results in the Appendix (Tables 8, 9, 10, 11, 12). Even though this strategy for instrumenting our variable of interest resulted in relevant and valid instruments, it also has a drawback. Namely, we were not able to pool all European countries and estimate one model, as that would leave us with collinear sets of instruments, which would be endogenous and thus of little use.
Our model takes the following form:
$$fertility_{r,t} = \alpha\, Exposure\ to\ robots_{r,t-2} + \beta\, Controls_{r,t-1} + \eta_{r} + v_{t} + \varepsilon_{r,t} \qquad (2)$$
where \(fertility_{r,t}\) denotes regional total and age-specific fertility rates, \(\alpha\) is our parameter of interest capturing the effect of workers' exposure to robotisation on fertility in region r, \(\eta_{r}\) corresponds to region individual effects and \(v_{t}\) are time dummies. In order to test hypotheses H1–H4, we interact \(Exposure\ to\ robots_{r,t-2}\) with the potential moderators listed in Sect. 5.1. In all models, we control for a set of demographic and socioeconomic characteristics of a region, \(Controls_{r,t-1}\), enumerated in Sect. 5.1, which may confound the effects of robot penetration on fertility. They are lagged by 1 year to avoid simultaneity issues. At the same time, we lag the exposure to robots by 2 years to account for the duration of pregnancy and for the fact that, once exposed to labour market changes, workers might take some time to decide whether or not to have a child. Equation 2 is estimated using the two-stage least squares approach with a fixed effects "within" estimator (Wooldridge, 2010). Standard errors are clustered at the region level to account for within-region dependence of the observations and to make the model robust to serial correlation.
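A minimal sketch of how a specification in the spirit of Eq. (2) could be estimated as a dummy-variable (fixed-effects) two-stage least squares regression with the linearmodels package is shown below. The file name, column names and the single foreign-built instrument are hypothetical stand-ins; the paper itself uses several instruments (an overidentified model) and a within estimator rather than explicit dummies.

```python
import pandas as pd
from linearmodels.iv import IV2SLS

# Hypothetical region-year panel; columns are illustrative, not the paper's names:
#   region, year, tfr, exposure_lag2 (endogenous regressor),
#   exposure_foreign_lag2 (instrument), share_pop_25_49, fem_activity_rate.
df = pd.read_csv("region_year_panel.csv")  # assumed input file

# Dummy-variable version of the region and year fixed effects in Eq. (2).
fe = pd.get_dummies(df[["region", "year"]].astype(str), drop_first=True, dtype=float)
exog = pd.concat([df[["share_pop_25_49", "fem_activity_rate"]], fe], axis=1)
exog.insert(0, "const", 1.0)

# Two-stage least squares: exposure_lag2 is instrumented by the exposure
# measure built from other countries' robot stocks.
model = IV2SLS(
    dependent=df["tfr"],
    exog=exog,
    endog=df["exposure_lag2"],
    instruments=df[["exposure_foreign_lag2"]],
)
result = model.fit(cov_type="clustered", clusters=df["region"])
print(result.summary)
```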
| 2023-12-14T00:00:00 |
2023/12/14
|
https://link.springer.com/article/10.1007/s10680-023-09657-4
|
[
{
"date": "2023/03/01",
"position": 59,
"query": "automation job displacement"
}
] |
The Employment Effects of Technology, Trade, and ...
|
The Employment Effects of Technology, Trade, and Consumption in Global Value Chains: Evidence for Developing Asia
|
https://search.proquest.com
|
[
"Bertulfo",
"Donald Jay",
"Gentile",
"De Vries",
"Gaaitzen J"
] |
by DJ Bertulfo · 2022 · Cited by 25 — Global value chains (GVCs) have been a vehicle for job creation in developing Asia, but technology can also displace workers through automation or reshoring ...
|
| 2023-03-01T00:00:00 |
https://search.proquest.com/openview/4504fc03329089240e7b8f9be28684c8/1?pq-origsite=gscholar&cbl=32858
|
[
{
"date": "2023/03/01",
"position": 79,
"query": "automation job displacement"
}
] |
|
how model selection affects digitization risks | Empirica
|
The future of employment revisited: how model selection affects digitization risks
|
https://link.springer.com
|
[
"Lorenz",
"Agenda Austria",
"Vienna",
"Stephany",
"Fabian.Stephany Oii.Ox.Ac.Uk",
"University Of Oxford",
"Oxford Internet Institute",
"Oxford",
"Kluge",
"Türkenstraße"
] |
by H Lorenz · 2023 · Cited by 11 — ... job displacement shifted away from skilled to unskilled labour. Similar to ... workers are employed in jobs with a high risk of automation. Until now ...
|
As we aim at predicting digitization probabilities, we need to find a model that provides a sufficiently good fit. For this purpose, we apply models that are similar to the ones tested by Frey and Osborne (2017): First, we apply a logit model as illustrated in column (1) in Table 3. Second, we test a linear discriminant analysis (LDA) with a Bayesian estimation of the dependent variable (James et al. 2013, Chapter 4). In order to compare the logit and LDA models, we look at the respective out-of-sample correlations. The comparison shows that both models perform very similarly with correlations of slightly above 0.6. Accordingly, the predictions of both models are very similar as summarized in Table 4. We favor the simpler logit model as it performs well and provides us with interpretable beta coefficients (the LDA does not and is therefore not included in Table 3).
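A rough sketch of this kind of model comparison with scikit-learn is given below. The feature matrix and labels are simulated stand-ins for the PIAAC task and background variables, and the cross-validated correlation between predicted probabilities and the observed outcome is just one simple way to approximate an out-of-sample fit measure.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict

# Simulated stand-ins: X = tasks plus individual/job/firm characteristics,
# y = binary expert consensus (1 = occupation expected to change).
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 20))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=500) > 0).astype(int)

logit = LogisticRegression(max_iter=1000)
lda = LinearDiscriminantAnalysis()

# Out-of-sample predicted probabilities via 5-fold cross-validation, then the
# correlation between those predictions and the observed outcome.
for name, model in [("logit", logit), ("LDA", lda)]:
    p = cross_val_predict(model, X, y, cv=5, method="predict_proba")[:, 1]
    print(name, "out-of-sample correlation:", np.corrcoef(p, y)[0, 1].round(3))
```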
Table 3: Model (1) works with a binary outcome of job digitalisation, while the outcome of model (2) is continuously measured between 0 and 1.
We deploy the logit model (1) to predict the digitization probabilities. We proceed according to the following formula:
$$P(y = 1 \mid X) = \frac{1}{1 + e^{-\beta'X}}, \qquad \beta'X = \beta_{0} + \beta_{1}x_{1} + \cdots + \beta_{k}x_{k} \qquad (1)$$
The digitization probabilities P(y=1|X) are estimated for all individuals in the sample, based on their set of characteristics (\(\beta 'X\)) including individual-, job- and firm-specific characteristics as well as their task structure. In doing so, individuals with professions that have not been judged by our experts now also obtain a probability. The average estimated probabilities of future digitization are shown in Fig. 1. We find the usual bimodal distribution with many jobs being exposed to digitization and many that are not. The distribution mirrors the initial expert consensus.
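The sketch below illustrates this two-step logic with statsmodels: fit the logit on the expert-labelled subset, then apply formula (1) to every individual in the sample. The simulated arrays stand in for the PIAAC characteristics and the expert consensus, so the coefficients and the printed average carry no substantive meaning.

```python
import numpy as np
import statsmodels.api as sm

# Simulated stand-ins: X holds individual characteristics and task frequencies
# (plus a constant); y is the expert consensus (1 = job expected to change),
# observed only for the subset of individuals whose occupation was judged.
rng = np.random.default_rng(1)
X = sm.add_constant(rng.normal(size=(1000, 8)))
y = (X[:, 1] - 0.8 * X[:, 2] + rng.normal(size=1000) > 0).astype(float)
judged = rng.random(1000) < 0.4          # individuals with an expert label

# Fit the logit on the judged subset, then apply formula (1) to everyone.
logit_res = sm.Logit(y[judged], X[judged]).fit(disp=0)
beta = logit_res.params
p_digitisation = 1.0 / (1.0 + np.exp(-(X @ beta)))   # P(y = 1 | X) for all individuals
print("mean predicted digitisation probability:", p_digitisation.mean().round(3))
```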
Fig. 1: Future digitalization. Jobs in Austria are polarized between high and low levels of future digitalization; the distribution of individual levels of future digitalization mirrors the initial estimation of our experts.
Based on the consensus of our experts, we are able to specify a degree of future digitization for 45 occupations. More than 75% of our experts agreed that the characteristic tasks of these professions will (or will not) change to a significant degree with the development of digital technologies and mobile robotics. With the use of the PIAAC data set, we are able to relate the degree of digitization to personal characteristics and occupation-specific tasks. Based on these relationships, we can predict the degree of digitization for all professions in the data set. In contrast to the work by Frey and Osborne (2017), we apply local experts’ opinions and perform our estimations on the basis of individual characteristics.
For some tasks we see a clear relationship with the consensus of our experts. In Fig. 2, the frequencies of the 40 tasks are compared to the consensus of our experts. Some tasks, such as coding, are, on average, performed less than once a month, while others, such as sharing information with others, are carried out on an almost daily basis. For some activities, prevalence does not differ significantly between the two consensus job groups. However, for most of the activities, a clear separation between the consensus groups is visible. Activities involving long periods of physical work are less commonly performed in professions that are expected to change during digitization, according to our experts. Other activities show the exact opposite pattern. Calculating, or the use of software such as Excel, for example, is much more prevalent in professions that are expected to change. This observation, confirmed by the findings of the inferential model, is a first indication that professions with a high degree of computer-based office routines are more likely to change in light of digital technologies.
Fig. 2: Tasks at work. Of the 40 tasks, some are performed on a daily basis, while others are carried out only once a month. Some of the activities vary significantly between jobs with a high and a low degree of future digitalization.
In addition to the 40 tasks, individual-, job- and firm-specific characteristics can help explain the consensus opinions of our experts, as shown in Table 3. The logit model (1) indicates that—apart from work activities—variables such as education, firm sector, firm size, job responsibility and job education are related to the degree of future digitization. Individuals who work in the public sector, in smaller firms or in jobs that require education or experience, are typically less likely to be employed in an occupation that is going to change significantly.
Among occupations, there is a clear trend (Figs. 3, 4): Clerical support workers, who perform simple computer-based office routines, are highly susceptible to technological changes. This is in line with previous findings (Frey and Osborne 2017; Nagl et al. 2017). On the other hand, professionals, who work with complex and unstructured information, and skilled workers in agricultural fields, who perform physical work, are less likely to experience major changes in their job profile. Professional occupations involving teaching and healthcare within legal, social or cultural environments (Fig. 4) exhibit particularly low probabilities of digital transformation. This finding is consistent for individuals working in a job that requires an academic degree. On average, most occupations show a probability of change between 40 and 60%.
Fig. 3: ISCO Level 1. For the top level of occupations, clerical professions have, by far, the highest risk of future digitalization; professionals are at the lower end of the scale.
Fig. 4: ISCO Level 2. Professional occupations involving teaching and healthcare, within legal, social or cultural environments, exhibit particularly low probabilities of digital transformation.
Up to this point, we have worked with a binary model which is similar to Frey and Osborne (2017) who also start with binary opinions of experts and extrapolate them via a classification model for all occupations. Bowles (2014) directly transfers these estimations to European labour markets. Both studies conclude that a high share of workers (47% in the US and 54% in Austria) are at high risk of computerization. Our estimate for Austria using the logit model is 45% (see Table 4). Arntz et al. (2016) and Nagl et al. (2017), on the other hand, begin with discrete probabilities and apply a fractional model in order to extrapolate. In comparison, they show that only about 12% and 9%, respectively, have a digitization risk of more than 70%.
To compare our results to the work by Arntz et al. (2016), we further apply a fractional response model (see column (2) in Table 3), which provides an even higher out-of-sample correlation of 0.68 (Footnote 8). In this model, the mean of the experts’ opinions serves as the dependent variable. Accordingly, the fractional model is estimated on a larger sample.
When comparing our findings, clear differences emerge with regard to the degree of susceptibility to digital technologies. Our estimate of the share of workers at high risk of computerization drops from 45% under the logit approach to 12% under the fractional model and is therefore much lower than in Frey and Osborne (2017). Heterogeneity, as pointed out by Arntz et al. (2017, 2020), also plays a role. If we estimated our fractional model on median task structures by occupation rather than on individual ones (hence assuming that all workers within the same occupation perform the same tasks), we would predict 20% of jobs to be at risk. This observation is in line with previous findings by Arntz et al. (2017). Predicted probabilities become smaller when heterogeneity is taken into account and non-binary models are used.
Hence, our model testing confirms that the contradictory findings in the literature are driven by (a) whether task variation is measured at the job level or only at the occupational level (Frey and Osborne 2017 analyze tasks at the occupational level and thus only allow for variation in tasks between occupations, whereas Arntz et al. (2016) go a step further to the job level, allowing for variation of tasks within the same occupation), and (b) the choice of model. Binary models yield a bimodal distribution of predicted probabilities with large high-risk groups. Fractional models lead to a bell-shaped distribution of probabilities with relatively few high-risk individuals. Our own estimates from a fractional model (Fig. 5) confirm this conjecture. The ranking of occupational classes does not change significantly under the fractional model (Fig. 6); however, predicted probabilities converge towards the mean.
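To make the contrast concrete, the following is a minimal sketch on simulated data; it is not the paper's code or data, and the covariates, coefficients and noise level are invented for illustration. Expert consensus is simulated from two hypothetical task covariates, a logit is fitted only on jobs with a clear consensus (as the binary approach does), a fractional-response GLM is fitted on the mean opinions, and both are used to compute the share of jobs with a predicted risk above 70%.

```python
# Minimal sketch on simulated data (not the paper's data or code).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 5000

# Two hypothetical task-intensity covariates (e.g., "calculating", "physical work").
X = sm.add_constant(rng.normal(size=(n, 2)))
beta = np.array([0.0, 1.2, -0.8])

# Mean expert opinion per job in (0, 1): the share of experts expecting substantial change.
mean_opinion = 1 / (1 + np.exp(-(X @ beta + rng.normal(scale=1.5, size=n))))

# (1) Binary approach: keep only jobs with a 75% consensus in either direction,
#     label them 0/1, fit a logit, then extrapolate the risk to every job.
consensus = (mean_opinion >= 0.75) | (mean_opinion <= 0.25)
y = (mean_opinion[consensus] >= 0.75).astype(int)
logit_fit = sm.Logit(y, X[consensus]).fit(disp=False)
p_logit = logit_fit.predict(X)

# (2) Fractional approach (Papke & Wooldridge 1996): model the mean opinion itself
#     with a binomial-family GLM and logit link, using every job.
frac_fit = sm.GLM(mean_opinion, X, family=sm.families.Binomial()).fit()
p_frac = frac_fit.predict(X)

print("share at high risk (p > 0.7), binary logit:", round(float(np.mean(p_logit > 0.7)), 3))
print("share at high risk (p > 0.7), fractional:  ", round(float(np.mean(p_frac > 0.7)), 3))
```

On this toy setup the logit, trained on the clear-cut cases only, pushes predictions towards the extremes and reports a noticeably larger high-risk share than the fractional model, mirroring the qualitative gap between the two approaches discussed above.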
Fig. 5 Fractional model: Similar to the work by Arntz et al. (2016) and Nagl et al. (2017), the application of a fractional model (Papke and Wooldridge 1996) yields a bell-shaped distribution of predicted probabilities
Fig. 6 ISCO Level 1: The ranking of occupational classes does not change for the fractional model. However, predicted probabilities converge to the mean
Similarly, when moving the threshold of the consensus indicator from our chosen value of 75% down to 50%, the predictions of the binary models approach those of the fractional model. The predicted digitization risks (see lower part of Table 4) now lie somewhere between the initial logit result and the fractional result. In turn, if we increase the consensus threshold to 90%, the predicted risks increase even further. Hence, the more expert opinions we force into a yes/no setting (by reducing the consensus threshold), the lower the shares of jobs at risk, because more of the underlying uncertainty in the expert opinions enters the sample and our inferential models therefore produce less clear-cut results. If, on the other hand, we use the 90% consensus, only those jobs that are clearly at risk (resp. clearly not at risk) are used, and the prediction reflects this in the form of a bimodal distribution.
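The threshold mechanics can be illustrated with the same kind of simulation; again this is only a sketch with invented numbers, not the paper's exact procedure. Jobs without the required consensus in either direction are dropped before fitting, and risks are then predicted for all jobs.

```python
# Illustrative only: how the consensus threshold reshapes the training sample
# and, through it, the predicted share of jobs at high risk.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 5000
X = sm.add_constant(rng.normal(size=(n, 2)))
beta = np.array([0.0, 1.2, -0.8])
mean_opinion = 1 / (1 + np.exp(-(X @ beta + rng.normal(scale=1.5, size=n))))

for threshold in (0.50, 0.75, 0.90):
    # Keep only jobs on which the simulated experts reach the required consensus.
    keep = (mean_opinion >= threshold) | (mean_opinion <= 1 - threshold)
    y = (mean_opinion[keep] >= threshold).astype(int)
    fit = sm.Logit(y, X[keep]).fit(disp=False)
    share = np.mean(fit.predict(X) > 0.7)          # extrapolate to every job
    print(f"consensus threshold {threshold:.2f}: predicted share at high risk = {share:.3f}")
```

With a 50% threshold every job stays in the sample and the predictions sit closest to the fractional result; the stricter the consensus requirement, the more extreme and bimodal the extrapolated risks become.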
Model choice also entails issues of sample selection and sample size: when comparing the estimation outcomes of the binary and fractional models (Table 3), the latter contains fewer covariates that are statistically relevant for the degree of digitization. The fractional model hardly shows any statistical significance for covariates that were not relevant in the binary model. In the fractional model, job experience, for example, shows no statistical significance. Likewise, tasks such as speaking in front of people or reading books are not significant in the fractional model. This general observation is not surprising from a statistical point of view, since the formerly strict binary outcome in a small sample has been replaced by a smooth continuous scale in a sample twice the original size. However, it becomes clear that some covariates, such as physical work, writing reports, performing calculations or firm characteristics, remain aligned with the distribution of the fractional model, whereas the distribution of other covariates has been polarized by the truncation of the binary model. Given that the fractional model performs better than our binary model in terms of out-of-sample correlations, lower shares of jobs at risk appear more plausible than higher ones.
| 2023-05-14T00:00:00 |
2023/05/14
|
https://link.springer.com/article/10.1007/s10663-023-09571-2
|
[
{
"date": "2023/03/01",
"position": 88,
"query": "automation job displacement"
}
] |
What is a Cobot? Collaborative Robots
|
What is a Cobot? Collaborative Robots
|
https://boxrobotics.ai
|
[] |
On the other hand, there are some disadvantages as well. Automation means job loss as machines will start taking over roles that were previously handled by ...
|
A cobot, or collaborative robot, is a type of robot designed to work safely in close cooperation with humans. With the rise of technological advancement in robotics, cobots have become a major focus in recent years. As more companies embrace automation to increase efficiency and productivity, cobots are seen as an essential tool for remaining competitive in the modern marketplace.
When it comes to safety, cobots are much better equipped than traditional industrial robots. While industrial robots have sensors that detect motion and physical contact, cobots use advanced programming to detect intent and anticipate human movements or interactions. This technology allows them to respond and adjust their own behavior to protect the humans working alongside them in the workspace. Although cobots may take longer to complete tasks than traditional industrial robots, they can be used reliably without endangering the worker or other people around them.
Additionally, cobots are able to take input from humans during the task-completion process, a capability known as collaborative sensing. This feature makes them ideal for tasks that require fine-tuning or judgment beyond what a computer alone can provide, such as product assembly or precision welding jobs that call for frequent adjustment or pattern-recognition skills. As a result, cobots have grown increasingly popular for applications where environmental sensing is necessary, and for lowering production costs by combining human labor with robotic automation.
Debate: Beyond safety and accuracy concerns, many industry specialists debate whether cobots can completely replace traditional industrial robots. Some argue that automation is essential for staying competitive but that too much of it costs jobs; others disagree entirely, arguing that cobots reduce errors and deliver cost savings across multiple industries by raising production levels while improving safety.
Regardless of which side of the debate is correct, it’s evident that cobots have taken a significant foothold in numerous industrial settings today through their advancements in collaborative technology and human-robot interaction capabilities.
With this in mind, this article will transition into exploring how cobots are used in various industrial settings throughout the world today.
| 2023-03-01T00:00:00 |
https://boxrobotics.ai/what-is-a-cobot/
|
[
{
"date": "2023/03/01",
"position": 98,
"query": "automation job displacement"
}
] |
|
Three HR strategies for managing AI job disruption | SAP
|
Three HR strategies for managing AI job disruption
|
https://www.sap.com
|
[
"David Jonker"
] |
Discover three crucial HR strategies to manage AI disruption effectively, ensuring a seamless integration of AI in the workplace without losing the human ...
|
News headlines about AI disruption would have us believe that the workplace will be overrun by AI-powered robots with chrome-plated appendages and superhero powers.
It’s enough to send hard-working employees back to bed in the morning. But the impact of AI on jobs and the workforce is more varied—and less certain—than the press coverage would have us believe.
AI’s effects on HR functions
AI-fueled automation is not a zero-sum game in which a single outcome—either an apocalypse of massive job losses or a utopia where all lost jobs are replaced—will dominate. Yes, more jobs probably will continue to be automated away. But at the same time, others will be created, and some may not change for a long time, or not much, as AI evolves. Meanwhile, some (or many) jobs will be augmented by machines that operate alongside humans to assist them with their work.
None of these scenarios will dominate, despite what their advocates argue. All are developing simultaneously while their effects on work will vary over time. We can see this happening as AI-powered sensors displace factory maintenance workers, chatbots free up customer support agents to handle more complex tasks, and companies create new roles for a changing workplace.
HR leaders will have to be prepared to deal with each scenario, perhaps to widely varying degrees, inside the same organization. They will have their hands full grappling with the impact on people and their jobs. “What’s happening is not about losing jobs or creating jobs but rather that work will change,” says Hannah Berkers, a senior researcher with the Professorship of Corporate Governance and Leadership at the Amsterdam University of Applied Sciences, who researches the role of robotization in work design.
AI is already causing more changes to work than any technology before it. Machines used to simply follow rules defined by humans. Now intelligent machines are emerging that increasingly can think on their own and act on what they learn in ways their human creators have not anticipated. They will surprise us by what they’ll be able to do.
Although humans will continue to be in charge, HR leaders will need to think differently about AI than the technologies of the past—not merely as a sophisticated tool but as a member of the workforce with skills and abilities that are as important to the company’s talent strategy as those of any flesh-and-blood employee.
That is, with one main difference. HR leaders have plenty of experience assessing the impact of a new (human) hire on a company, but preparing for a world of ever-evolving, intelligent machines is a whole new ball game—one that requires teamwork with line-of-business (LoB) leaders who know what work needs to be done and technology leaders who know what AI code can do.
The future of AI-influenced work is fluid
Although there’s no way to know exactly how AI will impact the workforce, these four key scenarios are developing simultaneously:
Scenario 1: Job erosion
Worldwide, more than a billion people could lose their jobs to AI automation over the next ten years, according to the World Economic Forum. This erosion of work—jobs that either disappear or require much less human participation, control, and discretion—threatens to affect about one-quarter of the workforce.
Scenario 2: Augmented work
Rather than make people’s jobs obsolete, AI will help many people work faster and more efficiently. After all, while intelligent machines can operate at record speed with unmatched precision and accuracy, they have yet to learn how to empathize with angry customers, collaborate with colleagues, or think creatively—human skills that are indispensable.
Scenario 3: Job creation
As more companies begin using AI, millions of employees will be needed to create the code and keep it running, says Greg Vert, a senior manager at Deloitte Consulting. But new categories of work will also emerge. Case in point: As AI connects autonomous vehicles to roads, Canada’s Information and Communications Technology Council says new occupations are emerging, such as “smart mobility managers” who know how to build intelligent transportation systems and “autonomous vehicle scientists” who make self-driving vehicles safe and ensure that they work correctly in locations they have never been.
Scenario 4: Business-as-usual
For late adopters, AI and automation will continue to be science fiction—human-like robots that populate the pages of fantasy novels, not corporate headquarters. In fact, AI may have little impact on jobs such as sales representatives that require complex and emotionally rich interactions with customers and partners.
No one should expect HR leaders to predict how the four scenarios will come into play at their companies. Technology and LoB leaders will continue to be the main source of ideas about how emerging technologies will be deployed in the workplace. Rather, HR must build a solid understanding of how AI will change the work that is done in their organizations under any of these scenarios and determine, as each scenario presents itself, how they will respond in a way that keeps work centered on humans rather than machines.
How HR can anticipate multiple futures of work
Of the four scenarios, three are familiar. HR leaders have been contending with technology-induced job erosion and job creation, along with business-as-usual, for decades. As AI makes inroads, these leaders will be able to respond with upskilling and reskilling the workforce in the face of disappearing or new roles, as they always have.
But for HR leaders to do the necessary work to plan for and support AI, they need to have agency to do it. All too often, AI discussions happen well out of earshot of HR leaders. Berkers observes that HR people often aren’t invited to talk about plans to deploy AI because they’ll ask questions about the potential effects on the workforce that are hard to answer. But that’s a mistake. “The important step is getting HR involved and at the table, actively participating in discussions on robotization and dealing with technology,” she says.
Creating a triumvirate of partners—HR, LoB, and technology leaders—gives companies the best shot at balancing human abilities with machine intelligence. Here are three strategies for doing it:
1. Embrace the modern centaur
Despite automation’s much-ballyhooed benefits, such as increased productivity and the end of workplace ennui, not all employees loathe routine work, nor do they all care to awaken their inner Picassos as AI automates mundane tasks.
“Some people are more drawn to routine work or making meticulous calculations,” says Berkers of Amsterdam University. “Not everyone wants to do creative or problem-solving work that might be necessary or will be more of an emphasis in the future.”
For this reason, part of HR’s job is to teach employees how to work in tandem with machines. That’s different from today’s more typical IT training, in which a product designer learns how to use new software to implement a more efficient workflow or analysts try their hands at in-depth data modeling.
Instead, HR leaders should consider embracing the concept of “centaurs”—a term coined by Garry Kasparov, the world chess champion who, after being defeated by IBM’s Deep Blue, suggested that combining the different cognitive capabilities of humans and machines can enhance not only the performance of humans but also the power of intelligent machines.
This symbiosis is already happening in the workplace. Technology futurist Daniel Burrus uses the example of a patient with a potentially cancerous spot on his skin. “If a doctor performs vision analytics of the spot using AI, he can achieve 99% to 100% accuracy—no human doctor is that accurate.” Yet AI can’t provide a course of treatment. Instead, a doctor’s years of experience, extensive expertise, common sense, and critical judgment can complement a machine’s algorithmic cognition and compensate for any errors—the “augmented thinking” that Burrus says is “the future of AI.”
2. Match abilities of humans with AI
As these centaur systems become more commonplace, it will fall to HR to match the strengths and weaknesses of human employees with the machines they work alongside. This begins with HR awakening employees to not only the qualities that make them distinctly human (think empathy and critical reasoning) but to recognizing the ways that these qualities can complement and draw greater value from AI.
Consider, for example, how AI-powered chatbots can gather critical information about a customer’s missing shipment, use sentiment analytics to identify when that customer becomes increasingly frustrated, and automatically switch the call to a live agent who has the empathy and authority to reach a resolution.
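As a rough illustration of the hand-off logic described above (the sentiment scorer, threshold and function names here are invented, not any particular vendor's API), the escalation decision can be as simple as a threshold check:

```python
# Toy sketch of chatbot-to-agent escalation; scores and thresholds are invented.
def route_message(message: str, sentiment_score: float, frustration_threshold: float = -0.4) -> str:
    """Decide which channel should handle the next reply."""
    if sentiment_score < frustration_threshold:
        return "live_agent"   # the customer sounds frustrated: hand off with full context
    return "chatbot"          # routine request: keep the automated flow

print(route_message("Where is my package?", sentiment_score=0.1))                 # -> chatbot
print(route_message("This is the third time I've asked!", sentiment_score=-0.7))  # -> live_agent
```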
Companies looking to integrate AI into the workplace need to display the same kind of empathy toward employees that they require employees to show to customers. One way to do this is by properly managing employee expectations of AI. Clearly communicating how AI technology will be used in an organization can help foster greater human-machine collaboration, as can rewarding fast adopters of AI systems with opportunities to work on new and innovative projects.
“Organizations need to create the capacity for development in addition to incentivizing and providing different learning pathways to help workers shift from the old to the new,” adds Vert.
For example, veteran factory workers may continue to bristle at the notion of AI-enabled sensors taking over the task of monitoring equipment and predicting maintenance issues. However, an incentives program that rewards factory workers for learning how to operate the sensor systems or hone basic coding skills to fine-tune them can drive greater adoption, boost workforce morale, and foster a more collaborative work environment.
3. Advocate for people in an AI-infused workplace
As AI becomes more integrated into everyday work, HR leaders should serve as advocates for employees. On one hand, HR must shift from treating AI as a bolt-on technology to viewing it as a member of the workforce with the power to redefine responsibilities, influence co-workers, and change the nature of work.
But like human beings, AI systems are fallible: AI chatbots have been known to spew abusive epithets, autonomous cars currently have a higher rate of accidents than human drivers, and gender and racial bias in AI systems has been well documented. By cultivating a culture of openness, inclusivity, and transparency, HR leaders can create spaces in which employees not only feel comfortable blowing the whistle on AI-induced blunders or questioning the output of intelligent machines but also have ways to make themselves heard.
Supporting employees also means ensuring that jobs remain human-centered. By working together, technology executives, HR professionals, and LoB leaders can redefine the roles that people have so that AI works to satisfy the needs of humans rather than humans being required to meet the demands of AI. Establishing a strong, healthy work culture and positively reinforcing problem-solving (a distinctly human skill) can create a nurturing environment where people come first. By ensuring that intelligent machines play to workers’ strengths, organizations can foster a culture that breeds synergy, not competition between human and machine.
Give employees a voice in AI deployment
There’s no telling exactly what shape the future of AI will take in the workplace or how it will change the work being done. Multiple scenarios await. But more important than any crystal ball is HR’s direct line to employees—a chance for HR leaders to gather firsthand feedback from employees on how they feel about AI, its place in the organization, and its impact on their roles and responsibilities.
“Employees are best able to spot opportunities for how things can be done differently in the future,” says Berkers. “Together, HR leaders and employees can investigate the most valuable aspects of a job and what parts should be handled by AI.” It’s precisely this collaborative intelligence—be it between HR and employees or CTOs, CHROs, and LoB leaders—that will lead to success in an AI-infused world.
| 2023-03-01T00:00:00 |
https://www.sap.com/research/three-hr-strategies-for-managing-ai-disruption
|
[
{
"date": "2023/03/01",
"position": 26,
"query": "AI job losses"
},
{
"date": "2023/04/14",
"position": 76,
"query": "AI impact jobs"
},
{
"date": "2023/04/14",
"position": 4,
"query": "AI job losses"
},
{
"date": "2023/04/14",
"position": 6,
"query": "robotics job displacement"
},
{
"date": "2023/05/01",
"position": 31,
"query": "AI job losses"
},
{
"date": "2023/07/01",
"position": 26,
"query": "AI job losses"
},
{
"date": "2023/09/01",
"position": 26,
"query": "AI job losses"
},
{
"date": "2023/11/01",
"position": 25,
"query": "AI job losses"
},
{
"date": "2023/12/01",
"position": 26,
"query": "AI job losses"
},
{
"date": "2024/01/01",
"position": 28,
"query": "AI job losses"
},
{
"date": "2024/02/01",
"position": 30,
"query": "AI job losses"
},
{
"date": "2024/03/01",
"position": 29,
"query": "AI job losses"
},
{
"date": "2024/04/01",
"position": 28,
"query": "AI job losses"
},
{
"date": "2024/05/01",
"position": 30,
"query": "AI job losses"
},
{
"date": "2024/06/01",
"position": 25,
"query": "AI job losses"
},
{
"date": "2024/07/01",
"position": 28,
"query": "AI job losses"
},
{
"date": "2024/08/01",
"position": 29,
"query": "AI job losses"
},
{
"date": "2025/01/01",
"position": 28,
"query": "AI job losses"
},
{
"date": "2024/10/02",
"position": 39,
"query": "AI job losses"
}
] |
|
Does increased use of AI have to mean job losses? | LSE Research
|
Does increased use of AI have to mean job losses?
|
https://www.lse.ac.uk
|
[] |
One major concern about the growing use of AI is its impact on jobs, particularly those deemed routine or manual. But what if AI can be used to upskill these ...
|
A packet of humble tomato seeds can cost as little as 49 pence, a price that obscures the true value of the market – the flower- and plant-growing industry is estimated to be worth £1.8 billion in the UK alone. Seeds are big business, but profit is linked to a company’s ability to ensure it is selling only those of the highest quality.
Seed sorting might seem like a task ripe for automation – after all, AI excels at pattern recognition. But introducing AI into a process long shaped by human judgement raises complex questions. As with all new systems, deciding how best to integrate machine learning into established processes, and what the longer-term consequences of decisions made as a result of today’s tech might be, are as yet unclear.
One company's decision to use AI to manage its seed sorting processes, however, has enabled Dr Tomislav Karačić, Assistant Professor of Information Systems in the Department of Management at LSE, to interrogate this issue. His study highlights the uses and limits of machine learning and identifies potential pitfalls and opportunities – both with the management of human labour and overall performance of the task – that companies will need to consider if they are to integrate this new technology into their daily businesses effectively.
"AI washing": too many companies are misrepresenting AI usage
Although the study of AI in business is not novel, there is little information systems and management research into the use of AI in agriculture – a striking omission when we consider that around 40 per cent of the world’s population have been estimated to work in agrifood systems.
“There wasn’t a single paper that I was able to find, which actually took this context, and that seemed strange to me because with pressing issues like climate change and population growth, the concern about not having enough food is important for the whole world,” says Dr Karačić.
One of the reasons for this may be the difficulty in accessing information. “I spent over a year and a half looking for a case organisation,” Dr Karačić recalls. Beyond the proprietorial nature of company data, Dr Karačić discovered that the messages companies project to the world are not always aligned with the behind-the-scenes reality. “It was surprisingly difficult to find a company that did not just claim to use AI, but which actually used it,” he says. “It’s terrifyingly common for organisations to present themselves as AI-driven when they're actually not.”
Reasons for this range from wanting to be considered more innovative than their competitors, to needing to be seen as standard as AI terminology became more common. “If everybody’s doing it, you ought to be doing it,” says Dr Karačić. “And the definition of AI is not clear anyway: there’s also a lot of grey space so you might as well use the term. No one wants to be the only one not doing it.”
Dr Karačić eventually identified a company that was not just making innovative use of AI to develop new plant varieties, but was also willing to give him access to study its implementation. “This company had committed to not using any gene editing in their development, but because of this, they struggled to cope with the speed of competitors that were using these techniques. They decided that investing in machine learning would be the way to address this and help them develop new varieties of plants that are, for example, resistant to diseases or more adaptable to climate change with the same speed of their competitors using gene editing,” he says.
Developing a new plant variety is only one part of the process, however. For a company to be competitive, it must ensure that farmers buying its seeds can be confident in their crop. Industry standard is that at least 90 per cent of seeds in a packet must germinate, meaning that companies selling seeds below this mark will quickly find themselves out of profit. As a result, any seed considered defective is sorted out of the assembly line as part of the packaging process.
Dr Karačić gives the example of tomato seeds to explain: “Imagine one tomato seed would be sold for one euro, and a kilo of tomato seeds has 250,000 seeds. A very small bag can be worth one million euros. If only 85 per cent of them are good, however, then it basically becomes worthless – it's considered a waste – whereas if 90 per cent grow, that is a quarter of a million per kilo. So if you invest several million euros into a machine that can get it right every time, that pays off relatively quickly.”
Seed sorting might sound simple from the outside, but it actually requires a high degree of sensory and experiential knowledge. “Although the workers have machines to help, they would have to build up knowledge through doing – so grabbing the seed in their hands, rubbing them, smelling them, looking at them, trying to figure out what's wrong with them if they thought they were bad,” says Dr Karačić.
“This means that it's very hard to articulate how they are actually doing seed sorting. It’s also not a foolproof process – sometimes it just doesn’t work.” And so seed sorting was a perfect case to study how AI changes are implemented.
The human cost of AI
At first glance, the story seemed to confirm a common fear: that AI would simply replace human workers. “Initially, the ambition was to automate much of the sorting process,” says Dr Karačić. “There was a clear push to see how far the technology could go.” But what unfolded was more complicated – a process that revealed the importance of thoughtful integration and close attention to how people and machines could support one another.
"At first they did manage to build these machines, and they worked perfectly. They had a camera which would image each seed, and the model could predict with extremely high accuracy what a good or bad seed was. Then these pneumatic tubes would sort them into waste or good bags, so the whole process became machine-driven,” says Dr Karačić.
“What this did initially to the seed sorters was to effectively turn them into operators – so they were only needed to press start and stop or clean the machines – but the sorting was done by machine."
Although on the surface it might appear that the “expertise” had been successfully transferred from a person to a machine, Dr Karačić explains that more was in play than immediately apparent. Although former seed sorters' responsibilities had shifted to the manual operation of these machines, their learning of what a good and bad seed looked like continued to develop.
“These machines use X-ray imagery to determine if a seed is viable or not, and so for the first time in their lives these workers, who had been sorting seeds for decades, could see the insides of these seeds. And they started noticing patterns.”
At first, Dr Karačić says, these patterns were just discussed amongst themselves – a detail considered unimportant. Until, that is, the machines stopped working.
“The developers were really not sure why the machines had become less reliable. But because the seed sorters had developed an understanding of the patterns in the physical seeds over their years of physical sorting, they had also noticed when different and new patterns began to emerge on the images. Through their expertise, the company was able to identify what was happening.”
As a result of their observations, the company understood the problem: as new varieties of seed were being developed, new defects were also being introduced into the seed’s lifecycle. Although the AI imagery could see these defects, because they had not existed at the time the model was developed, the machines were unable to identify which were good or bad.
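The failure mode the sorters spotted is, in machine-learning terms, a data-drift problem: the new varieties produced X-ray patterns the model had never seen. The company's actual monitoring setup is not described in the article, but a minimal, hypothetical way to flag such drift is to compare the classifier's confidence scores on recent seeds against a reference batch:

```python
# Hypothetical drift check, not the company's actual pipeline: compare recent
# classifier confidence scores against a reference batch with a two-sample KS test.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(42)

# Reference scores: confidences recorded when the model was first validated.
reference_scores = rng.beta(8, 2, size=5000)            # mostly confident predictions

# Recent scores: a new seed variety introduces unfamiliar X-ray patterns,
# so a chunk of predictions becomes much less confident.
recent_scores = np.concatenate([
    rng.beta(8, 2, size=4000),
    rng.beta(2, 2, size=1000),                          # the unfamiliar seeds
])

stat, p_value = ks_2samp(reference_scores, recent_scores)
if p_value < 0.01:
    print(f"possible drift (KS statistic {stat:.3f}): route samples to human sorters for review")
else:
    print("score distribution looks stable")
```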
| 2023-03-01T00:00:00 |
https://www.lse.ac.uk/research/research-for-the-world/ai-and-tech/ai-job-losses
|
[
{
"date": "2023/03/01",
"position": 30,
"query": "AI job losses"
},
{
"date": "2023/05/01",
"position": 32,
"query": "AI job losses"
},
{
"date": "2023/07/01",
"position": 28,
"query": "AI job losses"
},
{
"date": "2023/09/01",
"position": 35,
"query": "AI job losses"
},
{
"date": "2023/11/01",
"position": 29,
"query": "AI job losses"
},
{
"date": "2023/12/01",
"position": 36,
"query": "AI job losses"
},
{
"date": "2024/01/01",
"position": 30,
"query": "AI job losses"
},
{
"date": "2024/02/01",
"position": 32,
"query": "AI job losses"
},
{
"date": "2024/03/01",
"position": 31,
"query": "AI job losses"
},
{
"date": "2024/04/01",
"position": 30,
"query": "AI job losses"
},
{
"date": "2024/05/01",
"position": 32,
"query": "AI job losses"
},
{
"date": "2024/06/01",
"position": 29,
"query": "AI job losses"
},
{
"date": "2024/07/01",
"position": 29,
"query": "AI job losses"
},
{
"date": "2024/08/01",
"position": 30,
"query": "AI job losses"
},
{
"date": "2025/01/01",
"position": 30,
"query": "AI job losses"
},
{
"date": "2025/05/27",
"position": 9,
"query": "AI job losses"
}
] |
|
10 Jobs Paying Over $100K That Are Safe From the Rise of AI
|
10 Jobs Paying Over $100K That Are Safe From the Rise of AI
|
https://www.aol.com
|
[
"Aol Staff",
"John Schmoll",
"July",
"At Pm"
] |
“AI could wipe out half of all entry-level white-collar jobs, and spike unemployment to 10-20% in the next one to five years,” said Dario Amodei ...
|
Artificial Intelligence (AI) is a helpful tool to complete tasks, but its growth poses a threat to the jobs of many Americans. At least, that’s what the headlines continue to say. Industry experts are making similar statements, too.
“AI could wipe out half of all entry-level white-collar jobs, and spike unemployment to 10-20% in the next one to five years,” said Dario Amodei, CEO of Anthropic in an interview with Axios in May.
It remains to be seen whether the fear will materialize into reality. For Americans concerned with job security, diversifying yourself is key. Adaptability and maintaining an array of complementary skills should help Americans avoid job loss to AI. You can achieve this by earning certifications, identifying ways to gain firsthand experience and creating a portfolio to showcase your work.
Targeting the right industry will also help Americans stave off AI-induced job losses. Thankfully, some of those industries don’t require a college degree, and some pay over $100,000 annually, otherwise known as new-collar jobs. In a recent study, Resume Genius identified the top jobs paying over $100,000 that have relative shelter from the rise of AI.
All of the jobs are skills-driven. “New-collar jobs are jobs that don’t rely on physical labor (like many blue-collar jobs), but also don’t require a college degree (like the majority of white-collar jobs). These jobs are skills-driven, where learning happens on the job and adaptability matters just as much as credentials,” said Eva Chan, Career Expert at Resume Genius.
These jobs all have a median salary of over $100,000, no four-year degree requirement, minimal physical labor, remote or hybrid availability and less than a 50% risk of being replaced by AI.
Marketing Manager
Median annual salary : $159,660
Estimated job growth (2023 – 2033) : 8%
AI job takeover risk: 39%
Human Resources Manager
Median annual salary : $140,030
Estimated job growth (2023 – 2033) : 6%
AI job takeover risk: 24%
Sales Manager
Median annual salary : $138,060
Estimated job growth (2023 – 2033) : 6%
AI job takeover risk: 33%
Computer Network Architect
Median annual salary : $130,390
Estimated job growth (2023 – 2033) : 13%
AI job takeover risk: 39%
General and Operations Manager
Median annual salary : $129,330
Estimated job growth (2023 – 2033) : 6%
AI job takeover risk: 36%
Information Security Analyst
Median annual salary : $124,910
Estimated job growth (2023 – 2033) : 33%
AI job takeover risk: 49%
Sales Engineer
Median annual salary : $121,520
Estimated job growth (2023 – 2033) : 6%
AI job takeover risk: 38%
Health Services Manager
Median annual salary : $117,960
Estimated job growth (2023 – 2033) : 29%
AI job takeover risk: 26%
Art Director
Median annual salary : $111,040
Estimated job growth (2023 – 2033) : 5%
AI job takeover risk: 34%
Construction Manager
Median annual salary : $106,980
Estimated job growth (2023 – 2033) : 9%
AI job takeover risk: 13%
Unease is understandable when considering the possibility of job loss due to AI. Americans who are able to adapt and willing to focus on skill-driven responsibilities should be more secure and likely able to earn a higher income.
This article originally appeared on GOBankingRates.com: 10 Jobs Paying Over $100K That Are Safe From the Rise of AI
| 2023-03-01T00:00:00 |
https://www.aol.com/finance/10-jobs-paying-over-100k-161428283.html
|
[
{
"date": "2023/03/01",
"position": 44,
"query": "AI job losses"
},
{
"date": "2023/05/01",
"position": 53,
"query": "AI job losses"
},
{
"date": "2023/07/01",
"position": 49,
"query": "AI job losses"
},
{
"date": "2023/09/01",
"position": 20,
"query": "AI job losses"
},
{
"date": "2023/11/01",
"position": 48,
"query": "AI job losses"
},
{
"date": "2023/12/01",
"position": 22,
"query": "AI job losses"
},
{
"date": "2024/01/01",
"position": 43,
"query": "AI job losses"
},
{
"date": "2024/02/01",
"position": 48,
"query": "AI job losses"
},
{
"date": "2024/03/01",
"position": 47,
"query": "AI job losses"
},
{
"date": "2024/04/01",
"position": 54,
"query": "AI job losses"
},
{
"date": "2024/05/01",
"position": 47,
"query": "AI job losses"
},
{
"date": "2024/06/01",
"position": 47,
"query": "AI job losses"
},
{
"date": "2024/07/01",
"position": 45,
"query": "AI job losses"
},
{
"date": "2024/08/01",
"position": 43,
"query": "AI job losses"
},
{
"date": "2025/01/01",
"position": 43,
"query": "AI job losses"
}
] |
|
Job losses from AI are here now, and Ford's CEO thinks “literally half ...
|
Job losses from AI are here now, and Ford’s CEO thinks “literally half” of white-collar jobs are at risk
|
https://sherwood.news
|
[
"Hyunsoo Rim"
] |
Executives have historically downplayed job loss fears, emphasizing AI's role in augmenting human work rather than replacing it.
|
What used to be cautious optimism in tech is now turning into blunt warnings from CEOs across industries: AI is coming for white-collar jobs, and the cuts could be deep, according to a new Wall Street Journal report.
At an event last week, Ford CEO Jim Farley said AI will replace “literally half” of all white-collar jobs in the US. Meanwhile, Marianne Lake, CEO of Consumer & Community Banking at JPMorgan, recently projected a 10% cut in operations headcount over the next five years due to AI tools.
Executives have historically downplayed job loss fears, emphasizing AI’s role in augmenting human work rather than replacing it. But now, many admit it could dramatically shrink workforces — with some companies consolidating roles or expecting employees to do more without increasing headcount.
Fiverr CEO Micha Kaufman recently shared a “wake-up call” in an X post: “It does not matter if you are a programmer, designer, product manager, data scientist, lawyer, customer support rep, salesperson, or a finance person — AI is coming for you.”
Shopify CEO Tobi Lütke has paused hiring unless managers prove AI cannot perform the job.
Amazon CEO Andy Jassy anticipates a smaller corporate workforce due to AI.
Anthropic CEO Dario Amodei has warned that AI could wipe out half of all entry-level white-collar jobs within the next five years.
ThredUp CEO James Reinhart predicts AI will “destroy more jobs than the average person thinks.”
Moderna merged its tech and HR teams in May, with CEO Stéphane Bancel saying earlier that the pharma giant can maximize its output “with a few thousand people” thanks to AI tools.
Klarna’s push into AI has seen it slash its workforce by ~40%.
Still, some tech leaders argue that while job displacement is real, fears may be exaggerated — and that AI-driven efficiency gains could also create demand for new skill sets.
Related reading: Big Tech isn’t hiring like it used to, unless you say the magic words
| 2023-03-01T00:00:00 |
https://sherwood.news/business/job-losses-ai-here-now-ford-ceo-jim-farley-thinks-half-white-collar-jobs-risk/
|
[
{
"date": "2023/03/01",
"position": 56,
"query": "AI job losses"
}
] |
|
Layoffs.fyi - Tech Layoff Tracker and DOGE Layoff Tracker
|
Tech Layoff Tracker and DOGE Layoff Tracker
|
https://layoffs.fyi
|
[] |
I'm a startup founder that's tracking tech layoffs and Trump's US government job cuts. Let me know if you see anything missing!
|
In 2025: 67,749 government employees laid off by DOGE ∙ 178,296 total federal departures
I’m a startup founder that’s tracking tech layoffs and Trump’s U.S. government job cuts. Let me know if you see anything missing!
List of Employees Laid Off: companies are listed in reverse chronological order. Methodology: Total Federal Departures include (1) DOGE-related layoffs, (2) non-DOGE firings, and (3) the deferred resignation program, which covers employees who accepted a paid resignation offer. All data is sourced from public media reports, with individual firings and government contractor layoffs excluded. Lists of Employees Laid Off: links to crowdsourced layoff lists created by other people. Please only use for recruiting purposes!
| 2023-03-01T00:00:00 |
https://layoffs.fyi/
|
[
{
"date": "2023/03/01",
"position": 59,
"query": "AI job losses"
},
{
"date": "2023/03/01",
"position": 5,
"query": "AI layoffs"
},
{
"date": "2023/04/01",
"position": 5,
"query": "AI layoffs"
},
{
"date": "2023/05/01",
"position": 71,
"query": "AI job losses"
},
{
"date": "2023/05/01",
"position": 5,
"query": "AI layoffs"
},
{
"date": "2023/06/01",
"position": 4,
"query": "AI layoffs"
},
{
"date": "2023/07/01",
"position": 62,
"query": "AI job losses"
},
{
"date": "2023/07/01",
"position": 4,
"query": "AI layoffs"
},
{
"date": "2023/09/01",
"position": 44,
"query": "AI job losses"
},
{
"date": "2023/09/01",
"position": 5,
"query": "AI layoffs"
},
{
"date": "2023/11/01",
"position": 64,
"query": "AI job losses"
},
{
"date": "2023/11/01",
"position": 4,
"query": "AI layoffs"
},
{
"date": "2023/12/01",
"position": 45,
"query": "AI job losses"
},
{
"date": "2024/01/01",
"position": 58,
"query": "AI job losses"
},
{
"date": "2024/01/01",
"position": 4,
"query": "AI layoffs"
},
{
"date": "2024/02/01",
"position": 62,
"query": "AI job losses"
},
{
"date": "2024/03/01",
"position": 63,
"query": "AI job losses"
},
{
"date": "2024/04/01",
"position": 71,
"query": "AI job losses"
},
{
"date": "2024/05/01",
"position": 61,
"query": "AI job losses"
},
{
"date": "2024/05/01",
"position": 3,
"query": "AI layoffs"
},
{
"date": "2024/06/01",
"position": 66,
"query": "AI job losses"
},
{
"date": "2024/06/01",
"position": 3,
"query": "AI layoffs"
},
{
"date": "2024/07/01",
"position": 62,
"query": "AI job losses"
},
{
"date": "2024/07/01",
"position": 3,
"query": "AI layoffs"
},
{
"date": "2024/08/01",
"position": 58,
"query": "AI job losses"
},
{
"date": "2024/08/01",
"position": 4,
"query": "AI layoffs"
},
{
"date": "2024/12/01",
"position": 3,
"query": "AI layoffs"
},
{
"date": "2025/01/01",
"position": 58,
"query": "AI job losses"
},
{
"date": "2025/02/25",
"position": 4,
"query": "AI layoffs"
}
] |
|
Fears about AI-driven job losses are overdone | Capital Economics
|
Fears about AI-driven job losses are overdone
|
https://www.capitaleconomics.com
|
[] |
Several companies have recently announced AI-related job cuts. For example, Microsoft announced 6,000 jobs cuts, equivalent to 3% of its ...
|
| 2023-03-01T00:00:00 |
https://www.capitaleconomics.com/publications/long-run-update/fears-about-ai-driven-job-losses-are-overdone
|
[
{
"date": "2023/03/01",
"position": 64,
"query": "AI job losses"
}
] |
|
'Lean into it': Amazon CEO Andy Jassy thinks enterprises need to ...
|
‘Lean into it’: Amazon CEO Andy Jassy thinks enterprises need to embrace AI to avoid being left behind – even if that means fewer jobs in the future
|
https://www.itpro.com
|
[
"Nicole Kobie"
] |
Similarly, Salesforce pinned jobs cuts on an AI focused reorganization, Intuit slashed 1,800 jobs to power AI investment, and CrowdStrike cut ...
|
Amazon won't need as many human workers in the future, according to CEO Andy Jassy, but that’ll be a good thing for those that remain.
In a recent interview, Jassy doubled down on the AI plans outlined in a memo to staff last month. At the time, Jassy said the use of the technology will enable the company to unlock “efficiency gains” - a common talking point for companies ramping up AI adoption.
Ultimately, the strategy will lead to "fewer people doing some of the jobs that are being done today”, Jassy said.
Speaking to CNBC, the Amazon chief exec defended the move when asked about the impact of the technology on the workforce, suggesting that it will deliver benefits for both the company and staff.
"This AI technology is going to be the most transformative technology in our lifetime," Jassy said, adding: "it's going to change the way we work."
He claimed AI agents would be able to do coding, research, analytics, security, website localization and "they'll do spreadsheets over time."
"That means it's going to change a lot of these particular job functions,” he added.
Given the scale of the predicted change, Jassy said businesses and employees alike now face a decision - they can either “lean into it and embrace it” or they can “wish it away and have it happen to you”.
Amazon is leaning into AI
Jassy argued that Amazon is better off doing the former, suggesting it would make for better work for the staff who remain employed.
"They won't have to do as much work," he said, "and every single person gets to start every single task at a more advanced starting spot. That's going to make all of our jobs more interesting."
"Like with every technical transformation, there will be fewer people doing some of the jobs that the technology actually starts to automate," Jassy added.
"But there’s going to be other jobs — we're going to hire more people in AI, more people in robotics."
Amazon has reportedly cut more than 20,000 jobs since 2022, including recent cuts to its books and driverless divisions.
While these cuts weren’t linked to AI and largely occurred during the infamous “year of efficiency” in 2023, concerns are growing over the impact of the technology on the job market.
Recent analysis found that entry-level roles are being impacted by the influx of AI tools across a range of industries, for example.
This has prompted calls from leading industry figures to initiate a frank discussion on the matter, with Anthropic CEO Dario Amodei suggesting workers are “burying their heads in the sand” on the topic.
There have already been high-profile examples of AI-related job cuts, most notably at Klarna. The buy now, pay later giant cut staff as part of a sharpened focus on AI adoption, which proved highly beneficial for the company.
Similarly, Salesforce pinned jobs cuts on an AI focused reorganization , Intuit slashed 1,800 jobs to power AI investment , and CrowdStrike cut 500 jobs earlier this year as part of a strategic shift to AI.
Some industry leaders appear unfazed despite growing concerns, however. Google CEO Sundar Pichai offered a relaxed outlook on the topic in a recent interview, for example.
According to Pichai, while the technology may impact jobs, it will ultimately lead to better quality of work - at least for those still employed.
| 2025-07-03T00:00:00 |
2025/07/03
|
https://www.itpro.com/business/business-strategy/amazon-ceo-andy-jassy-ai-job-cuts
|
[
{
"date": "2023/03/01",
"position": 72,
"query": "AI job losses"
},
{
"date": "2023/05/01",
"position": 79,
"query": "AI job losses"
},
{
"date": "2023/07/01",
"position": 68,
"query": "AI job losses"
},
{
"date": "2023/09/01",
"position": 40,
"query": "AI job losses"
},
{
"date": "2023/11/01",
"position": 65,
"query": "AI job losses"
},
{
"date": "2023/12/01",
"position": 38,
"query": "AI job losses"
},
{
"date": "2024/01/01",
"position": 65,
"query": "AI job losses"
},
{
"date": "2024/02/01",
"position": 65,
"query": "AI job losses"
},
{
"date": "2024/03/01",
"position": 67,
"query": "AI job losses"
},
{
"date": "2024/04/01",
"position": 77,
"query": "AI job losses"
},
{
"date": "2024/05/01",
"position": 66,
"query": "AI job losses"
},
{
"date": "2024/06/01",
"position": 77,
"query": "AI job losses"
},
{
"date": "2024/07/01",
"position": 69,
"query": "AI job losses"
},
{
"date": "2024/08/01",
"position": 64,
"query": "AI job losses"
},
{
"date": "2025/01/01",
"position": 64,
"query": "AI job losses"
}
] |
Regulate AI — for the Sake of Our Jobs, Brains, and Planet
|
Regulate AI — for the Sake of Our Jobs, Brains, and Planet
|
https://otherwords.org
|
[
"Peter Certo",
"Sonali Kolhatkar"
] |
Setting aside the prospect of massive job losses from AI and the troubling realization that AI models lie and proliferate misinformation, do ...
|
Everywhere we look, AI is treated like an inevitability.
AI companies like OpenAI, the maker of ChatGPT, are expanding rapidly, and many Americans rely on AI assistants such as Alexa and Siri in their day-to-day lives. For some college students, not relying on AI can feel like a disadvantage.
Critics might be soothed by the idea that AI can be put to good use — such as in data modeling to better predict our changing climate, an idea that’s generated a lot of enthusiasm. But using AI to tackle climate change is like bombing a country in the name of peace.
AI is a driver of climate change, not a solution. According to the United Nations Environment Program, rapidly proliferating AI data centers “use massive amounts of electricity, spurring the emission of planet-warming greenhouse gases,” as well as consuming enormous quantities of water and minerals.
ChatGPT, the popular AI software, is hardly worth the climate impact. One analyst, Sophie McLean, writing for Earth.org, explains, “For a simple conversation of 20-50 questions [to Chat GPT], the water consumed is equivalent to a 500ml bottle.” Spread across “billions of users,” that’s a “substantial” footprint.
AI is spreading so fast, we’ve hardly had an opportunity to consider its impact. Even the world’s most popular search engine, Google, started using AI by default. Each time you do a simple search, Google uses generative AI to offer an “AI overview” before listing its results.
Moreover, AI services like ChatGPT gobble up and regurgitate the work of humans. If plagiarism weren’t bad enough, they’re notorious for generating misinformation in fields such as medicine and computer programming.
Even for simple queries like searching for citations, AI programs often make up references that don’t exist. Experts are worried the technology’s propensity to “hallucinate” is so severe, it will never achieve high levels of accuracy.
Setting aside the prospect of massive job losses from AI and the troubling realization that AI models lie and proliferate misinformation, do we really want to push our climate to the brink because of a technology that offers convenience? AI is a technology foisted upon a society that doesn’t need it, and that faces very real harm because of it.
What’s needed at minimum is strict regulation, not only to protect information and jobs, but also people and the planet.
But as soon as he took office, President Donald Trump began dismantling the few, very modest government checks on AI. And the GOP’s so-called “Big Beautiful Bill” included, alongside massive tax breaks for the wealthiest Americans, a ban on states being able to regulate AI. (That provision was scrapped, but it may well return in another form.)
We need to demand better. Technology analysts Paul Scharre and Vivek Chilukuri argue for “a principles-based approach to regulation, instead of fixed technical standards that could be outdated before the ink dries.” They also suggest an independent regulatory agency dedicated to this powerful technology.
In the meantime, we as individuals should avoid using programs like ChatGPT. Not only does it diminish our own capabilities — researchers at MIT recently found that an over-reliance on programs such as ChatGPT significantly lowers brain activity — but it also actively fuels climate change.
Think of AI avoidance as mental exercise in the same way you might choose to walk instead of drive for physical well being.
Consider turning off Google’s AI Overviews. The tech company doesn’t make it easy for most people to figure out how to do it — and of course, it offers an AI overview that may or may not be accurate when you search for how to turn it off. After some digging, I found a human-generated answer that actually works.
If we want a safer world, protected from the dumbing down and waste associated with AI, we have to begin programming ourselves and our world to make choices that center human wellbeing.
| 2025-07-02T00:00:00 |
2025/07/02
|
https://otherwords.org/regulate-ai-for-the-sake-of-our-jobs-brains-and-planet/
|
[
{
"date": "2023/03/01",
"position": 75,
"query": "AI job losses"
},
{
"date": "2023/05/01",
"position": 83,
"query": "AI job losses"
},
{
"date": "2023/07/01",
"position": 75,
"query": "AI job losses"
},
{
"date": "2023/09/01",
"position": 48,
"query": "AI job losses"
},
{
"date": "2023/11/01",
"position": 79,
"query": "AI job losses"
},
{
"date": "2023/12/01",
"position": 49,
"query": "AI job losses"
},
{
"date": "2024/01/01",
"position": 79,
"query": "AI job losses"
},
{
"date": "2024/02/01",
"position": 72,
"query": "AI job losses"
},
{
"date": "2024/03/01",
"position": 81,
"query": "AI job losses"
},
{
"date": "2024/04/01",
"position": 84,
"query": "AI job losses"
},
{
"date": "2024/05/01",
"position": 81,
"query": "AI job losses"
},
{
"date": "2024/06/01",
"position": 78,
"query": "AI job losses"
},
{
"date": "2024/07/01",
"position": 81,
"query": "AI job losses"
},
{
"date": "2024/08/01",
"position": 73,
"query": "AI job losses"
},
{
"date": "2025/01/01",
"position": 70,
"query": "AI job losses"
}
] |
Microsoft lays off 9,000 in AI drive, bringing total job cuts to 15,000 ...
|
Microsoft lays off 9,000 in AI drive, bringing total job cuts to 15,000 this year
|
https://www.aol.com
|
[
"Aol Staff",
"Jim Edwards",
"Fortune Intelligence",
"July",
"At Pm"
] |
While Microsoft has not directly attributed the layoffs to AI replacing human workers, the timing and focus of the cuts suggest a shift toward a ...
|
Microsoft announced today it will cut approximately 9,000 jobs, representing just under 4% of its global workforce, in its largest round of layoffs since 2023. The reductions, which span multiple divisions, geographies, and levels of seniority, come as the company continues to report robust financial results but seeks to streamline operations and adapt to rapid shifts in the technology landscape.
Why Microsoft is making the cuts
Despite posting an 18% year-over-year increase in net income last quarter—reaching $25.8 billion—Microsoft is moving ahead with significant headcount reductions. The company cited a need to reduce organizational layers with fewer managers and streamline its products, procedures and roles.
In its official statement, Microsoft said: “We continue to implement organizational changes necessary to best position the company and teams for success in a dynamic marketplace.”
The layoffs are part of a broader restructuring effort that has now seen over 15,000 jobs eliminated this year, including 6,000 positions in May.
This latest round is expected to impact sales, customer-facing roles, and the Xbox gaming division. Xbox head Phil Spencer told staff the company would “end or decrease work in certain areas of the business and follow Microsoft’s lead in removing layers of management to increase agility and effectiveness.”
Broader implications for the tech sector
Microsoft’s move reflects a wider trend among major technology companies, many of which are undergoing similar workforce reductions as they double down on artificial intelligence. The company has invested billions in AI infrastructure, and CEO Satya Nadella recently noted that up to 30% of Microsoft’s code is now written by AI tools. While Microsoft has not directly attributed the layoffs to AI replacing human workers, the timing and focus of the cuts suggest a shift toward a leaner, more automated organization.
The layoffs underline that the job market in tech is tightening even as companies like Microsoft continue to deliver strong earnings.
For this story, Fortune used generative AI to help with an initial draft. An editor verified the accuracy of the information before publishing. Given the nature of AI tools, mistakes may occur.
This story was originally featured on Fortune.com
| 2023-03-01T00:00:00 |
https://www.aol.com/finance/microsoft-lays-off-9-000-170029953.html
|
[
{
"date": "2023/03/01",
"position": 77,
"query": "AI job losses"
},
{
"date": "2023/05/01",
"position": 90,
"query": "AI job losses"
},
{
"date": "2023/07/01",
"position": 73,
"query": "AI job losses"
},
{
"date": "2023/09/01",
"position": 37,
"query": "AI job losses"
},
{
"date": "2023/11/01",
"position": 73,
"query": "AI job losses"
},
{
"date": "2023/12/01",
"position": 42,
"query": "AI job losses"
},
{
"date": "2024/01/01",
"position": 74,
"query": "AI job losses"
},
{
"date": "2024/02/01",
"position": 74,
"query": "AI job losses"
},
{
"date": "2024/03/01",
"position": 75,
"query": "AI job losses"
},
{
"date": "2024/04/01",
"position": 89,
"query": "AI job losses"
},
{
"date": "2024/05/01",
"position": 75,
"query": "AI job losses"
},
{
"date": "2024/06/01",
"position": 80,
"query": "AI job losses"
},
{
"date": "2024/07/01",
"position": 76,
"query": "AI job losses"
},
{
"date": "2024/08/01",
"position": 75,
"query": "AI job losses"
},
{
"date": "2025/01/01",
"position": 72,
"query": "AI job losses"
}
] |
|
Corporate America's New Sport: AI Job Loss Predictions - OpenTools
|
Corporate America's New Sport: AI Job Loss Predictions
|
https://opentools.ai
|
[] |
As noted in a TechCrunch article, predictions indicate that AI advancements could lead to the loss of up to 50% of white-collar jobs in these ...
|
The Ethical Challenge of AI Bias in Employment
As the deployment of artificial intelligence continues to advance, one significant concern is the ethical challenge of AI bias, particularly in the realm of employment. The integration of AI into hiring processes brings the risk of perpetuating existing biases or even introducing new forms of discrimination. This is largely because AI systems are often trained on historical data, which can reflect and amplify human biases. For instance, biased data can lead to unfair treatment of candidates based on race, gender, or age, potentially resulting in a homogenous workforce that lacks diversity. Addressing these biases is crucial to ensuring that AI in employment contributes to equitable opportunities and diverse workplaces. The AI Bias Problem illustrates these concerns vividly, emphasizing the importance of fairness in AI development. Ensuring that AI does not exacerbate inequalities requires a conscientious approach to algorithm design, with a focus on transparency and accountability. Initiatives for developing guidelines and testing AI systems for potential biases are necessary steps that industry leaders are beginning to embrace.
Moreover, the ethical implications of AI bias extend beyond just the hiring phase. They influence other employment-related decisions, such as performance evaluations and promotions, which can significantly impact an individual's career trajectory. As organizations increasingly rely on AI to streamline operations, the potential for bias in these areas demands equal attention. This underscores the importance of ongoing scrutiny and regulation, as detailed in The Promise and Peril of Artificial Intelligence, which discusses how regulation can play a vital role in mitigating such challenges.
Considering these factors, a collaborative effort between technologists and policymakers is required to create frameworks that not only address AI bias but also promote its ethical use in employment. By fostering an environment that prioritizes the responsible development of AI, we can harness its potential benefits while safeguarding against its risks, ensuring that technological advancement does not come at the expense of fundamental human rights.
| 2023-03-01T00:00:00 |
https://opentools.ai/news/corporate-americas-new-sport-ai-job-loss-predictions
|
[
{
"date": "2023/03/01",
"position": 79,
"query": "AI job losses"
},
{
"date": "2023/05/01",
"position": 85,
"query": "AI job losses"
},
{
"date": "2023/07/01",
"position": 79,
"query": "AI job losses"
},
{
"date": "2023/11/01",
"position": 77,
"query": "AI job losses"
},
{
"date": "2024/01/01",
"position": 76,
"query": "AI job losses"
},
{
"date": "2024/02/01",
"position": 78,
"query": "AI job losses"
},
{
"date": "2024/03/01",
"position": 79,
"query": "AI job losses"
},
{
"date": "2024/04/01",
"position": 86,
"query": "AI job losses"
},
{
"date": "2024/05/01",
"position": 77,
"query": "AI job losses"
},
{
"date": "2024/06/01",
"position": 82,
"query": "AI job losses"
},
{
"date": "2024/07/01",
"position": 78,
"query": "AI job losses"
},
{
"date": "2024/08/01",
"position": 77,
"query": "AI job losses"
},
{
"date": "2025/01/01",
"position": 75,
"query": "AI job losses"
}
] |
|
AI is leading to job losses, but not in the way we expected
|
AI is leading to job losses, but not in the way we expected
|
https://www.irishtimes.com
|
[] |
To that end, many of the jobs are going to facilitate spending on AI in other parts of the organisation. It appears people are losing jobs not ...
|
The Microsoft logo is seen during the Microsoft Build: AI Day event in Jakarta, Indonesia, 30 April 2024. Microsoft is expected to invest 1.7 billion US dollars to build cloud and artificial intelligence (AI) infrastructure in Indonesia within the next four years.
Since artificial intelligence (AI) began its turbocharged rise with the advent of ChatGPT, all the talk has been about how the technology would decimate the workforce, particularly for those in lower-skilled and junior roles.
While this has happened in some sectors, the impact has yet to be felt across the board.
However, it appears AI is starting to eat up tech jobs, just perhaps not in the way originally anticipated.
Microsoft on Wednesday began its latest round of job cuts, with about 9,000 roles set to go across locations and divisions.
Given the huge bet on AI the company has made, it is seen to be figuring out what it can save elsewhere, especially as Wall Street is notoriously twitchy about firms that can’t keep costs under control.
To that end, many of the jobs are being cut to facilitate spending on AI in other parts of the organisation. It appears people are losing jobs not because they are being replaced by AI, but because AI spending has created cost pressures overall.
It’s the latest sign of how AI is sucking up resources among Big Tech. Elon Musk’s xAI is reportedly burning through $1 billion (€850 million) a month, while Microsoft and Google, among others, are eyeing nuclear power as a way to supply electricity to their data centres given how much energy, never mind water, AI servers suck from the national supply.
It’s perhaps no surprise then, that some investors are beginning to question how long such spending can last. Jim Chanos, a renowned short seller and veteran of the internet bubble of the late 90s, warned this week that the AI sector may be on course for a pull back, and that could happen quite quickly.
“There is an ecosystem around the AI boom that is considerable as there was for [tech, media and telecoms] back in ‘99 and 2000,” Chanos told Bloomberg’s Odd Lots podcast. “But it is a riskier revenue stream because if people pull back, they can pull back capex very easily. Projects can get put on hold for six months or nine months, and that immediately shows up in disappointing revenues and earnings forecast if it happens.”
No wonder Microsoft is working so hard to keep costs in check.
| 2025-07-03T00:00:00 |
2025/07/03
|
https://www.irishtimes.com/business/2025/07/03/ai-is-taking-jobs-just-not-in-the-way-we-expected/
|
[
{
"date": "2023/03/01",
"position": 80,
"query": "AI job losses"
},
{
"date": "2023/05/01",
"position": 86,
"query": "AI job losses"
},
{
"date": "2023/07/01",
"position": 80,
"query": "AI job losses"
},
{
"date": "2023/09/01",
"position": 76,
"query": "AI job losses"
},
{
"date": "2023/11/01",
"position": 78,
"query": "AI job losses"
},
{
"date": "2023/12/01",
"position": 72,
"query": "AI job losses"
},
{
"date": "2024/01/01",
"position": 77,
"query": "AI job losses"
},
{
"date": "2024/02/01",
"position": 80,
"query": "AI job losses"
},
{
"date": "2024/03/01",
"position": 80,
"query": "AI job losses"
},
{
"date": "2024/04/01",
"position": 87,
"query": "AI job losses"
},
{
"date": "2024/05/01",
"position": 79,
"query": "AI job losses"
},
{
"date": "2024/06/01",
"position": 85,
"query": "AI job losses"
},
{
"date": "2024/07/01",
"position": 79,
"query": "AI job losses"
},
{
"date": "2024/08/01",
"position": 78,
"query": "AI job losses"
},
{
"date": "2025/01/01",
"position": 76,
"query": "AI job losses"
}
] |
Minister says Irish jobs will be lost to AI but she can't say how many ...
|
Minister says Irish jobs will be lost to AI but she can't say how many (yet)
|
https://www.thejournal.ie
|
[
"Valerie Flynn",
"Send Tip Or Correction",
"Niall O'Connor"
] |
IRISH JOBS WILL be lost as a result of artificial intelligence (AI), Ireland's first ever minister for AI has said. ... job losses, greenhouse gas ...
|
IRISH JOBS WILL be lost as a result of artificial intelligence (AI), Ireland’s first ever minister for AI has said.
Niamh Smyth, a junior minister at the Department of Enterprise, said it would be foolish not to admit that jobs will be displaced. However, she added that, in her opinion, Ireland cannot afford not to “embrace AI”.
The government is currently establishing a regulator to enforce the EU's new AI rules, while also trying to ensure Irish businesses reap the technology's benefits and public services use it to become more efficient.
Smyth said the government will also establish an “AI Observatory” which will provide data on where jobs will be lost, how many are at risk, and how this can be mitigated.
The minister was grilled on the extent to which the government will regulate companies’ use of AI at the new Oireachtas committee on AI this morning.
Sinéad Gibney of the Social Democrats said she was concerned that the government did not seem to be striking the right balance between supporting business to embrace AI and addressing the risks the technology poses, including job losses, greenhouse gas emissions and misinformation.
Outlining her priorities to the committee, Smyth had said she particularly wanted to drive AI adoption and AI literacy among small and medium businesses.
Smyth’s department is establishing a new independent AI office to provide legal guidance to businesses on how to comply with the EU’s AI Act, with Smyth telling the committee this would provide much-needed certainty to businesses, and would be well-resourced when it’s up and running in just over a year’s time.
Gibney noted that the government’s own AI strategy, updated last October, states that the EU AI Act will place “few, or no, obligations on most businesses who use AI”.
She said the minister’s own stated commitments were overwhelmingly focused on supporting businesses, rather than on managing the risks.
Smyth said the EU Act took a “risk-based approach” to protecting citizens.
She said it was too early to say whether any domestic legislation to regulate the use of AI would be introduced.
Meta AI
Paul Murphy of People Before Profit questioned Smyth on recent reporting by The Journal Investigates on the conditions faced by content moderators training Meta AI at Covalen, an Irish outsourcing firm.
Workers were asked to think like paedophiles and to spend entire work days creating suicide and self-harm related ‘prompts’ to regulate the responses given by Meta’s AI products.
Smyth said she was not familiar with the article but said the EU AI Act is intended to protect citizens, taking a “risk-based approach”.
“The example you’re giving there is a very cruel way of using workers, and it certainly wouldn’t be acceptable here in Ireland,” Smyth said.
“This is happening in Ireland,” Murphy said. “It’s an Irish firm, Covalen, outsourced from Meta.”
“I’ll have to come back to you on it,” Smyth responded. “That is certainly not acceptable in terms of that treatment of workers.”
Data centres
Murphy also raised concerns that the minister’s opening statement to the committee made no mention of the climate impact of AI, given its requirement for energy-hungry data centres.
Smyth said she hoped Ireland could both embrace AI in an ethical way and meet its climate targets. She said that “the energy question…has to be answered by the energy department”.
“There is no getting away from the fact that [data centre development] has put Ireland front and centre in being attractive to a lot of the companies who have come here,” Smyth said.
Smyth said that, given her own background in the arts, as a graduate of the National College of Art and Design, her biggest concern about AI was its potential impact on the creative industries.
“We don’t want their work stolen, used or scraped. We have a balancing act to achieve here that we do allow the right environment and ecosystem for the development of innovation and ensuring that we protect our citizens and the creative industries alongside that,” she said.
She indicated Ireland’s EU presidency next year provides an opportunity to showcase Ireland as a technology leader by hosting an AI summit or conference similar to an event hosted by French President Emmanuel Macron earlier this year.
| 2025-07-08T00:00:00 |
2025/07/08
|
https://www.thejournal.ie/oireachtas-committee-ai-6756528-Jul2025/
|
[
{
"date": "2023/03/01",
"position": 82,
"query": "AI job losses"
}
] |
Weekly AI Pulse #97, Apple May Swap Siri's Brain, Ford CEO Warns ...
|
Weekly AI Pulse #97, Apple May Swap Siri’s Brain, Ford CEO Warns of Job Losses, and Gen Z Rethinks “Safe Careers”
|
https://www.learnwithmeai.com
|
[
"Gencay I."
] |
At the Aspen Ideas Festival, Ford CEO Jim Farley issued a stark warning: AI could wipe out 50% of white-collar jobs in the US. Farley emphasized ...
|
Significant shifts are brewing this week. Apple might ditch its own AI and team up with OpenAI or Anthropic to finally give Siri a brain upgrade. Meanwhile, Ford’s CEO warns that AI could eliminate half of white-collar jobs, and Gen Z is already adapting.
We’re also making moves: our website now lets you save, share, and track your favorite AI news. Oh, and you can now generate startup ideas in seconds, no tech skills needed. Let’s get into it.
LearnAIWithME - How to track AI news for free?
Now, our website has save, share, and read-later features. You can also share the news you like, and all of this information will be stored in your dashboard.
Additionally, we have created our X account, where we will share more frequent updates like this one.
Startup-Idea-Generator
We have created a startup idea generator.
Here you can easily create your startup idea, and afterwards you can enhance its features.
All you have to do is answer a couple of questions.
Listen to this week’s AI News
| 2023-03-01T00:00:00 |
https://www.learnwithmeai.com/p/weekly-ai-pulse-97-apple-may-swap
|
[
{
"date": "2023/03/01",
"position": 83,
"query": "AI job losses"
},
{
"date": "2023/05/01",
"position": 95,
"query": "AI job losses"
},
{
"date": "2023/07/01",
"position": 84,
"query": "AI job losses"
},
{
"date": "2023/09/01",
"position": 56,
"query": "AI job losses"
},
{
"date": "2023/11/01",
"position": 83,
"query": "AI job losses"
},
{
"date": "2023/12/01",
"position": 62,
"query": "AI job losses"
},
{
"date": "2024/01/01",
"position": 83,
"query": "AI job losses"
},
{
"date": "2024/02/01",
"position": 85,
"query": "AI job losses"
},
{
"date": "2024/03/01",
"position": 86,
"query": "AI job losses"
},
{
"date": "2024/04/01",
"position": 94,
"query": "AI job losses"
},
{
"date": "2024/05/01",
"position": 84,
"query": "AI job losses"
},
{
"date": "2024/06/01",
"position": 92,
"query": "AI job losses"
},
{
"date": "2024/07/01",
"position": 86,
"query": "AI job losses"
},
{
"date": "2024/08/01",
"position": 83,
"query": "AI job losses"
},
{
"date": "2025/01/01",
"position": 81,
"query": "AI job losses"
}
] |
|
AI Disruption Reshapes Office Market Outlook - CRE Daily
|
AI Disruption Reshapes Office Market Outlook
|
https://www.credaily.com
|
[] |
AI disruption is driving job loss fears and reshaping the office market as vacancy rates rise and remote work trends evolve.
|
Top CEOs from Ford, JPMorgan, Amazon, and Anthropic are sounding alarms about AI’s potential to eliminate large portions of white-collar jobs, adding fresh uncertainty to the future of office demand.
Despite bleak projections, data shows that many AI projects still fail to launch, and some forecasts predict net job gains over the next decade.
Real estate experts caution that long-term vision is needed, as shifting work models, trust concerns, and overbuilt markets threaten traditional office sector assumptions.
Executive Warnings Get Louder
The future of white-collar work—and by extension, the office market—continues to shift under the weight of AI's rapid advancement, reports GlobeSt. Executives like Ford's Jim Farley and JPMorgan's Marianne Lake are openly predicting significant job losses, while Anthropic CEO Dario Amodei warns of unemployment levels nearing those of the Great Depression.
For office investors, this raises urgent questions about long-term occupancy rates and tenant demand, especially as the post-pandemic recovery remains incomplete.
Not All AI Bets Pay Off
While the headlines focus on displacement, the broader AI picture is more nuanced. According to S&P Global, nearly half of AI proof-of-concept projects fail before reaching production. A Harris Poll shows that nearly 80% of US executives worry they could lose their jobs if they can’t show business results from AI investments—underscoring the intense pressure driving dramatic forecasts.
Hope On The Horizon?
Despite disruption, many expect AI to drive future job growth. The World Economic Forum projects a net gain of 78M jobs by 2030, and PwC reports that over 40% of CEOs plan to increase hiring—thanks in part to AI adoption.
What This Means For CRE
For the office sector, the implications are far from clear-cut. JLL’s Ram Srinivasan stresses the need for a long-term view. He suggests AI may not just change where people work—but how and why. Anecdotes of remote employees working multiple jobs hint at the evolving structure of white-collar labor.
Meanwhile, growing concerns over trust and authenticity could spur a return to in-person interactions, giving physical office spaces renewed relevance.
Market-By-Market Impact
In overbuilt markets like Dallas, Houston, and Austin, the shift toward smaller, more efficient office footprints could exacerbate existing vacancy issues. Construction and real estate leader Brad Werner anticipates increased consolidation and repositioning, rather than expansion.
Looking Ahead
The intersection of AI and office demand presents a high-stakes challenge for commercial real estate. Owners and developers face a crucial decision: double down on adaptive reinvestment, repurpose aging assets, or rethink the very concept of workspace. As AI continues to evolve, success in the office sector may depend less on square footage and more on strategic foresight.
| 2023-03-01T00:00:00 |
https://www.credaily.com/briefs/ai-disruption-reshapes-office-market-outlook/
|
[
{
"date": "2023/03/01",
"position": 87,
"query": "AI job losses"
},
{
"date": "2023/09/01",
"position": 66,
"query": "AI job losses"
},
{
"date": "2023/12/01",
"position": 68,
"query": "AI job losses"
}
] |
|
Enterprise hits and misses - AI regulation and job losses capture the ...
|
Enterprise hits and misses - AI regulation and job losses capture the headlines, but cybersecurity looms
|
https://diginomica.com
|
[] |
Microsoft to cut about 4% of jobs amid hefty AI bets - the key takeaway here is not that Microsoft is laying off four percent because AI can now ...
|
Lead story - Not just any cyber-crisis, this is an M&S cyber-crisis - the inside track on UK retail's most high-profile online security attack
Security and risk management are frequent themes in our enterprise coverage, but: you could argue these topics should be front and center even more. Stuart handles that this week, via his analysis of what went wrong at M&S:
It was 8pm on Easter Saturday when the nightmare began for Archie Norman. That was when his phone rang and the Marks & Spencer Chairman was made aware that the UK retail champion was under cyber-attack from some hostile party. Months later, the company is still picking up the pieces, with normal service unlikely to be resumed until August.
Disconcerting truth: a truly crippling cybersecurity incident can wreak havoc for so long, you're lucky to have customers on the other side. To that point, Stuart adds: "after which the longer term damage repair work will have to get underway with customers." Seven years into a promising omni-channel transformation, the proverbial ___ has hit the fan at M&S:
For a period of months now, M&S’s ability to trade online has been essentially non-existent. Visitors to the company website were told they could browse, but no transactional e-commerce was possible, not even an option to click-and-collect in-store. The stores themselves remained open, so that half of the omni-channel mix remained up-and-running, albeit with staff unable at first to do things like check stock levels in other stores online.
Could the fallout have been worse? Perhaps... By the time of the attack, M&S had already made significant cybersecurity investments, and rehearsed a hostile attack. M&S may have taken one on the chin, but we can learn from their communications. Stuart:
To its credit, M&S has sought to keep customers up-to-speed with what’s been going on, with regular status updates. That’s a candor that perhaps others in a similar plight might not have demonstrated.
Stuart points to the elephant in the room:
As this painful period for the 141 year old retailer enters full recovery phase, the question will undoubtedly linger - was there anything more that could have been done by management that might have avoided the chaos of the past few months?
At the moment, we don't know. Stuart quotes Chairman Archie Norman:
This is a question that I’m sure we’ll be asking ourselves for the next 12 months. There’s always something that could have been done. We owe it to everybody to examine that and get all the facts on the table.
Businesses will be judged not for being invulnerable, but for how contained the attacks were, and how resilient the business itself was. Based on what I'm reading here, I can't give M&S high marks for containment, but they may be doing better on the resilience side.
Perhaps the most sobering part? Your transformation - no matter how ambitious, no matter how "customer-centric" - is only as strong as your weakest link. Enterprises in the midst of such projects would do well to double their investment/rigor in cybersecurity, especially on the pro-active side (new tech and new attack vectors). Even so, when it comes to adversarial tech adoption, hostiles are usually one step ahead, with more than enough resources to inflict misery. Striving to keep pace is infinitely better than hoping for the best.
Diginomica picks - my top stories on diginomica this week
Vendor analysis, diginomica style. Here's my top choices from our vendor coverage:
A few more vendor picks, without the quotables:
Jon's grab bag - Stuart parses the AI strategy shift in Enterprises now have strategic AI budgets, but the agentic revolution's timeline will take longer according to NASSCOM. And yet, as Chris writes, trust issues loom: Trust in data is critical to Artificial Intelligence adoption, says TELUS survey. But is that right? Cath covers an innovative government program - not a phrase I use everyday - in UK's Ofgem removes disability "reasonable accommodations" barriers with its Optimal Working Passport app. Finally, Brian recaps an AI advice request in How should businesses kick off their AI initiatives? Time for the AI advice column - your doctors are in.
Best of the enterprise web
My top six
Direction of AI policy will dictate the direction of humanity - this post from Vijay Vijayasankar nails a key issue: direction of innovation. Vijayasankar:
If we choose to solve the high value problems like better and cheaper energy, food and water – the short term risk for investors is high but medium to long term returns are massive. And those things will create new jobs – which can then offset any job losses that can happen from eliminating the mind numbing roles people do now. We get to a better place with less pain.
Whiffs
First rule of layoffs: don't shoehorn them into a techno-marketing opp:
Laid-off workers should use AI to manage their emotions, says Xbox exec https://t.co/vZxqRIs8nY
-> sorry you lost your job because of our AI investments, but hey, at least AI will listen to you... AI can even let your landlord know you can't pay rent this month!
cc:… — Jon Reed (@jonerp) July 6, 2025
More AI pride from Microsoft:
'AI is no longer optional': Microsoft is allegedly pressuring employees to use AI tools through manager evaluations tech.yahoo.com/ai/articles/... our AI tools are so incredible, so transformative, so impactful to productivity -> so let's force our people to use them — Jon Reed (@jon.diginomica.com) 2025-07-06T19:54:39.960Z
But move over Microsoft, Anthropic can whiff with the best of them when it unleashes agents without constraints:
The Morning After: Don’t let an AI run a vending machine https://t.co/4QQ9wv7re5
AI "tried to fire the humans who helped restock its shelves and attempted to contact building security"
-> also insisted it had an embodied form — Jon Reed (@jonerp) July 3, 2025
See you next time... If you find an #ensw piece that qualifies for hits and misses - in a good or bad way - let me know in the comments as Clive (almost) always does. Most Enterprise hits and misses articles are selected from my curated @jonerpnewsfeed.
| 2025-07-07T00:00:00 |
2025/07/07
|
https://diginomica.com/enterprise-hits-and-misses-ai-regulation-and-job-losses-capture-headlines-cybersecurity-looms
|
[
{
"date": "2023/03/01",
"position": 88,
"query": "AI job losses"
},
{
"date": "2023/05/01",
"position": 97,
"query": "AI job losses"
},
{
"date": "2023/07/01",
"position": 91,
"query": "AI job losses"
},
{
"date": "2023/09/01",
"position": 57,
"query": "AI job losses"
},
{
"date": "2023/11/01",
"position": 86,
"query": "AI job losses"
},
{
"date": "2023/12/01",
"position": 64,
"query": "AI job losses"
},
{
"date": "2024/01/01",
"position": 91,
"query": "AI job losses"
},
{
"date": "2024/02/01",
"position": 93,
"query": "AI job losses"
},
{
"date": "2024/03/01",
"position": 92,
"query": "AI job losses"
},
{
"date": "2024/04/01",
"position": 97,
"query": "AI job losses"
},
{
"date": "2024/05/01",
"position": 90,
"query": "AI job losses"
},
{
"date": "2024/06/01",
"position": 91,
"query": "AI job losses"
},
{
"date": "2024/07/01",
"position": 90,
"query": "AI job losses"
},
{
"date": "2024/08/01",
"position": 93,
"query": "AI job losses"
},
{
"date": "2025/01/01",
"position": 84,
"query": "AI job losses"
}
] |
Xbox producer suggests using AI to cope with job losses | AnewZ
|
Xbox producer suggests using AI to cope with job losses
|
https://anewz.tv
|
[
"Ilknur Seydamirova"
] |
Xbox producer suggests using AI to cope with job losses ... Matt Turnbull, executive producer at Xbox Game Studios Publishing, faced criticism ...
|
Matt Turnbull, executive producer at Xbox Game Studios Publishing, faced criticism after a LinkedIn post recommending artificial intelligence tools to help employees deal with the emotional impact of job cuts.
Microsoft recently announced plans to cut up to 9,000 jobs this year, affecting roughly 4% of its 228,000 global workforce. Reports indicate Xbox’s gaming division could be among those affected.
In the now-deleted post, Turnbull suggested using AI tools like ChatGPT or Copilot to “reduce the emotional and cognitive load that comes with job loss.”
He shared prompts for career planning, resume writing, and emotional support through AI.
Some users called the suggestion “plain disgusting” and said it left them “speechless.”
Microsoft is investing heavily in AI, with plans to spend $80 billion on data centres to train AI models, reflecting its push into the technology despite workforce reductions.
| 2023-03-01T00:00:00 |
https://anewz.tv/science/artificial-intelligence/10293/xbox-producer-suggests-using-ai-to-cope-with-job-losses/news
|
[
{
"date": "2023/03/01",
"position": 90,
"query": "AI job losses"
},
{
"date": "2023/07/01",
"position": 97,
"query": "AI job losses"
},
{
"date": "2023/09/01",
"position": 80,
"query": "AI job losses"
},
{
"date": "2023/11/01",
"position": 94,
"query": "AI job losses"
},
{
"date": "2023/12/01",
"position": 87,
"query": "AI job losses"
},
{
"date": "2024/01/01",
"position": 95,
"query": "AI job losses"
},
{
"date": "2024/02/01",
"position": 98,
"query": "AI job losses"
},
{
"date": "2024/03/01",
"position": 98,
"query": "AI job losses"
},
{
"date": "2024/05/01",
"position": 96,
"query": "AI job losses"
},
{
"date": "2024/06/01",
"position": 96,
"query": "AI job losses"
},
{
"date": "2024/07/01",
"position": 95,
"query": "AI job losses"
},
{
"date": "2024/08/01",
"position": 95,
"query": "AI job losses"
},
{
"date": "2025/01/01",
"position": 93,
"query": "AI job losses"
}
] |
|
Ford CEO latest to claim AI will wipe out half of white collar jobs in ...
|
Ford CEO latest to claim AI will wipe out half of white collar jobs in the U.S. — 'AI will leave a lot of white-collar people behind'
|
https://www.tomshardware.com
|
[
"Jon Martindale",
"Freelance Writer"
] |
Ford CEO Jim Farley has joined the chorus of CEOs who believe AI could wipe out 50% of white collar jobs.
|
Ford CEO Jim Farley said in a recent interview that he believes "half of all white-collar workers in the U.S." could lose their jobs to artificial intelligence (AI) in the coming years. Although other executives, like Nvidia's Jensen Huang, claim otherwise, Farley is not alone in his predictions. A number of other business leaders have been quoted this year saying that the workplace as we know it will change dramatically due to AI, and that's going to mean redundancies as per WSJ.
The debate over the impact of AI on workers and their employment status has been hot for the past few years. Some of the biggest fear-mongers come from the AI industry itself, with the CEO of Anthropic, Dario Amodei, recently stating that he thought half of all white collar workers would be replaced by AI, and that this could spike unemployment to 20%.
Although an argument could be made that Anthropic benefits from overhyping the capabilities and potential impact of AI, other CEOs in non-affiliated industries are also sounding warning bells. Along with Ford's Farley, Fiverr's CEO Micha Kaufman said in a staff memo earlier this year that it didn't matter what field you were in, "AI is coming for you."
The CEO of Shopify, Tobi Lütke, was quoted as telling workers to only make new hires if AI couldn't do the job instead. IBM has reportedly replaced several hundred of its HR workers with AI agents. There are also reports that Microsoft is internally trying to use AI for more tasks, which could be a factor in its latest round of layoffs.
The CEO of JPMorgan Chase also said in May that it could see its headcount being cut by 10% in the coming years as it makes use of new AI tools in its business. Amazon's CEO, Andy Jassy, told employees in June that it expected its overall corporate workforce to be reduced over the coming years. This may come in line with its expanded use of robotics across its logistics network.
However, CEOs are known to want to share exciting potential for new efficiency savings and potential profit expansion to stockholders and the media, and not every executive is convinced that AI will have such a devastating effect on the employment of millions of workers around the world.
Nvidia's Huang has previously said he “pretty much disagree[s] with almost everything," that the Anthropic CEO said about the potential dangers of AI. OpenAI's COO, Brad Lightcap, also stated on the "Hard Fork" podcast recently that there has been little evidence so far that AI has replaced entry-level jobs on a large scale and that any such change won't be as swift or as broad as others claim.
It may be that certain jobs are displaced, but that others become more important in turn. Following IBM's reduction in HR staff, it actively hired more programmers and salespeople.
Others argue that AI will simply make workers more efficient, increasing productivity, rather than reducing the need for workers altogether.
| 2025-07-03T00:00:00 |
2025/07/03
|
https://www.tomshardware.com/tech-industry/artificial-intelligence/ford-ceo-predicts-half-white-collar-workers-lose-jobs-ai
|
[
{
"date": "2023/03/01",
"position": 92,
"query": "AI job losses"
},
{
"date": "2023/09/01",
"position": 50,
"query": "AI job losses"
},
{
"date": "2023/12/01",
"position": 59,
"query": "AI job losses"
}
] |
CIOs see AI prompting new IT hiring, even as boards push for job cuts
|
CIOs see AI prompting new IT hiring, even as boards push for job cuts
|
https://www.cio.com
|
[] |
But in the weeks leading up to the release of the Deloitte report, Meta, Salesforce, Microsoft, Dell, and Intel collectively announced more than ...
|
CIO expectations that AI use will create more IT jobs appear to be on a collision course with cost-cutting demands of CEOs and board members looking to AI to shrink the workforce.
Many company boards now insist on using AI to cut workforce costs by about 20%, but IT leaders largely believe they will be able to hire more staff to implement generative AI solutions.
Nearly seven in 10 IT leaders plan to increase headcount in response to gen AI, according to a Deloitte survey of C-level IT leaders released in June. Most CIOs see the need to hire additional AI experts to ramp up their capabilities before enabling some AI-driven efficiencies, says Lou DiLorenzo Jr., practice leader for technology, AI, and data strategy at Deloitte.
| 2023-03-01T00:00:00 |
https://www.cio.com/article/4015750/cios-see-ai-prompting-new-it-hiring-even-as-boards-push-for-job-cuts.html
|
[
{
"date": "2023/03/01",
"position": 93,
"query": "AI job losses"
},
{
"date": "2023/07/01",
"position": 88,
"query": "AI job losses"
},
{
"date": "2023/09/01",
"position": 69,
"query": "AI job losses"
},
{
"date": "2023/11/01",
"position": 85,
"query": "AI job losses"
},
{
"date": "2023/12/01",
"position": 66,
"query": "AI job losses"
},
{
"date": "2024/01/01",
"position": 87,
"query": "AI job losses"
},
{
"date": "2024/02/01",
"position": 86,
"query": "AI job losses"
},
{
"date": "2024/03/01",
"position": 87,
"query": "AI job losses"
},
{
"date": "2024/04/01",
"position": 98,
"query": "AI job losses"
},
{
"date": "2024/05/01",
"position": 88,
"query": "AI job losses"
},
{
"date": "2024/07/01",
"position": 93,
"query": "AI job losses"
},
{
"date": "2024/08/01",
"position": 88,
"query": "AI job losses"
},
{
"date": "2025/01/01",
"position": 90,
"query": "AI job losses"
}
] |
|
AI job predictions become corporate America's newest ... - TechCrunch
|
AI job predictions become corporate America’s newest competitive sport
|
https://techcrunch.com
|
[
"Connie Loizos",
"Jagmeet Singh",
"Julie Bort",
"Rebecca Bellan",
"Sarah Perez",
"Maxwell Zeff",
"Lauren Forristal",
".Post-Authors-List__Authors --Font-Size Var",
"Align-Items Center Display Flex Gap Var",
".Post-Authors-List__Authors .Post-Authors-List__Author-Thumbs Display Flex Flex-Shrink Margin Padding .Post-Authors-List__Authors .Post-Authors-List__Author-Thumbs Li List-Style None Margin-Left Margin-Top Important .Post-Authors-List__Authors .Post-Authors-List__Author-Thumbs Li First-Child Margin-Left .Post-Authors-List__Authors .Post-Authors-List__Author-Thumbs .Post-Authors-List__Author-Thumb Background-Color Var"
] |
In late May, Anthropic CEO Dario Amodei appeared to kick open the door on a sensitive topic, warning that half of entry-level jobs could ...
|
In Brief
In late May, Anthropic CEO Dario Amodei appeared to kick open the door on a sensitive topic, warning that half of entry-level jobs could vanish within five years because of AI and push U.S. unemployment up to 20%.
But Amodei is far from alone in sharing that he foresees a workforce bloodbath. A new WSJ story highlights how other CEOs are also issuing dire predictions about AI’s job impact, turning employment doom into something of a competitive sport.
Several of these predictions came before Amodei’s comments. For example, at JPMorgan’s annual investor day earlier in May, its consumer banking chief Marianne Lake projected AI would “enable” a 10% workforce reduction.
But they’ve been coming fast, and growing more stark, ever since. In a note last month, Amazon’s Andy Jassy warned employees to expect a smaller workforce due to the “once-in-a-lifetime” technological shift that’s afoot.
ThredUp’s CEO said at a conference last month that AI will destroy “way more jobs than the average person thinks.” Not to be outdone, Ford’s Jim Farley delivered perhaps the most sweeping claim yet, saying last week that AI will “literally replace half of all white-collar workers in the U.S.”
It’s a dramatic shift from executives’ previous cautious public statements about job displacement, notes the Journal. Indeed, the outlet notes that while some tech leaders — including from powerful AI companies — have proposed that fears are overblown, the growing string of warnings suggests massive restructurings are coming, whether people are ready for them or not.
| 2025-07-02T00:00:00 |
2025/07/02
|
https://techcrunch.com/2025/07/02/ai-job-predictions-become-corporate-americas-newest-competitive-sport/
|
[
{
"date": "2023/03/01",
"position": 94,
"query": "AI job losses"
},
{
"date": "2023/09/01",
"position": 78,
"query": "AI job losses"
},
{
"date": "2023/12/01",
"position": 76,
"query": "AI job losses"
}
] |
AI is already replacing jobs in Europe, and no one's tracking it.
|
AI is silently rewriting jobs across Europe
|
https://euroweeklynews.com
|
[] |
AI is already replacing jobs in Europe, and no one's tracking it. ... Across Europe, artificial intelligence has become a significant employment ...
|
By Tarek Salame • Published: 07 Jul 2025 • 19:10
A worker protests against job displacement caused by artificial intelligence in Europe. Credit: Tatyana via Canva.com
Across Europe, artificial intelligence has quietly become a major employment story. You wouldn't know it from the headlines, where governments announce new innovation hubs and funding schemes, but the real shift is happening at ground level. Routine tasks are disappearing, job descriptions are shrinking, and entire roles are being redefined to fit software. There is no official count and no labour dashboard that marks this change, but from freelancers to mid-level staff, people are already being pushed out, not through layoffs but through gradual exclusion from the work they once did.
This article investigates how AI is altering the European labour market in ways that aren't being captured by statistics or policy. It looks at the sectors already affected, the workers left with fewer options, and the governments still treating this as a future risk when, for many, it is already shaping the present.
AI is displacing human input.
Across sectors in Europe, AI is stripping away tasks one by one. And in many cases, the roles are becoming downgraded or outsourced.
Translation and content:
In the Netherlands, Germany, and Finland, professional translators and copy editors have reported significant declines in freelance contracts as companies adopt AI assistant platforms such as DeepL Pro and Grammarly Business.
According to the European Language Industry Association, over 60% of translation agencies now use AI tools for initial drafts with minimal human review.
Call centres and customer support
Portugal and Greece, known hubs for outsourced support work, are seeing the early stages of voice and chat automation.
Lisbon-based global service providers such as Teleperformance have introduced AI assistants to handle up to 70% of Tier 1 queries.
Human agents are now being shifted into “escalation-only” roles — fewer hours and lower base pay.
Logistics and Warehouse
Operations in Germany and Poland now include predictive scheduling, AI route planning, and automated adjustment of work and delivery networks.
A 2024 McKenzie report on automation flags warehousing as one of the top three sectors for partial task erosion, noting that Amazon-style systems are replacing picking and inventory checking in DPD and DHL facilities.
The reshaping of middle-skill work
AI is a fundamental disruption in Europe, and it is happening in the middle, among the jobs that once offered progression, stability, and a clear role within organisations. Administrative roles are being redefined: a 2023 Eurofound analysis found that hiring for roles such as HR assistants, payroll clerks, and scheduling coordinators declined by 18% across major job platforms.
Even law firms in France, Spain, and Belgium are using generative AI for early-stage research as well as document drafting.
In Spain, firms using platforms like Luminance or Harvey AI report that they are cutting first-year paralegal hours by over 30%.
Even the human remnants of automated jobs are being outsourced to cheaper markets or into precarious gig work. What remains in Europe is becoming more fragmented: part-time, short-term, or repackaged as project coordination. It's not unemployment, but it is underemployment dressed in a new suit.
The illusion of upskilling
Data from the European Commission's Digital Decade Progress Tracker show that 64% of large companies say they offer some form of digital training, yet only 23% of workers, particularly those in low- or mid-skilled roles, report having access to it.
In 2024, a CEPS study found that over 70% of retraining investments in the EU went to “digitally engaged” demographics, the same workers who are least likely to be displaced by AI in the first place.
A 2024 OECD report revealed a growing divide in AI productivity gains across member states. Countries such as the Netherlands and Sweden have seen increases in output, but without corresponding job growth.
The Bank of Spain noted in March 2025 that early AI adoption among firms correlated with net task displacement in roles paying under €25,000 a year.
What we are seeing is more than disruption: AI is not reducing labour so much as reassigning it. In doing so, it destabilises wage structures, rewrites job hierarchies, and exposes social gaps that European institutions are not equipped to address.
The takeover
AI is targeting jobs we don't usually expect: the hiring assistant, the coordinator who relies on the system, and the junior analyst who makes sense of data.
These are fragments of roles now handled by language models, scheduling bots, and predictive dashboards. Taken together, they add up to something bigger: a soft erasure of the human layer in everyday work.
AI is learning how to perform the connective tasks between departments and roles, and no one is keeping track of how much it is evolving and expanding within Europe. By the time we find out, it may already be too late.
| 2025-07-07T00:00:00 |
2025/07/07
|
https://euroweeklynews.com/2025/07/07/ai-is-already-replacing-jobs-in-europe-and-no-ones-tracking-it/
|
[
{
"date": "2023/03/01",
"position": 98,
"query": "AI replacing workers"
}
] |
Do you know anyone who's lost their job to AI? : r/singularity
|
The heart of the internet
|
https://www.reddit.com
|
[] |
Human translators are losing jobs worldwide. Due to significant improvements in machine translation there has been less and less demand for human translators ...
|
Losing a job to automation makes sense on paper. I also see many people losing their shit as of lately on this subreddit over this pending concern. However I do not know of anyone personally who's lost their job yet to recent strong AI, weak AI or really anything novel of late.
The closest thing I know to this is my brother who lost his job as a parking attendant (the person who stands at the front of a paid parking lot). But this was over a decade ago and a result of dumb automation.
So does anyone know of anyone who's actually lost their job yet to recent AI?
And by this I don't mean:
- Not hiring people because of increased efficiency. e.g. Amazon warehouses.
- Transferring people to other jobs.
- Letting go of people for things unrelated to AI. e.g. recent bloated tech firings.
What I mean is a bonafide "Hey Bill, sorry but bot got your job, we don't need you anymore."
| 2023-03-01T00:00:00 |
https://www.reddit.com/r/singularity/comments/11vbyrn/do_you_know_anyone_whos_lost_their_job_to_ai/
|
[
{
"date": "2023/03/01",
"position": 9,
"query": "AI unemployment rate"
},
{
"date": "2023/03/01",
"position": 6,
"query": "AI job creation vs elimination"
},
{
"date": "2023/03/01",
"position": 10,
"query": "artificial intelligence layoffs"
}
] |
|
Brace for Impact: Facing the AI Revolution and Geopolitical ...
|
Brace for Impact: Facing the AI Revolution and Geopolitical Shifts in a Future Societal Scenario for 2025–2040
|
https://www.mdpi.com
|
[
"Gerlich",
"Michael Gerlich"
] |
by M Gerlich · 2024 · Cited by 11 — Specifically, the study forecasts a high probability of AI-induced unemployment reaching 40–50%, alongside the rapid evolution of AI technologies, outpacing ...
|
This study aims to explore the intertwined impacts of AI and geopolitical developments on the global landscape from 2025 to 2040. By focusing on these two critical trends, the research seeks to provide a comprehensive scenario analysis that highlights the potential societal, economic, and political transformations driven by AI and the shifting geopolitical order. The study’s findings underscore the urgent need for governments and institutions to develop strategies that address the dual challenges of technological disruption and geopolitical instability, ensuring a resilient and sustainable future. As AI continues to evolve, it is poised to redefine the future of human societies in ways we are only beginning to comprehend [ 17 ].
The intersection of AI and geopolitical developments creates a complex and dynamic global environment that demands adaptive strategies and robust policy responses. As AI becomes increasingly intertwined with geopolitical strategies, its role in enhancing or destabilising global security cannot be understated. Nations are not only competing in traditional military and economic arenas but are also vying for supremacy in AI, recognising its potential to confer significant strategic advantages [ 16 ]. The dual impacts of AI and geopolitical shifts necessitate a comprehensive approach to global governance, one that considers the multifaceted challenges and opportunities presented by these trends.
Simultaneously, the geopolitical landscape is undergoing profound transformations. The re-emergence of great-power competition, particularly between the United States and China, is reshaping global dynamics in significant ways. This competition spans military, technological, and economic spheres, leading to a strategic realignment in various regions worldwide [ 7 ]. The rivalry between these superpowers has disrupted global trade and investment patterns, and is exacerbated by rising protectionism and trade wars, which have significantly impacted the global economy [ 8 ]. The imposition of tariffs and trade barriers, primarily between the U.S. and China, has resulted in considerable economic losses and heightened uncertainty in global markets. The rise of other powers, such as Russia, has introduced additional complexities into the global geopolitical environment. The assertive foreign policies and strategic alliances pursued by these nations challenge the existing hegemony of Western powers, leading to a more multipolar world order [ 9 11 ]. The ongoing conflict between Ukraine and Russia exemplifies the disruptive potential of geopolitical tensions. This conflict has not only resulted in significant humanitarian crises, with over 30,000 civilian casualties and millions of displaced individuals [ 12 ], but it has also strained international relations and divided global powers. The United States, despite facing its own economic challenges, has provided substantial aid to Ukraine, reflecting the broader strategic interests at play [ 13 15 ].
The rapid advancements in Artificial Intelligence (AI) and the intensifying geopolitical tensions are two of the most significant forces reshaping the global landscape in the 21st century. AI, with its transformative potential, is driving unprecedented changes across various sectors, from economic growth and employment to international relations and security. As AI technologies evolve, they not only promise to revolutionise industries and enhance efficiency but also pose substantial risks, including job displacement, privacy concerns, and ethical dilemmas. The implications of AI extend beyond the economic realm, influencing geopolitical strategies, power dynamics, and the nature of global competition. AI has emerged as a pivotal element in the Fourth Industrial Revolution, fundamentally altering the way societies function. Its capacity to automate complex tasks, enhance decision-making processes, and create new economic opportunities positions it as a critical driver of future growth [ 1 2 ]. However, this technological revolution also presents significant challenges, particularly in terms of employment. The potential for AI to displace a substantial portion of the global workforce raises urgent questions about the future of work and the adequacy of current educational and retraining systems [ 3 4 ]. Moreover, the ethical and regulatory challenges posed by AI, including issues of bias, transparency, and accountability, necessitate a proactive approach from both policymakers and industry leaders [ 5 ]. The evolution of global dynamics, particularly influenced by technology, has been significant over the past few decades [ 6 ].
The findings from the Delphi process, scenario analysis, and probabilistic modelling were integrated into a cohesive framework which provides a comprehensive view of the potential futures shaped by AI and geopolitical developments. This framework serves as a valuable tool for policymakers, industry leaders, and researchers, helping them navigate the complexities of a rapidly changing global landscape.
The study ensures the privacy and confidentiality of the experts by not collecting personal identification information. Data are stored securely for one year, after which all responses are deleted. Ethical guidelines and approvals from relevant institutions are followed throughout the research process.
Data collection involves three rounds of discussions with the selected experts. The first round focused on identifying significant issues, uncertainties, and potential developments. The second and third rounds refined these insights and developed robust scenarios. The final data was used for probabilistic modelling, where the likelihoods of different scenarios are quantified.
The results from the probabilistic modelling provided valuable insights into the relative likelihood of different scenarios and their potential impacts. These quantitative findings were integrated with the qualitative scenario narratives, allowing the study to offer a more comprehensive and nuanced understanding of the future landscape shaped by AI and geopolitical developments. These insights are critical for stakeholders who need to make informed decisions in the face of uncertainty, offering a way to quantify risks and plan strategically for various possible futures.
To quantify the impact of the identified uncertainties, a Monte Carlo simulation was conducted. This simulation involved running thousands of iterations, each representing a different possible future based on the defined probability distributions. The Monte Carlo method allowed the study to explore the full range of possible outcomes for each scenario, providing a probabilistic assessment of risks and opportunities [ 25 ].
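As a rough illustration of the Monte Carlo step described above, the sketch below draws from hypothetical probability distributions for three of the variables the study names (AI adoption rate, geopolitical stability, economic growth) and summarises the simulated outcomes. The distributions, coefficients, and the toy "unemployment pressure" formula are illustrative assumptions, not the paper's actual model.

import numpy as np

# A minimal sketch, assuming hypothetical distributions for the uncertain variables.
rng = np.random.default_rng(seed=42)
N = 10_000  # number of Monte Carlo iterations

ai_adoption_rate = rng.triangular(left=0.2, mode=0.5, right=0.9, size=N)  # share of firms adopting AI
geopolitical_stability = rng.beta(a=2.0, b=3.0, size=N)                   # 0 = unstable, 1 = stable
economic_growth = rng.normal(loc=0.02, scale=0.015, size=N)               # annual growth rate

# Toy outcome model: pressure rises with adoption and instability, is dampened by growth.
# The coefficients are made up purely for illustration.
unemployment_pressure = (
    0.6 * ai_adoption_rate
    + 0.3 * (1.0 - geopolitical_stability)
    - 1.5 * economic_growth
)

# Probabilistic summary of the simulated futures
print(f"median unemployment pressure: {np.median(unemployment_pressure):.3f}")
print(f"5th-95th percentile range: {np.percentile(unemployment_pressure, 5):.3f}"
      f" to {np.percentile(unemployment_pressure, 95):.3f}")
print(f"P(pressure > 0.5): {(unemployment_pressure > 0.5).mean():.2%}")

Running many such iterations is what lets the approach report a range and a likelihood for each scenario rather than a single point estimate, which is the role the Monte Carlo step plays in the study.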
Once the key variables were defined, probability distributions were developed to represent the range of possible values for each variable. These distributions were based on expert judgments, historical data, and scenario-specific considerations. The probability distributions showed the potential variability in outcomes across different scenarios. For example, the distribution for AI adoption rates was designed to capture both optimistic and pessimistic projections, reflecting the uncertainty inherent in predicting technological diffusion.
The first step in the probabilistic modelling process involved identifying and defining the key variables that would be subject to uncertainty. These variables included AI adoption rates, geopolitical stability indices, economic growth projections, and potential regulatory impacts. The definitions of these variables were informed by the expert panel’s input and the existing literature, ensuring that they were both relevant and accurate [ 24 ].
To complement the qualitative insights provided by the multiple-scenario analysis, the study employed probabilistic modelling to quantify the uncertainties associated with each scenario. This approach was particularly valuable in assessing the likelihood of different outcomes and in providing a more rigorous foundation for decision-making.
The finalised scenarios were summarised, including the dominant trends, critical uncertainties, and potential outcomes. These scenarios serve as a critical tool for policymakers, business leaders, and researchers, offering a structured way to anticipate and prepare for the diverse futures that AI and geopolitical developments might bring.
After the initial construction of the scenarios, they were presented to the expert panel for validation. This step involved a detailed review by the experts, who provided feedback on the plausibility, coherence, and completeness of the scenarios. The feedback was meticulously analysed and incorporated into the final versions of the scenarios, ensuring that they accurately reflected the collective expertise of the panel and were internally consistent.
The scenarios were enriched by integrating both qualitative insights from expert opinions and quantitative data where available. This dual approach ensured that the scenarios were not only conceptually robust but also grounded in empirical reality. For instance, AI adoption rates and economic impact projections were included in the scenarios, drawing on existing studies and the Delphi panel’s estimations.
For each scenario in the matrix, a detailed narrative was constructed to illustrate how the world might look under different combinations of AI and geopolitical developments. These narratives were not merely speculative but were grounded in the expert insights gathered during the Delphi process. Each narrative included a comprehensive exploration of the implications for global stability, economic growth, technological innovation, and international relations. The narratives were designed to be as vivid and plausible as possible, providing a rich context for understanding the potential impacts of AI and geopolitical shifts on various sectors.
The multiple-scenario analysis was guided by a structured framework which was developed based on the key uncertainties and driving forces identified in the Delphi process. The framework allowed for a systematic exploration of how these factors might interact in various ways to shape different future trajectories. The key uncertainties, such as the rate of AI adoption, the robustness of international regulatory frameworks, and the stability of geopolitical alliances, were positioned on axes to create a scenario matrix. This matrix served as the foundation for the development of detailed scenario narratives.
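To make the matrix construction concrete, here is a minimal sketch that crosses two illustrative uncertainty axes into a 2x2 scenario matrix and attaches a placeholder narrative to each cell. The axis labels and scenario wording are assumptions for illustration; the study's actual axes and narratives came from its expert panel.

from itertools import product

# Two illustrative uncertainty axes (hypothetical labels).
ai_axis = ["slow AI adoption", "rapid AI adoption"]
geopolitics_axis = ["stable alliances", "fragmented alliances"]

# Crossing the axes yields one cell per plausible future; each cell gets a narrative.
scenario_matrix = {
    (ai, geo): f"Scenario narrative: a world of {ai} under {geo}"
    for ai, geo in product(ai_axis, geopolitics_axis)
}

for axes, narrative in scenario_matrix.items():
    print(axes, "->", narrative)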
After achieving consensus through the Delphi method, the study employed a multiple-scenario analysis to explore how the identified uncertainties and trends might evolve in different combinations, resulting in several distinct and plausible future scenarios. This step was essential for understanding the range of possible outcomes that could emerge from the complex interaction between AI and geopolitical dynamics.
The Delphi method’s iterative nature and its focus on achieving expert consensus ensured that the scenarios developed in this study are both comprehensive and reflective of the most credible insights from the leading minds in AI and geopolitical strategy.
The final round of the Delphi process aimed to solidify the consensus among the experts. In this stage, the scenarios that had emerged as most significant were presented back to the experts for final validation. Experts reviewed these scenarios, focusing on their plausibility and potential impact. This round allowed the experts to adjust or confirm their views based on the collective insights gained throughout the Delphi process. The results from this round highlight the agreed-upon probabilities and impacts of the identified scenarios, providing a robust foundation for the subsequent scenario analysis.
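One common way to operationalise this kind of consensus tracking is to monitor the spread of expert estimates from round to round, for example via the interquartile range. The sketch below uses invented ratings from a 30-member panel purely to illustrate the mechanic; it is not the study's scoring procedure.

```python
import numpy as np

def iqr(ratings):
    """Interquartile range of a set of expert ratings (smaller = more consensus)."""
    q75, q25 = np.percentile(ratings, [75, 25])
    return q75 - q25

# Hypothetical probability estimates (0-100%) from a 30-expert panel for one
# scenario, across three Delphi rounds; the spread narrows as feedback is shared.
rounds = {
    "round 1": np.random.default_rng(1).normal(60, 20, 30).clip(0, 100),
    "round 2": np.random.default_rng(2).normal(65, 12, 30).clip(0, 100),
    "round 3": np.random.default_rng(3).normal(68, 6, 30).clip(0, 100),
}

for name, ratings in rounds.items():
    print(f"{name}: median={np.median(ratings):.0f}%, IQR={iqr(ratings):.1f}")

# A common stopping rule is to declare consensus once the IQR falls below a
# preset threshold (e.g., 10 percentage points).
```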
The second round of the Delphi process involved refining the broad range of ideas and sub-factors identified in the first round. The research team synthesised the responses to identify common themes and patterns, which were then developed into more focused questions and potential scenarios. Experts were asked to review these refined scenarios, providing their evaluations and insights on the most probable and impactful developments. This round aimed to achieve a higher level of consensus among the experts by narrowing down the focus to the most critical issues and uncertainties. The refined scenarios and themes were mapped.
In the first round of the Delphi process, experts were asked a series of open-ended questions designed to elicit a broad range of ideas and perspectives on the potential futures influenced by AI and geopolitical shifts. This round was crucial for gathering diverse insights into the possible outcomes and uncertainties associated with these developments. The responses from this round were extensive, with over 1947 sub-factors identified and considered during this stage. These sub-factors encompassed various aspects of AI integration, geopolitical strategies, economic impacts, and social changes, providing a comprehensive foundation for the subsequent refinement process.
A panel of 30 senior specialists was selected from leading global organisations known for their expertise in AI and geopolitical affairs. The organisations included Microsoft, Google, IBM, OpenAI, Amazon, the International Political Science Association, the American Academy of Political and Social Science, the London School of Economics and Political Science, the University of Cambridge, the Center for Strategic and International Studies, the World Bank, the United Nations, and the World Economic Forum. These institutions were chosen based on their prominent role in shaping AI development and global policy, ensuring that the panel comprised individuals with the necessary depth of knowledge and strategic insight into both technological and geopolitical domains. The selection process focused on identifying experts who had made significant contributions to their fields, thereby ensuring a high level of expertise and credibility in the Delphi process.
The initial step in the Delphi process involved identifying the key issues and setting clear objectives for the development of future scenarios. This task was undertaken by the research team, who reviewed the existing literature and conducted preliminary analyses to pinpoint the most significant trends and uncertainties within the fields of AI and geopolitical developments. These identified issues were intended to guide the entire Delphi process. The identified issues and objectives were then presented to the experts in subsequent rounds for verification and further refinement.
The Delphi method was chosen for its ability to manage complex, uncertain topics by relying on the collective intelligence of experts. Given the rapidly evolving nature of AI and the unpredictable nature of geopolitical developments, traditional forecasting methods may fall short. The Delphi method, however, provides a structured way to harness diverse expert opinions, refine these through feedback loops, and build a robust consensus on likely future scenarios. This is particularly important for the exploration of topics where data is scarce or where trends are still emerging, as is the case with the impact of AI in conjunction with geopolitical developments.
The Delphi method is employed to gather expert opinions and achieve a consensus on significant issues, uncertainties, and potential developments at the intersection of AI and geopolitical dynamics. The process involved multiple rounds of surveys with a carefully selected panel of experts from leading institutions known for their contributions to AI and geopolitical research [ 22 23 ].
The research design of this study is structured around a mixed-method approach, integrating qualitative and quantitative methodologies to comprehensively explore how AI and geopolitical developments will shape the global landscape from 2025 to 2040. The design is methodologically rigorous, combining the Delphi method, multiple-scenario analysis, and probabilistic modelling to answer this central research question.
The CAS framework is visually represented in Figure 1 , which illustrates the dynamic interactions and interdependencies between AI technologies and geopolitical developments. This figure serves as a foundational tool for understanding how various factors, such as technological innovation, economic shifts, and geopolitical strategies, interact within a global system.
The CAS framework is essential for this study because AI and geopolitical dynamics are inherently complex and adaptive, involving multiple feedback loops and dependencies. For instance, advancements in AI can influence geopolitical power balances, which in turn can affect the pace and direction of AI development. Historical economic shifts provide important insights into how such transformative forces can reshape entire industries [ 20 ]. The CAS framework allows us to capture these interdependencies, providing a more holistic view of how these forces shape the future. Small changes in one area (e.g., AI policy) can lead to significant, unpredictable outcomes in another (e.g., international relations). The CAS framework enables us to build future scenarios which are more robust and realistic by recognising this interdependence [ 21 ]. By employing the CAS framework, this study recognises that AI and geopolitical factors do not operate in isolation. Instead, they are interdependent elements within a complex global system in which feedback loops, historical contexts, and interconnected variables drive change. The CAS framework allows for a more nuanced analysis of potential future scenarios by acknowledging the open and adaptive nature of these systems. This approach is essential for exploring the multifaceted challenges and opportunities that arise from the interaction between AI advancements and geopolitical shifts, making it possible to develop scenarios that capture the intricacies of global developments between 2025 and 2040 [ 21 ].
This study utilises the Complex Adaptive Systems (CAS) framework, a robust analytical tool particularly well-suited for examining the dynamic and non-linear interactions between Artificial Intelligence (AI) and geopolitical developments. The CAS framework, originating from interdisciplinary research at the Santa Fe Institute, is instrumental in understanding how small changes in one area, such as AI innovation, can lead to significant and often unpredictable outcomes in another, such as global political stability [ 18 19 ].
The literature highlights the need for proactive and coordinated efforts to address these challenges, emphasising the importance of the strengthening of institutional capacities and fostering resilience within societies [ 4 26 ]. However, the current prognosis suggests that without significant changes in policy and governance, governments and societies will be ill-equipped to navigate the complexities of the future, leading to potentially severe consequences for global stability and human well-being. Figure 3 summarises the key impact chains and probabilities associated with the influence of AI on global dynamics by 2040.
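The cascading logic of such impact chains can be made explicit by chaining step probabilities. The sketch below reuses probabilities quoted in the text but treats them, as a simplifying assumption, as conditional on the preceding step; the study itself does not state how the figures combine.

```python
# Each step is (description, probability of the step occurring given the
# previous one). The probabilities echo values quoted in the scenarios, but
# treating them as conditional and independent is a simplifying assumption.
impact_chain = [
    ("AI-driven automation causes mass unemployment", 0.90),
    ("unemployment triggers social unrest",            0.65),
    ("governments deprioritise sustainability",        0.75),
]

p_full_chain = 1.0
for step, p in impact_chain:
    p_full_chain *= p
    print(f"{step}: step probability {p:.0%}, cumulative {p_full_chain:.0%}")
```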
The geopolitical developments further compound these challenges. The rise of nationalistic movements and ongoing conflicts such as the Russia–Ukraine war and tensions surrounding China’s geopolitical strategies present significant risks to global stability. The scenarios suggest that governments are largely unprepared to manage the cascading effects of these developments, including the increased anxieties and social fragmentation that are likely to accompany them.
This low level of readiness is particularly concerning, given the profound impacts predicted across various domains. The rapid and exponential speed of AI-driven change, combined with increased security risks due to AI and quantum computing, is expected to overwhelm existing regulatory frameworks. Governments are likely to struggle to keep pace with these advancements, leading to significant governance gaps that could exacerbate economic inequalities and social unrest. Similarly, the potential for AI to drive rapid advancements in medical and technical fields may outstrip the capacity of healthcare systems to adapt, particularly in regions with already limited resources. This lack of readiness is further exacerbated by the short-term orientation prevalent in Western businesses, as highlighted by Gerlich [ 75 ], which prioritises immediate returns over long-term strategic planning. Such an approach undermines the capacity to develop resilient responses to the complex, interconnected issues posed by these emerging global trends.
A crucial finding from the study is the alarmingly low readiness of governments and societies to effectively manage and mitigate the identified challenges posed by AI and geopolitical developments. The scenarios indicate that there is only an overall 10–15% probability that current governmental structures and societal systems are prepared to tackle the complex and interconnected issues that are likely to arise by 2040.
These developments are also expected to cause a 70% probability of increased anxieties globally. The potential for conflict involving a major power like China raises fears of a broader destabilisation of the international order, with significant implications for global security, trade, and economic stability [ 69 72 ]. The anxiety associated with China’s geopolitical manoeuvres reflects the broader uncertainty about the future of global governance and the potential for a shift away from the current international system towards a more multipolar or even bipolar world.
The combination of economic dependency and the risk of conflict contributes to a broader sense of global instability. As nations navigate their relationships with a rising China, the 70% probability of increased anxieties reflects the uncertainty surrounding China’s long-term strategic goals and the potential for these to clash with the interests of other major powers [ 72 ]. China’s geopolitical strategies, particularly its approach to international conflicts and economic policies, carry a 70% probability of increasing the risk of geopolitical conflicts or wars. The literature on China’s foreign policy suggests that its assertiveness in territorial disputes and its efforts to expand its influence through economic means could lead to significant tensions, particularly with the United States and its allies [ 73 74 ].
China’s economic policies and global influence are expected to play a pivotal role in shaping the future geopolitical landscape. The scenarios suggest a 70% probability that China’s economic rise will increase global economic dependencies, particularly in developing countries that rely heavily on Chinese investment and trade. This scenario aligns with the existing literature on China’s Belt and Road Initiative and its impact on global trade patterns [ 68 69 ]. However, these dependencies are also likely to lead, with 70% probability, to increased anxiety, especially among countries and regions that become heavily reliant on Chinese economic influence. The fear of economic coercion or dependency can lead to political instability and resistance against perceived Chinese dominance [ 70 ]. Furthermore, the scenarios indicate a 70% probability that China’s geopolitical strategies, particularly in the Asia-Pacific region, will increase the risk of geopolitical conflicts, which could lead to sanctions and have a negative economic impact on Western economies [ 71 ].
The scenarios indicate a 75% probability that Israel’s foreign policy and regional conflicts could lead to an increased risk of geopolitical conflicts or wars. The literature on Middle Eastern geopolitics underscores the potential for regional tensions to escalate, particularly in the context of unresolved conflicts, shifting alliances, and the involvement of major powers [ 4 66 ]. As these conflicts intensify, there is a 70% probability of increased anxiety among the populations within the region and globally. The volatility of the Middle East, coupled with the involvement of global superpowers, creates a complex and unstable environment that can lead to widespread fear and uncertainty [ 67 ]. This anxiety is often exacerbated by the unpredictable nature of the conflicts, in which even minor incidents can escalate into broader confrontations, affecting global security and economic stability.
In addition, there is a 50% probability that Russian politics will lead to the isolation of Russia on the global stage. This isolation could destabilise global energy markets, disrupt security alliances, and strain international trade relations. The literature highlights the risks associated with such isolation, particularly in terms of economic sanctions and the potential for Russia to pursue more aggressive foreign policies in response [ 64 65 ]. The 70% probability of increased anxieties due to this isolation reflects the global uncertainty that arises when a major power becomes estranged from the international community.
The ongoing conflict between Russia and Ukraine, coupled with Russia’s broader foreign policy, presents a significant risk to global stability. The scenarios suggest a 70% probability that the Russia–Ukraine war will expand, potentially drawing in other countries and escalating into a broader regional conflict. This aligns with the literature on the dynamics of conflict escalation and the risks posed by proxy wars and regional power struggles [ 61 62 ]. The expansion of the conflict is expected to cause, with 70% probability, increased anxiety among populations both within and outside the region. The fear of war spreading and the potential for direct involvement of other nations could lead to heightened tension and uncertainty, contributing to widespread psychological stress [ 63 ]. This anxiety is not limited to the immediate vicinity of the conflict but can ripple across global markets, affecting everything from energy prices to international trade flows [ 64 ].
Moreover, the rise of nationalistic movements is expected to lead, with 70% probability, to increased anxiety among the population. The resurgence of nationalism often brings with it a sense of uncertainty and fear, as individuals may feel threatened by perceived external and internal enemies. This anxiety can further erode social cohesion and trust in government institutions, potentially leading to increased social unrest [ 59 60 ].
As nationalistic movements gain momentum, there is a 60% probability that they will divert focus from global sustainability efforts. This shift could result in governments prioritising short-term economic gains over long-term environmental goals, exacerbating the challenges of addressing climate change [ 36 37 ].
The scenarios indicate a 70% probability that geopolitical developments will lead to increased nationalisation and the rise of nationalistic movements. This shift towards nationalisation is likely to result in greater geopolitical fragmentation, as countries prioritise national interests over global cooperation. The literature supports this scenario, with Gidron and Hall [ 57 ] and Rodrik [ 58 ] identifying economic insecurity and cultural backlash as key drivers of nationalism in the 21st century.
This loss of trust is likely to result, with 60% probability, in increased social isolation and a decline in social skills, as individuals become more dependent on AI-driven interactions. Turkle [ 44 ] and other scholars have noted that overreliance on digital technologies can lead to a reduction in meaningful face-to-face communication, weakening the social fabric and contributing to a sense of alienation. Figure 2 summarises the key impact chains and probabilities associated with the influence of AI on global dynamics by 2040.
One of the more troubling scenarios involves the 65% probability that the increasing integration of AI into decision-making processes will lead to a loss of trust in human relationships and institutions. As AI systems become more prevalent, there is a risk that people will rely more on these technologies than on human judgment, potentially undermining interpersonal trust and exacerbating social fragmentation [ 54 56 ].
AI holds significant potential to contribute to sustainability efforts, with a 60% probability that it will help develop solutions to environmental challenges. These solutions could include optimising energy usage, improving resource management, and aiding in the development of new, environmentally friendly materials. The literature highlights the potential for AI to drive innovations that contribute to a more sustainable future, particularly in the context of climate change [ 37 38 ]. There is also a 30% probability that initial successes in AI-driven sustainability could lead to complacency, reducing the urgency for broader systemic changes needed to address the root causes of environmental degradation. This scenario is supported by Stern [ 36 ], who cautions that technological solutions alone are insufficient to solve the complex challenges of sustainability and that they must be accompanied by significant policy and behavioural changes.
AI is anticipated to drive significant advancements in medical and technical fields, with a 90% probability that these sectors will evolve rapidly by 2040. This evolution is expected to lead to improvements in medical standards worldwide, with a 60% probability that technology will result in more comfort and better healthcare outcomes. The literature supports this optimistic view, noting that AI-driven innovations in precision medicine, diagnostic tools, and personalised treatments have the potential to revolutionise healthcare [ 50 51 ]. However, these advancements’ rapid pace could also exacerbate healthcare access inequalities, particularly in developing countries. The 54% probability of this occurring is concerning, as it could lead to a widening gap between those who have access to cutting-edge medical technologies and those who do not [ 52 ]. Furthermore, the focus on technological solutions might detract from addressing the social determinants of health, which is crucial for achieving long-term health equity [ 53 ].
These developments are also likely to contribute to a 60% probability of increased anxiety among the population, as individuals and businesses grapple with the constant threat of cyberattacks and the potential for significant personal and economic losses. The psychological toll of living under such constant threats can lead to a decline in societal well-being and further erode trust in digital technologies and the institutions that govern them [ 49 ].
Additionally, there is a 40% probability that governments and countermeasures will fail to keep up with advancements in cybercrime, resulting in significant disruptions to the economy and individual lives. This aligns with predictions by Schneier [ 48 ], who warns that the rapid evolution of AI in cyber operations could outstrip traditional security measures, leaving critical infrastructure vulnerable to attacks.
AI and quantum computing are expected to revolutionise security, and there is a 100% probability that these technologies will increase security risks, particularly in the realm of cybersecurity. This increase in risk is driven by the 60% probability that global security systems will become more vulnerable, potentially leading to international conflicts. The dual-use nature of AI and quantum technologies means they can be employed for both defence and offence, creating a complex security landscape that is difficult to manage [ 46 47 ].
Moreover, due to the overwhelming focus on managing the immediate effects of these rapid changes, there is a 75% probability that governments will deprioritise sustainability initiatives. This deprioritisation could lead to increased environmental degradation as short-term economic concerns take precedence over long-term sustainability goals [ 36 ]. The literature emphasises that such shifts in policy focus could significantly undermine global efforts to combat climate change, leading to a more extractive and less sustainable global economy [ 38 ].
The stress and anxiety associated with the inability to keep pace with AI-driven changes can also undermine public trust in institutions, which may further weaken social stability. Studies by Susskind and Susskind [ 28 ] highlight how rapid technological advancements, if not managed properly, can erode trust in traditional institutions, leading to greater societal unrest.
The exponential speed of AI-driven change is identified as a major impact factor with a 100% probability. This rapid evolution is expected to outpace government regulatory frameworks, leading to significant governance challenges. The inability of governments to adapt their regulations to keep up with technological advancements has a 100% probability of occurring and would result in regulatory gaps that could exacerbate economic inequalities and create new ethical dilemmas [ 4 43 ]. There is a 75% probability that the general population will struggle to keep up with this rapid pace of change, leading to widespread anxieties and psychological distress. The literature supports this, indicating that rapid technological change can lead to feelings of obsolescence and a loss of control, contributing to increased levels of anxiety and depression [ 44 45 ]. This psychological impact is further compounded by the 70% probability that societal structures will not be able to adapt quickly enough, leading to social fragmentation and a decline in community cohesion.
Finally, the scenarios highlight a critical weakness in the current governmental readiness to handle such a profound societal shift. There is only a 10% probability that governments are prepared to manage the challenges posed by this level of unemployment. This lack of preparedness could exacerbate the socio-economic and political crises predicted in these scenarios. Studies have long warned about the inadequacies of existing social and economic policies in the face of rapid technological change [ 41 42 ]. The potential for governments to be caught off-guard by the speed and scale of AI-driven changes is a significant risk that requires urgent attention from policymakers and stakeholders.
In addition to these broader economic and environmental impacts, the scenarios also predict an 80% probability that the increased unemployment will lead to a rise in depression and other mental health issues as individuals struggle with feelings of uselessness and disconnection from the workforce. The psychological impact of long-term unemployment is well-documented, with studies by Paul and Moser [ 39 ] and Jahoda [ 40 ] linking job loss to higher rates of depression, anxiety, and suicide. The rise in mental health issues could further strain public health systems, which are already grappling with the challenges posed by an ageing population and the increased prevalence of chronic diseases.
The internal challenges faced by governments due to unemployment are likely to have further cascading effects. The scenarios indicate a 75% probability that, due to these internal pressures, governments will deprioritise sustainability initiatives. This deprioritisation could result in significant setbacks for global efforts to combat climate change, as resources and attention are diverted to more immediate economic and social issues. The literature supports this outcome, with studies by Stern [ 36 ] and, more recently, Rockström et al. [ 37 ] warning that short-term crises often lead to the abandonment of long-term sustainability goals. The potential for AI to contribute to a more extractive and less sustainable global economy is a concern that has been raised by various scholars, who argue that without strong regulatory frameworks, AI could exacerbate existing environmental challenges [ 38 ].
Compounding this issue, there is a 90% probability that governments will be unable to manage the socio-economic fallout from such high unemployment levels. The financial and sociological burdens of providing adequate social safety nets and retraining programs and maintaining public order could overwhelm governmental capacities. Studies by Reinhart and Rogoff [ 35 ] on economic crises highlight how even well-prepared governments can struggle to manage the financial demands of large-scale unemployment, particularly when it is compounded by simultaneous pressures on multiple fronts. This scenario suggests that many governments may face difficult trade-offs, in which the need to stabilise the economy might come at the expense of other critical areas, such as healthcare or education.
This surge in unemployment, across both routine and analytical jobs, is expected to place immense pressure on governments. The scenarios suggest a 65% probability that the resulting internal pressures could lead to increased social unrest, which might escalate into external conflicts. The potential for AI-induced unemployment to cause such instability is supported by studies on the social consequences of technological disruption. Earlier work [ 32 ] and, more recently, Acemoglu and Restrepo [ 33 ] have shown that economic deprivation, particularly when linked to rapid technological change, can fuel grievances that lead to conflict. Cramer [ 34 ] adds that the erosion of job security can exacerbate existing social divisions, leading to the rise of populism and nationalist movements, further destabilising the political landscape.
The consultancy sector, for example, is likely to see AI taking over many tasks traditionally performed by human consultants, such as data analysis, strategic planning, and even decision-making. McKinsey Global Institute [ 29 ] estimates that nearly a third of activities in 60% of all occupations could be automated, with AI making significant inroads into jobs that require analytical thinking and judgment. Similarly, in the finance sector, AI’s ability to conduct complex financial modelling, risk assessments, and even predictive analytics is expected to reduce the need for human financial analysts and advisors [ 30 ]. The impact on these sectors will not only result in job displacement but could also lead to a redefinition of the roles that remain, requiring workers to develop new skills that complement AI technologies [ 31 ].
The first and most significant impact of AI identified in the scenarios is the potential for massive unemployment. There is a 90% probability that AI-driven automation will cause unemployment rates to surge to 40–50%, affecting not only industries heavily dependent on routine tasks, such as manufacturing and logistics, but also sectors traditionally considered more secure, such as consultancy, finance, and broader service industries. Research by Frey and Osborne [ 26 ] initially highlighted the vulnerability of routine jobs to automation, but more recent studies have pointed to the increasing susceptibility of analytical jobs as well. AI’s ability to process and analyse large datasets with speed and accuracy far beyond human capability is expected to impact jobs in consultancy and finance significantly [ 27 28 ].
The scenarios developed through the Delphi process, combined with probabilistic modelling, underscore the significant role that Artificial Intelligence (AI) is expected to play in shaping global dynamics by 2040. AI’s influence is multifaceted, affecting economic structures, societal norms, and security paradigms, each of which has a cascading impact on other critical factors.
The findings of this study provide a comprehensive analysis of the potential impacts of Artificial Intelligence (AI) and geopolitical developments on global dynamics by 2040. The results are presented in two major domains: AI’s impact and geopolitical developments. Each domain encompasses several critical factors that are projected to influence economic, societal, and security-based outcomes on a global scale. The scenarios developed through the Delphi process highlight the cascading effects of these factors, each carrying significant probabilities of occurrence. The analysis identifies key areas where governments and societies are particularly vulnerable, with a strikingly low readiness to address the challenges posed by these developments. The following sections delve into the detailed impact chains for AI and geopolitical developments, exploring the intricate interdependencies and potential outcomes that could shape the future global landscape.
4. Conclusions
This study provides a critical examination of the potential impacts of Artificial Intelligence (AI) and geopolitical developments on global dynamics by 2040. The research employs a robust scenario analysis methodology, integrating a Delphi process with probabilistic modelling to forecast the cascading effects across economic, societal, and security domains. The findings reveal a complex interplay of factors that are poised to shape the future in profound ways, underscoring the urgency of strategic foresight and preparedness.
AI is identified as a transformative force with the capacity to alter the fabric of societies and economies significantly. The study highlights a 90% probability that AI-driven automation will lead to an unemployment surge of 40–50%, affecting not only routine and manual jobs but also, increasingly, analytical roles within sectors such as finance and consultancy. This surge in unemployment is expected to generate immense pressure on governments, which may struggle to manage the socio-economic fallout. The financial and sociological burdens of mass unemployment could potentially overwhelm governmental capacities, leading to a 65% probability of internal conflicts that might spill over into international tensions.
Moreover, the 100% probability that the exponential speed of AI-driven change will outpace existing regulatory frameworks poses significant governance challenges. As AI continues to evolve at an unprecedented rate, governments are likely to find themselves ill-equipped to regulate these technologies effectively, leading to potential market monopolies, unchecked corporate power, and deepening economic inequalities. This governance gap could also hinder innovation, as businesses may face legal uncertainties that slow down the deployment of AI technologies.
The impact of AI on societal structures is equally profound. The study identifies an 80% probability that widespread AI adoption will lead to increased psychological risks, including a surge in depression and anxiety, as individuals struggle to find purpose in an AI-dominated economy. This psychological impact is exacerbated by the 70% probability that societal structures will be unable to adapt quickly enough to the changes imposed by AI, resulting in increased social fragmentation and a decline in community cohesion.
On the environmental front, AI presents a paradox. While there is a 60% probability that AI can help develop sustainability solutions, particularly in optimising energy use and resource management, there is also a 30% probability that early successes in AI-driven sustainability could lead to complacency. This complacency may result in a failure to address the root causes of environmental degradation, ultimately undermining long-term sustainability goals.
The study also underscores the societal risks associated with increased reliance on AI in decision-making processes. The 65% probability that AI will contribute to a loss of trust in human relationships and institutions reflects a broader concern about the erosion of interpersonal trust. As AI becomes more integrated into everyday life, there is a 60% probability that social isolation will increase, leading to a decline in social skills and a weakening of the social fabric.
Geopolitical developments are expected to play an equally critical role in shaping the global landscape. The study identifies a 70% probability that increased nationalisation and the rise of nationalistic movements will lead to greater geopolitical fragmentation. This trend is likely to result in significant disruptions to global cooperation as countries prioritise national interests over collective global action. The resurgence of protectionism and trade barriers, particularly between major powers like the United States and China, could further destabilise global markets and exacerbate economic inequalities.
The ongoing Russia–Ukraine conflict is highlighted as a key geopolitical risk, with a 70% probability that the war will expand, drawing in other countries and escalating into a broader regional or even global conflict. The study warns of the potential for this conflict to disrupt global energy markets, strain international alliances, and lead to significant economic losses. The 70% probability of increased anxieties among global populations as a result of this conflict underscores the far-reaching psychological impact of geopolitical instability.
In the Middle East, Israel’s foreign policy and the potential for regional conflicts pose additional risks to global stability. The study identifies a 75% probability that Israeli actions could lead to increased tensions and the risk of broader conflict in the region. This is compounded by the 70% probability that China’s economic policies and geopolitical strategies will increase global dependencies on Chinese investment and trade, particularly in developing countries. These dependencies could lead to a more polarised world in which economic and political influence becomes increasingly concentrated in the hands of a few powerful states.
China’s rise is also expected to contribute to a 70% probability of increased geopolitical conflicts, particularly in the Asia-Pacific region, where territorial disputes and strategic rivalries could escalate into broader confrontations. The study highlights the potential for these conflicts to disrupt global trade routes, impact international security frameworks, and create new challenges for global governance.
A critical insight from this study is the alarmingly low level of readiness among governments and societies to address the challenges posed by AI and geopolitical developments. The 10% probability of adequate preparedness indicates a significant gap in current strategies, with most governments and institutions being ill-equipped to manage the complex and interconnected risks that lie ahead. This lack of readiness is particularly concerning given the cascading effects identified in the scenarios, in which small initial shocks could lead to large-scale disruptions across multiple domains. The short-term orientation prevalent in Western businesses further exacerbates these challenges. The prioritisation of immediate returns over long-term strategic planning undermines the capacity of both the public and private sectors to develop resilient responses to the evolving global landscape. This short-termism is particularly detrimental in the context of AI and geopolitical shifts, where the rapid pace of change demands forward-looking and adaptive strategies.
The study underscores the need for a paradigm shift in governance, one in which long-term planning and cross-sector collaboration become central to policy-making. Governments must prioritise the building of resilience in their institutions, ensuring that they can adapt to the rapid technological changes and geopolitical shifts that are expected to characterise the coming decades. This includes investing in education and workforce retraining programs to mitigate the social impacts of AI-driven automation and developing robust regulatory frameworks that can keep pace with technological advancements.
Additionally, the study emphasises the need for increased international cooperation to address the geopolitical risks identified in the scenarios. As the world becomes increasingly interconnected, the ability to navigate complex geopolitical landscapes will require coordinated efforts among nations, with a focus on diplomacy, conflict resolution, and sustainable economic development.
This study provides a detailed and nuanced understanding of the potential futures shaped by AI and geopolitical developments. The findings highlight the critical importance of proactive governance, strategic foresight, and global cooperation in managing the risks and leveraging the opportunities presented by these emerging trends. As the global community faces unprecedented challenges, the imperative for robust, adaptive strategies has never been greater. Only through concerted and sustained efforts can we hope to achieve a stable and equitable future, ensuring that the benefits of technological and geopolitical advancements are realised without compromising global security or societal well-being.
| 2024-09-14T00:00:00 |
2024/09/14
|
https://www.mdpi.com/2075-4698/14/9/180
|
[
{
"date": "2023/03/01",
"position": 11,
"query": "AI unemployment rate"
},
{
"date": "2023/03/01",
"position": 43,
"query": "AI regulation employment"
},
{
"date": "2023/03/01",
"position": 10,
"query": "AI economic disruption"
}
] |
Why aren't governments afraid that AI will create massive ...
|
The heart of the internet
|
https://www.reddit.com
|
[] |
There are multiple posts every day in this subreddit saying that AI will replace millions if not hundreds of millions of jobs in a span of just 3-5 years.
|
For the past 3 months, there have been multiple posts every day in this subreddit saying that AI will replace millions, if not hundreds of millions, of jobs in a span of just 3-5 years.
If that happens, people are not going to just sit on their asses at home unemployed. They will protest like hell against the government. Schemes like UBI, although they sound great, aren't going to be feasible in the near future. So if hundreds of millions of people get unemployed, the whole economy gets screwed and there would be massive protests and rioting all over the world.
So, why do you think governments are silent regarding this?
Edit: Also, if the majority of the population gets unemployed, who is even going to buy the software that companies will be able to create in a fraction of the time using AI? Unemployed people will not have money to use fintech products, aren't going to use social media as much (they would be looking for a job ASAP) and wouldn't even shop as much IRL. So would it even be a net benefit for companies and humanity in general?
| 2023-03-01T00:00:00 |
https://www.reddit.com/r/ChatGPT/comments/11t0lz7/why_arent_governments_afraid_that_ai_will_create/
|
[
{
"date": "2023/03/01",
"position": 18,
"query": "AI unemployment rate"
}
] |
|
Artificial Intelligence and Economic Development
|
Artificial Intelligence and Economic Development: An Evolutionary Investigation and Systematic Review
|
https://pmc.ncbi.nlm.nih.gov
|
[
"Yong Qin",
"Business School",
"Sichuan University",
"Chengdu",
"Zeshui Xu",
"Xinxin Wang",
"Marinko Skare",
"Juraj Dobrila University Of Pula",
"Pula"
] |
by Y Qin · 2023 · Cited by 134 — The negative school of thought believes that AI will replace labor and take over human jobs, which will likely lead to unemployed people much faster than ...
|
In today’s environment of the rapid rise of artificial intelligence (AI), debate continues about whether it has beneficial effects on economic development. However, there is only a fragmented perception of what role and place AI technology actually plays in economic development (ED). In this paper, we pioneer the research by focusing our detective work and discussion on the intersection of AI and economic development. Specifically, we adopt a two-step methodology. At the first step, we analyze 2211 documents in the AI&ED field using the bibliometric tool Bibliometrix, presenting the internal structure and external characteristics of the field through different metrics and algorithms. In the second step, a qualitative content analysis of clusters calculated from the bibliographic coupling algorithm is conducted, detailing the content directions of recently distributed topics in the AI&ED field from different perspectives. The results of the bibliometric analysis suggest that the number of publications in the field has grown exponentially in recent years, and the most relevant source is the “Sustainability” journal. In addition, deep learning and data mining-related research are the key directions for the future. On the whole, scholars dedicated to the field have developed close cooperation and communication across the board. On the other hand, the content analysis demonstrates that most of the research is centered on the five facets of intelligent decision-making, social governance, labor and capital, Industry 4.0, and innovation. The results provide a forward-looking guide for scholars to grasp the current state and potential knowledge gaps in the AI&ED field.
The contributions of this study are twofold. Firstly, we position the research perspective at the intersection of AI and economic development. Compared with other investigations, the work in this paper is more contemporary and novel. It helps to establish an understanding of the complexity and interdisciplinary nature of research on the application of AI in economic development. Secondly, the two-phase methodology, i.e., the two-pronged approach of bibliometric analysis and content survey, guarantees the comprehensiveness and reliability of the study (Qin et al., 2022 ). Using advanced bibliometric techniques, the evolution and knowledge structure of the AI&ED field are outlined. Also, the emerging research on AI applied to economic activities is clearly perceived, which helps theory and practice to go hand in hand. In particular, for the different knowledge streams, we deploy qualitative content analysis to discuss key publications to determine which topics and issues are front and center in the context of AI and economic development, and how the different topics are bundled in the knowledge streams.
In moving forward to redress this challenge, this paper attempts, through a combination of bibliometric analysis and literature review, to gain a one-stop overview of the publications’ performance, collaboration patterns and intellectual structure of the AI&ED domain. More pertinently, this study responds to this practical need by answering the following three broad research questions: (1) What is the performance and current status of AI in economic activities and its related fields? (2) Which research themes in the field of AI&ED have received sufficient attention and exploration in recent years in the existing knowledge? (3) Which research agendas should be pursued in this domain in the future? By doing so, we establish an overview of the basic information in the field of AI&ED and its current status and trends, so as to summarize possible knowledge gaps, provide new ideas for investigation and locate areas of expected contribution for subsequent research (Donthu et al., 2021 ).
The growing breadth and fragmentation of topics at the intersection of AI and economic activity have made it increasingly difficult for scholars to attempt a comprehensive understanding of the field. To make matters worse, the complexity of the topic has led to a diversity of insights, generating a wealth of ideas and investigations on the link between AI and economic development. While there have been some reviews of the literature on AI and economic development, the multifaceted nature of the field suggests that this is still far from sufficient (Aghion et al., 2018 ). On the one hand, short-term studies such as these do not necessarily show subtle changes over time. On the other hand, the available reviews are selective in the literature they employ and the range is usually limited to fit the volume and variety of relevant literature. At the same time, it is not easy for scholars themselves to objectively summarize and sort out the literature (Lee & Lim, 2021 ).
However, while people are cheering this inspiring fact, some are expressing their skepticism. Although the widespread application of AI will cause a short-lived economic boost at this stage (Goertzel et al., 2017 ), in the long run, people’s over-reliance on AI is likely to pose some potential threats (McClure, 2017 ). These include unemployment, moral and ethical risks, and the personal privacy concerns that are often mentioned by scholars in the literature (Kak, 2018 ). What is more, the technical bottlenecks in the development of AI technology itself also lead to a large gap between the conception of theoretical research and the blueprint in actual practice. In light of recent events, it is becoming extremely difficult to ignore the collision between AI and economic development (AI&ED). Accordingly, a considerable amount of literature has been published on AI&ED. These studies over the past two decades have provided important information on discussions between AI and economic development. More importantly, the evidence shows the increasing urgency and depth of the intersection between AI and various sectors of economic activity. For instance, to allow the power sector to provide good services at competitive prices, Hernández-Callejo et al. ( 2013 ) designed an architecture model based on artificial neural networks that conducts short-term power load forecasting.
In recent years, talk of artificial intelligence (AI) has been everywhere, seemingly telling us that the arrival of AI is the destiny of the age (Makridakis, 2017 ). Indeed, AI technology is appearing in various forms at all levels of our contact with society, from small conversational robots in daily life to large industry- and government-level assisted offices, and is quietly changing the way of life around the world (Li et al., 2017 ). By convention, AI is described as a sub-discipline of computer science dedicated to the development of data processing systems and the execution of functions that match human intelligence, such as learning, reasoning, and self-improvement (Peres et al., 2020 ). According to Trifan and Buzatu ( 2020 ), AI is machine learning, that is, a neural network trained on a data set. Drive resources, data resources and computational theory are the three core elements that influence the development of AI. In contrast to any of the technologies that have emerged in the past, AI can become better at a particular practical task as time goes on, owing to its unique learning ability. AI is designed to serve humans in making the best decisions. To this end, AI has been incorporated into operating systems in the hope of creating systems that can assist humans or even be utterly AI-driven in their decision-making (Gomes et al., 2020 ). Progressively, AI is becoming indispensable technological support for daily social life and economic activities (Naimi-Sadigh et al., 2021 ). Its tremendous contribution to sustainable economic development in all industries is rapidly becoming evident, leading it to become an instant focus of attention at the industry, academic and even government levels (Heylighen, 2017 ). Arguably, AI-related activities will be the driving force for further economic development and result in fundamental shifts in the structure and approach to production, and in the quantity and quality of consumption (Vyshnevskyi et al., 2019 ).
In the first phase, we utilized bibliometric analysis to provide an objective, but only cursory, understanding of the intrinsic structure and overall extrinsic performance of the AI&ED domain. This macroscopic statistical approach is broad in scope and clearly demonstrates the connections between different attributes, but it does not allow for profound qualitative conclusions to be drawn. In view of this, clusters formed by core knowledge streams in the bibliographic network based on AI&ED publications are reviewed qualitatively and manually in order to summarize the hot spots and gaps in current knowledge on different topics and thus answer specific research questions. Although the traditional process of qualitative literature analysis can be laced with reviewer subjectivity, the benefits of this approach are well recognized (Vallaster et al., 2019 ). Besides, as Gaur and Kumar ( 2018 ) stated, it is the combination of content analysis with other methods that facilitates its tremendous potential. Undoubtedly, bibliometrics perfectly matches the traditional content review (Ante et al., 2021 ). The complementary content analysis allows us to identify hot spots and blind spots in the various research tributaries in AI&ED, thus prompting subsequent research directions to be discovered.
To fulfill the objectives of the quantitative analysis and visualization of the retrieved documents, we need to adopt some advanced bibliometric tools. Bibliometrics is based on quantitative methods designed to identify, describe, and evaluate published research (Bretas & Alon, 2021 ; Garfield, 1979 ). Its use of scientific mapping and graphical presentation of reproducible statistics reduces the subjective bias of literature reviews on the one hand, and overcomes the limitations of diagnosis and the error-prone nature of manual summarization on the other (Su & Lee, 2010 ; Tariq et al., 2021 ; van Eck & Waltman, 2010 ). Gradually, the ideas and theories of bibliometrics have become an invaluable means for many scholars to explore and discover new knowledge in academic research (Qin et al., 2020 ; Wang et al., 2020a , b , c ). In response to this trend, many advanced algorithms and sophisticated visual analysis tools have been developed to help scholars quickly perform bibliometric analysis. In this paper, we apply two mature bibliometric tools, Bibliometrix and VOSviewer. Bibliometrix is a powerful open-source tool developed by Aria and Cuccurullo ( 2017 ), which supports a recommended workflow for comprehensive science mapping. By using this tool, we accomplished almost all the bibliometric parts of this paper, that is, the performance analysis and science mapping analysis of the collected records, including publication trend, most relevant sources, most influential papers and authors, conceptual structure, and intellectual and social structure. As an equally capable analysis tool, VOSviewer is more focused on the graphical representation of bibliometric maps (van Eck & Waltman, 2010 ). With the assistance of its bibliographic coupling procedure, this paper achieves an in-depth exploration and examination of the intellectual structure of the core publications of AI&ED.
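For readers unfamiliar with the technique, bibliographic coupling scores two documents by the number of references they share; documents with high coupling strength are grouped into the same knowledge stream. A minimal sketch with invented reference lists:

```python
# Bibliographic coupling strength = number of references shared by two papers.
papers = {
    "paper_A": {"ref1", "ref2", "ref3", "ref7"},
    "paper_B": {"ref2", "ref3", "ref9"},
    "paper_C": {"ref4", "ref5"},
}

def coupling_strength(a, b):
    return len(papers[a] & papers[b])

pairs = [("paper_A", "paper_B"), ("paper_A", "paper_C"), ("paper_B", "paper_C")]
for a, b in pairs:
    print(a, b, "share", coupling_strength(a, b), "references")

# Documents with high coupling strength end up in the same cluster when the
# coupling network is partitioned (as tools such as VOSviewer do).
```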
The implementation of a bibliometric analysis can empower us to identify the dynamic nature of the AI&ED research field (Qin et al., 2021 ). We chose the most popular and authoritative Web of Science (WoS) Core Collection database as the starting point of the project. In line with the approach of most scholars at this phase, we defined the field boundary using a set of keywords that are coherent with the purpose of the study. To ensure that the final search results include as much of the desired literature as possible, broader search strings were initially identified, i.e., TS = (“Artificial Intelligence” OR “Machine learning” OR “Deep learning” OR “Intelligent agents” OR “Neural networks” OR “Data mining” OR “Natural language processing” OR “Pattern recognition”) AND TS = (“Economic development” OR “GDP” OR “Economy”). In parallel, to ensure the quality of the records, only peer-reviewed academic journal articles were considered for this study. Only publications written in English were taken into consideration. Besides, to guarantee complete annual data, we limited the search span to the period from 1900 to 2021. The search was carried out in March 2022, and a total of 2522 items matched these constraints and were initially included in this examination. Prior to the quantitative statistical analysis, we manually checked the titles, abstracts and keywords of the identified documents, and irrelevant publications were removed. In the end, 2211 records were retained for this investigation.
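A rough local approximation of that Boolean topic search, applied to exported records rather than the WoS index itself, might look like the following; the field names of the record dictionary are assumptions for illustration.

```python
AI_TERMS = ["artificial intelligence", "machine learning", "deep learning",
            "intelligent agents", "neural networks", "data mining",
            "natural language processing", "pattern recognition"]
ED_TERMS = ["economic development", "gdp", "economy"]

def matches_query(record):
    """Approximate the WoS TS=(AI terms) AND TS=(ED terms) query on an
    exported record with 'title', 'abstract' and 'keywords' fields."""
    text = " ".join([record.get("title", ""),
                     record.get("abstract", ""),
                     " ".join(record.get("keywords", []))]).lower()
    return any(t in text for t in AI_TERMS) and any(t in text for t in ED_TERMS)

sample = {"title": "Machine learning and GDP forecasting",
          "abstract": "We study economic development using deep learning.",
          "keywords": ["forecasting"]}
print(matches_query(sample))  # True
```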
With consideration to the prevalence of cooperation and linkages between authors from different regions or countries, we conduct a collaboration-based assessment of international cooperation. By performing the Collaboration WorldMap function in the Bibliometrix and setting the minimum edges as three, Fig. 8 sheds light on the social structure within the AI&ED domain. Overall, there are 627 pairs of country/region key cooperation on this map. At the same time, the higher the productivity of a country or region, the darker its color is, while the connection of the lines indicates the presence of collaboration, and the more robust the line, the higher the rate of collaboration. The assessment shows that China, the USA, and India are among the world leaders in terms of individual country or regional contributions with 624, 412, and 210 publications, respectively. Another interesting finding shows that scholars from the USA and China are fostering the strongest collaborations, and they are building strong ties with their counterparts around the world. In fact, the highest rate of collaboration between the USA and Chinese scholars has also been maintained, with a total of 66 co-authored articles. As it clearly appears, there are still several authors from different countries or regions who are not involved in this area of communication and collaboration.
After examining the conceptual structure of the AI&ED field, the intellectual and social structure is further revealed in this part. To be specific, we visualize the co-citation network and the country collaboration map in the AI&ED field. Co-citation analysis examines the cited sources, which allows us to quickly capture the mainstream source communities. Fig. 7 outlines the three source clusters amongst the 50 most influential sources. In the first cluster (shown in red), 16 sources are detected, and high-quality journals such as “Neurocomputing,” “Expert Systems with Applications,” and “Decision Support Systems” occupy the main position. Eighteen sources make up the largest Cluster 2 (shown in blue), in which the representative sources include “Applied Energy,” “Renewable and Sustainable Energy Reviews,” “Energy,” and so forth. In the last cluster (shown in green), 16 sources are more dispersed in the figure, with “Nature” and “Science” journals occupying the center of the diagram.
Therefore, five prevalent themes are finally identified in the diagram. Research related to “artificial intelligence,” “big data,” and “Internet of things” falls in the first quadrant, suggesting research on these topics dominates and profoundly influences other topics in the AI&ED field. The developed but isolated theme in the second quadrant, namely “neural network,” “optimization,” and “energy management,” should be given sufficient attention to break down research silos. Interestingly, related studies on “machine learning”, “data mining” and “classification” are recognized as disappearing or emerging themes, which to some extent foreshadows future research frontiers. Not surprisingly, “forecasting” emerges as a general and broadly researched theme. How AI boosts economic development and finding effective paths to it will be a topic of continuous discussion in the future.
Several dominant themes usually take shape as a particular research domain develops. To this end, Bibliometrix provides a strategic diagram function to identify themes in different phases based on their centrality and density rankings. On the basis of a co-occurrence analysis of authors' keywords, the comprehensive strategic diagram of AI&ED research from 1986 to 2021 is constructed as presented in Fig. 6, giving a nutshell overview of the dominant research topics on AI&ED. The X-axis (centrality) and Y-axis (density) split the two-dimensional space into four regions (i.e., quadrants). In this setting, four types of themes with different meanings are clearly distinguished (Cobo et al., 2011 ). Centrality gauges the level of inter-cluster interaction, whereas density measures the level of intra-cluster cohesion (Forliano et al., 2021 ). Themes that fall in the first quadrant (upper-right) are usually well developed and significant in shaping the field of study; they have high centrality and density values and are usually referred to as motor themes. A theme characterized by low centrality and high density is positioned in the second quadrant (upper-left) as a highly developed but isolated theme. Diametrically opposed to the first quadrant, themes in the third quadrant (bottom-left) are low in both centrality and density, and disappearing or emerging themes gather here. Lastly, basic and transversal themes usually lie in the fourth quadrant (bottom-right), with high centrality and low density (Lam-Gordillo et al., 2020 ). Each theme cluster is composed of a number of keywords, and its name is determined by the most frequent keyword. The higher the keyword frequency of a theme, the larger the area of its circle.
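As a rough illustration of the quantities behind the strategic diagram (the exact Callon weighting used by Bibliometrix includes additional scaling factors), centrality and density for a keyword cluster can be computed from a co-occurrence matrix as sketched below; the matrix and cluster assignments are invented.

```python
import numpy as np

# Toy keyword co-occurrence matrix (6 keywords) and three invented clusters.
cooc = np.array([
    [0, 9, 4, 3, 2, 1],
    [9, 0, 2, 1, 1, 0],
    [4, 2, 0, 7, 1, 0],
    [3, 1, 7, 0, 0, 0],
    [2, 1, 1, 0, 0, 5],
    [1, 0, 0, 0, 5, 0],
])
clusters = {"theme A": [0, 1], "theme B": [2, 3], "theme C": [4, 5]}

def centrality_density(members, matrix):
    members = set(members)
    internal = sum(matrix[i, j] for i in members for j in members if i < j)
    external = sum(matrix[i, j] for i in members
                   for j in range(matrix.shape[0]) if j not in members)
    return external, internal / len(members)  # (centrality, density)

for name, members in clusters.items():
    c, d = centrality_density(members, cooc)
    print(f"{name}: centrality={c}, density={d}")
# Plotting each theme's (centrality, density) pair against the median of each
# axis reproduces the four quadrants of the strategic diagram described above.
```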
Table 1 lists the TC and TC/Y values for the top 10 documents, along with other useful information on each. Evidence from Table 1 indicates that more than half of these publications have been cited more than 300 times in total. Two further observations can be drawn from the table. First, five of these documents were published before 2010 and five after. Strikingly, the article titled “Automated detection of COVID-19 cases using deep neural networks with X-ray images”, published in 2020, has already earned 622 citations. The sudden appearance of the novel coronavirus in 2019 had a profound impact on the lives and health of people all over the world; to accurately detect and diagnose people potentially suffering from this disease, an automated assisted-diagnosis tool named DarkCovidNet, based on deep neural networks, was developed by Ozturk et al. ( 2020 ). Furthermore, the article “Brain Intelligence: Go beyond Artificial Intelligence”, despite being published as recently as 2018, has 409 citations (Lu et al., 2018 ). Their work introduced a novel technology concept named brain intelligence to break through many limitations of extant AI. Second, in terms of research content, AI technologies have penetrated various areas of the economy, such as finance, energy and machinery, and are playing an increasingly essential role.
Citations are regarded as an appropriate way to measure an article's influence and authority in the field (Wang et al., 2021a ). Against this backdrop, highly cited documents in the AI&ED field over the period 1986 to 2021 are assessed, and the top 10 cited publications are exhibited in Table 1. Nevertheless, the total number of citations (TC) does not by itself determine the quality of an article, and the time factor usually needs to be considered. Thus, the average number of citations received each year (TC/Y) is also commonly deployed as an effective metric of an article's impact.
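As a small worked example (the exact denominator convention used in the paper is an assumption here), TC/Y simply divides total citations by the number of years since publication:

```python
def citations_per_year(total_citations, pub_year, current_year=2022):
    # Assumes the publication year itself counts as one full year.
    return total_citations / (current_year - pub_year + 1)

# The Ozturk et al. (2020) article with 622 total citations, evaluated in 2022:
print(round(citations_per_year(622, 2020), 1))  # 207.3
```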
Overall, the 2211 selected documents are spread across 1096 different sources. Figure 3 sets out the international panorama of the top 20 most relevant sources in the AI&ED field. The most productive source is “Sustainability”, with a total of 61 publications through 2021. “IEEE Access” ranks second with 60 publications, followed by “Energies” (39 publications). With the same number of 39 publications, “Expert Systems with Applications” takes fourth position. Investigators concerned with the AI&ED topic therefore need to be particularly attentive to these sources. Moreover, Fig. 4 shows the year-wise growth of the top 5 sources over the period 1986–2021. The temporal evolution of these sources demonstrates that most of them follow a growing trend. In particular, “IEEE Access,” “Sustainability,” “Energies,” and “Journal of Intelligent & Fuzzy Systems” have become especially productive during the last five years, whereas “Expert Systems with Applications” exhibits a slower growth trajectory in recent years.
The 2211 documents included in the final dataset yield the annual scientific production in the field of AI&ED depicted in Fig. 2. Studies on AI&ED started in 1986, when Yamashiro published a seminal work presenting online secure-economy preventive control of power systems based on pattern recognition (Yamashiro, 1986 ). Although the issue has received attention from the scientific community ever since, publication output did not grow dramatically until approximately 2016, with the steepest rise during the period from 2018 to 2021. The exponential growth pattern of recent years suggests two things. On one hand, the extensive application of AI to economic development and related areas is a very recent phenomenon. On the other, there is a lively debate in management research over AI's role in achieving economic development. Judging by the trend visible in Fig. 2, research on AI&ED is still immature and in its infancy. As AI technology deepens, we can expect a great deal of future research dedicated to further enhancing domain knowledge in economic research through AI.
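The exponential reading of Fig. 2 can be checked by fitting a line to the logarithm of the annual publication counts; the counts below are placeholders, not the figure's actual values.

```python
import numpy as np

# Placeholder annual publication counts for the last years of the period.
years = np.array([2016, 2017, 2018, 2019, 2020, 2021])
counts = np.array([80, 120, 190, 300, 480, 700])

# If counts grow exponentially, log(counts) is roughly linear in the year:
# counts ~ exp(a * year + b), so exp(a) is the implied annual growth factor.
a, b = np.polyfit(years, np.log(counts), 1)
print(f"implied annual growth factor: {np.exp(a):.2f}x")
```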
AI and Economic Development
Bibliographic coupling occurs when two publications cite a third, common publication in their bibliographies (Wang et al., 2021b). As a similarity measure, it is often used to cluster similar research streams: the stronger the coupling, the more closely related the topics and content of the two publications. Its significant difference from co-citation analysis is that bibliographic coupling can better identify the distribution of recent research topics and current trends in AI&ED, which can inform thinking about future research (van Oorschot et al., 2018). Thus, with the assistance of the VOSviewer tool, Fig. 9 visualizes the bibliographic coupling network of the AI&ED literature to detect similar subject areas and to reveal the orientations of core researchers.
Since the bibliographic network generated from the initial 2211 publications does not by itself yield a manageable number of valid clusters, we set some filtering conditions and adjusted several parameters to derive a number of clusters that can be analyzed. Specifically, to obtain core insights and capture closely linked research results within each cluster, we eliminated unconnected items so as to show the largest set of connected items. In addition, we adjusted the minimum cluster size from the default of 1 to 12, which makes the final set of clusters more concentrated. We also fine-tuned the final rendering of the graph by changing the repulsion parameter to -1 while leaving the attraction parameter at its default. As a result, Fig. 9 shows five highly distinguishable clusters, each highlighted in a different color. In what follows, this paper reviews these five relatively independent research streams in detail. The five broad research topics are: AI supports intelligent decision-making ("AI supports Intelligent Decision-Making" subsection), AI empowers social governance ("AI Empowers Social Governance" subsection), AI enhances labor and capital ("AI Enhances Labor and Capital" subsection), AI accelerates Industry 4.0 ("AI Accelerates Industry 4.0" subsection) and AI fuels innovation ("AI Fuels Innovation" subsection).
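The coupling strength underlying Fig. 9 is simply the number of references two papers share; the sketch below illustrates the computation on invented reference lists (clustering and layout, as done in VOSviewer, are not reproduced).

```python
import itertools

# Invented reference sets for three hypothetical papers.
references = {
    "paper_A": {"Ozturk 2020", "Lu 2018", "Frey & Osborne 2017"},
    "paper_B": {"Ozturk 2020", "Lu 2018", "Peres 2020"},
    "paper_C": {"Gasteiger & Prettner 2017", "Frey & Osborne 2017"},
}

coupling = {}
for (p1, r1), (p2, r2) in itertools.combinations(references.items(), 2):
    shared = len(r1 & r2)  # number of references cited by both papers
    if shared:
        coupling[(p1, p2)] = shared

print(coupling)  # {('paper_A', 'paper_B'): 2, ('paper_A', 'paper_C'): 1}
# Clustering this weighted network, with a minimum cluster size and layout
# parameters as described above, yields the research streams shown in Fig. 9.
```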
AI supports Intelligent Decision-Making

In this cluster, the main research focus is how to use AI techniques to maximize successful decision-making in economic problems. Intelligent decision-making can be broadly understood as applying AI's knowledge representation and reasoning processes to decision theory, drawing on theories and methods from management, computer science and related disciplines for analysis and comparison, and thus providing intelligent aid to managers in making the right decisions (Niu, 2018). The likelihood that intelligent decisions will eventually be realized, however, depends largely on the quality of the underlying predictions. Forecasting uses the historical data on a phenomenon, together with scientific methods and logical reasoning, to estimate and judge its future development and to seek the laws governing that development. In recent years, the recognition that correct predictions (or forecasts) lead to successful decisions, and thus to maximum economic benefit, has increased interest in predictive modeling. Indeed, in contrast to traditional econometric techniques, AI technology, with its powerful computing capability, has injected new life into scientific forecasting, offering more feasible ideas and solutions for forecasting methods. It also significantly improves the accuracy and reliability of forecasts and provides decision-support capabilities for various industries that go beyond traditional statistics-based analysis (Binner et al., 2004). As a consequence, AI-based predictive algorithms are increasingly being adopted across areas of economic activity. Energy is of strategic importance to the development and social welfare of any economy (Cen & Wang, 2019), and effective forecasting of energy demand, consumption and prices bears directly on the compatibility between the economy and the environment. For example, Ardakani and Ardehali (2014) developed optimal regression and ANN (artificial neural network) models for predicting EEC (electric energy consumption) based on several optimization methods, examined the effects of different types of historical data on the accuracy of EEC prediction, and then made long-term predictions for two different types of economies, Iran and the United States. To improve the accuracy of oil market price prediction, Cen and Wang (2019) used Long Short-Term Memory, a representative deep learning model, to fit crude oil prices. Moreover, swarm intelligence approaches, including artificial bee colony (ABC) and particle swarm optimization (PSO) techniques, were introduced to estimate electrical energy demand in Turkey (Kıran et al., 2012). Also for Turkey, Uzlu et al. (2014) applied an ANN model with the TLBO (teaching–learning-based optimization) algorithm to estimate energy consumption, which likewise showed good prediction performance. However, a single model cannot always meet the requirements of time-series prediction and fuel consumption variation (Liu et al., 2016), and the large number of parameters involved in energy consumption makes its forecasting a complex and challenging task. Combining the best available predictive models is the most straightforward response, and it has proven effective (Li et al., 2018).
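None of the cited models is reproduced here. As a minimal, hypothetical illustration of the autoregressive framing that many of these forecasting studies share, the next value of a consumption series can be predicted from a window of past observations with a small neural network:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Invented monthly energy-consumption series (arbitrary units).
series = np.array([50, 52, 55, 60, 58, 63, 67, 70, 74, 73, 78, 83, 88], dtype=float)

# Build (lag window -> next value) training pairs.
window = 3
X = np.array([series[i:i + window] for i in range(len(series) - window)])
y = series[window:]

model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000, random_state=0)
model.fit(X, y)

forecast = model.predict(series[-window:].reshape(1, -1))
print(f"forecast for the next period: {forecast[0]:.1f}")
```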
AI algorithms have also been used to explore the interrelationship between energy activities and real economic fluctuations. In different domestic and international environments, different incentives behind oil price shocks can produce different shocks with different macroeconomic impacts. In response to this problem, Ju et al. (2016) proposed an ontology-supported, case-based reasoning approach to an incentive-oriented AI early warning system, namely an early warning system for the relationship between oil price shocks and the economy, to predict the linkage between macroeconomic changes and oil price shocks in China. Furthermore, the economic dependence between urban development policies and energy efficiency improvement was revealed by building a neural network model (Skiba et al., 2017). Beyond the energy sector, other areas involved in economic development are also actively incorporating AI technologies to achieve the best forecasting results, such as the spatial prediction of land subsidence susceptibility (Arabameri et al., 2020), the prediction of the standardized precipitation evapotranspiration index (Soh et al., 2018) and the prediction of the monthly closing prices of major USA indices (Weng et al., 2018). In fact, forecasting can also be regarded as the process of filling in missing information, i.e., using the information already collected to generate information that we do not yet have or expect to obtain. Based on the vast amount of available data, AI technology can quickly and efficiently make diagnoses or judgments that help people reach the best decisions in a short period of time, minimizing economic risk at the organizational, industry and national levels. At the end of 2019, the sudden onslaught of the novel coronavirus not only posed a huge threat to people's lives and health, but also dealt a heavy blow to economic development worldwide. As the epidemic spread around the world, diagnosing infected patients became one of the most urgent tasks. For this reason, radiological images have been widely used for the detection of COVID-19, and the integration of AI technology gives the diagnosis of patients with COVID-19 infection a significant advantage (Tsiknakis et al., 2020). For instance, Ozturk et al. (2020) presented a new model for the automatic detection of COVID-19 from raw chest X-ray images, achieving an accuracy of 98.08% for binary classification and 87.02% for multi-class classification. It is worth pointing out that, despite the widespread use and effectiveness of AI in fraud detection, the emergence of new fraud vectors has posed severe challenges to fraud detection within the AI framework (Ryman-Tubb et al., 2018). Besides, the boom in the fitness industry in recent years has created a critical need for scientific and practical instructional programs; real-time monitoring and guidance based on exercisers' daily fitness data, supported by AI technology, has therefore become a trend for future fitness applications (Yong et al., 2018). The development of computers and information technology gave rise to the decision support system (DSS) in the mid-1970s to help decision-makers improve the level and quality of their decisions, and the rapid advancement of AI technology has raised hopes of making traditional DSS truly intelligent (Pinter et al., 1995).
Later, DSS was combined with AI and expert system technologies, and the prototype of the intelligent decision support system (IDSS) was outlined, enabling the original systems to cope with more complex and uncertain decision scenarios. Since then, IDSS has been widely studied by scholars and applied in many areas of economic activity. For example, to achieve effective management and rapid response to different customer needs in transportation enterprises, He et al. (2014) proposed a general framework that integrates intelligent technologies as components into the architecture of a service-oriented group decision support system, using AI technology to resolve conflicts in distributed group decision-making. Multi-agent system theory and techniques in AI likewise provide essential insights for the development of DSS. To address complex issues in agricultural development, Xue et al. (2013) designed an agent-based regional agricultural economy decision support system (RAEDSS) to simulate and evaluate the impact of policies on rural development under different scenarios. Considering that intelligent decision-making should be able to explore and handle uncertain environments, scholars have also combined fuzzy logic with IDSS to enhance its knowledge representation and reasoning capabilities. Using fuzzy cognitive maps, Albayrak et al. (2021) developed an IDSS to achieve high honey yields. In addition, uncertain production goals are extremely common in production plants, and this uncertainty can render conventional management ineffective; in view of this, Rodriguez et al. (2020) proposed an IDSS for production planning based on machine learning and fuzzy logic to solve the closed-loop supply chain management problem.
AI Empowers Social Governance

As AI technology deepens and its application scenarios multiply, new governance concepts and governance forms are being called for in society. Overall, the new pattern of AI-powered social governance is still at a preliminary, exploratory stage (Mania, 2022). What is certain, however, is that AI technology is now used more widely than ever. These applications are reflected not only in everyday functions such as image analysis, face recognition and big data analytics, but also, increasingly, in major social rulings and applications involving human emotional cognition (Coglianese & Lehr, 2017; Huang et al., 2019). At the city level, AI's strength in processing big data has contributed to a major change in the urban fabric, a prospect greatly facilitated by the emerging smart city concept, which combines sensors and big data through the Internet of things (Allam & Dhunny, 2019). The core idea of the smart city rests on the underlying support of big data, which requires not only very large volumes of data but also the integration of multi-dimensional data. This is well evidenced by the impact of the COVID-19 pandemic, which has prompted many scholars to reflect on vital issues and potential complexities for organizations and societies (Dwivedi et al., 2020; Iandolo et al., 2021), particularly the controversy over data sharing related to the concepts of urban health and safe cities (Allam & Jones, 2020). On one side, strengthening standardized protocols to increase data sharing would not only support efficient epidemic prevention and control, but also facilitate the further construction and design of smart cities and lead to better global understanding and management of them. On the other side, it is undeniable that sharing urban health data has the potential to affect the economy and politics of a country or region. Besides, as AI continues to permeate all aspects of human society, some administrative agencies are attempting to employ intelligent algorithms to make government governance more intelligent. On a technical level, this is entirely possible: Hildebrandt (2018) pointed out that data-driven artificial legal intelligence may be much more successful in predicting the content of positive law. Likewise, profound developments in information technology are changing the way banks work, relying more on reliable quantitative information from online sources and credit bureaus and contributing to AI-based decision-making (Jakšič & Marinc, 2019). Finally, in recent years AI technology has also been quietly changing the face and operation of other social sectors such as education (Mehmood et al., 2017; Williams, 2019), marketing (Rust, 2020) and accounting (Moll & Yigitbasioglu, 2019) in pursuit of greater economic efficiency. However, we should also recognize that while AI accelerates economic development and raises social governance to a new level, it brings human society additional challenges in terms of legal norms, ethics and governance guidelines that should not be underestimated. First, as mentioned earlier, big data gives AI the valuable data it needs: generally speaking, the larger and more multi-dimensional the data, the more promising the eventual performance of intelligent algorithms, which inevitably involves the collection, analysis and application of individual-level data.
Scholars have long debated the protection of personal data and concerns related to privacy (Kak, 2018). On the one hand, some scholars have called for striking as much of a balance as possible between data protection and data-related concerns (Dwivedi et al., 2020). On the other hand, some have argued that no one owns data and that property-rights protection of data is not an appropriate way to promote better privacy, more innovation or technological progress, but is more likely to stifle freedom of expression, freedom of information and technological progress; the case for property rights to data is therefore not compelling, and there is no need to create new property rights for data (Determann, 2018). Secondly, Allam and Newman (2018) cautioned against the blind acceptance of technology and encouraged its more careful embedding into the social fabric. Such a caution stems in large part from the ethical issues of fairness, responsibility and subjectivity that AI can raise. Research in AI can be roughly divided into three stages: mechanical AI, thinking AI and feeling AI. While mechanical AI is already mature and thinking AI is developing rapidly, the highest level, feeling AI, is progressing slowly (Huang et al., 2019). Since AI at this stage does not possess self-awareness, AI platforms are not neutral technologies; they are designed with a purpose and can exhibit bias and contribute to human rights violations (Bourne, 2019). Additionally, government agencies are beginning to adopt AI technology widely for constitutional democracy and administrative decision-making, and concerns have grown over digital robots replacing the government sector. At the same time, reliance on AI increasingly challenges human subjectivity. In response to these challenges, human workers must pay more attention to extending the empathic and emotional dimensions of their work (Huang et al., 2019). More broadly, creating a new culture that incorporates the principles of democracy, the rule of law and human rights into the design of AI, and that considers diversity in the design and implementation of algorithms, is a viable solution for the future (Nemitz, 2018; Turner Lee, 2018).
AI Enhances Labor and Capital

This cluster centers on the debate over whether, and how, AI enhances the quality of labor and the quality of capital. In the traditional economic development model, three production factors, i.e., capital, labor and total factor productivity (TFP), determine the development dynamics of the economy. When the two physical factors, capital and labor, increase in quantity or are used more efficiently, they contribute to economic development; an increase in TFP due to technological or innovative advances likewise generates economic development. Collectively, it seems an undisputed fact that the widespread use of AI promotes economic development, and a large body of empirical literature supports this view (Chattopadhyay & Rangarajan, 2014). First, the development of AI has significantly reduced the cost of traditional automation while opening the era of intelligent automation (von Joerg & Carlos, 2022). Although traditional automation technologies have produced dramatic increases in labor productivity, their dependence on specific and homogeneous settings allows them to perform only simple and repetitive tasks. In contrast, the era of intelligent automation has created a new kind of virtual labor force, which can be considered a new factor of production. On the one hand, this diminishes the dependence on manual labor at the current stage of production and triggers the substitution of capital for labor (Autor, 2015). On the other hand, thanks to its self-learning and self-renewal characteristics, AI can effectively meet the complex labor needs of the many automated jobs in real life (Bahrammirzaee et al., 2011). In particular, this change in the structure of production factors will rapidly produce high-end labor, which in turn will significantly boost economic development (Vivarelli, 2014). Second, powerful and innovative AI technologies have enhanced the efficiency of existing capital and labor to an unprecedented degree, while also supplementing and improving the skills and capabilities of labor and physical capital. In fact, in addition to the substitution relationships mentioned above, there are also many complementary relationships between AI and human intelligence (Huin et al., 2003). With human–machine collaboration, workers' productivity can be effectively utilized and extended, and workers can be encouraged to focus on the areas they are good at and thus do more creative work. Such human–machine integration has led to rising labor productivity (Wolff, 2014). For example, accurate estimation of the local scour depth around bridge piers is crucial for engineering design and management and places high demands on the professionalism of bridge engineers. To this end, Chou and Pham (2017) developed a new hybrid smart artificial firefly colony algorithm-based support vector regression model to predict the scour depth near bridge piers; the results showed that the model could effectively assist the engineers concerned in constructing safe and cost-effective bridge substructures. In terms of improving capital quality, as described in the "AI Supports Intelligent Decision-Making" section, AI is able to model, predict and ultimately optimize decisions in real time from the massive amounts of data generated in the production process.
It can largely avoid the problems of low accuracy, low integration and low adaptability in production activities and make the production process intelligent, thus improving capital efficiency. This is particularly evident in the manufacturing industry, where AI has become an important driver of intelligent manufacturing technology innovation, promoting economic development and improving people's quality of life. Research results showed that the adoption of highly interconnected and deeply integrated intelligent production lines would lead to significant improvements in manufacturing productivity as well as a corresponding reduction in the number of system instructions (Hu et al., 2018). Finally, AI's ability to increase TFP across the board is well documented, and some existing studies even categorize it as a new factor of production that will further fuel economic development in the future. Nevertheless, in the long run, scholars are divided on whether the progress of AI will promote economic development in a sustainable way. The negative school of thought believes that AI will replace labor and take over human jobs, likely displacing workers much faster than productivity gains can absorb them (Vermeulen et al., 2018). With the labor market disrupted, income inequality and mass unemployment among workers could well produce a future of high unemployment and even economic stagnation (Frey & Osborne, 2017). As a consequence, AI's boost to the economy is seen as unsustainable (Vermeulen et al., 2018). As noted by Gasteiger and Prettner (2017), dependence on AI could eventually lead to an economic rout, as the use of automation inhibits wage development and thus investment growth. In contrast, the positive school of thought argues that while AI can rapidly replace labor, the AI revolution will not necessarily have a fatal impact on employment. On the one hand, the development and application of AI technology still require substantial human resources for research, development and design, and the operation and maintenance of AI equipment cannot be separated from the participation of senior technical personnel. This demand for high-end human capital creates higher economic value, though it of course also sets higher requirements for the quality of the future workforce (Chen et al., 2009). On the other hand, the skill requirements of jobs are dynamic, and the impact of AI is likely to generate new labor demand and new job opportunities (Frank et al., 2019). The historical experience of the industrial revolution suggests that, at this stage, human beings are likely to pass through a short transitional period of frictional unemployment, followed by economic prosperity (Vermeulen et al., 2018). Above all, some observers consider that the impact of AI on economic development and employment depends heavily on institutions and policies, and that inappropriate labor market and education policies may reduce the positive impact of AI and automation on employment (Aghion et al., 2019).
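The production-factor framing that opens this cluster corresponds to the textbook Cobb-Douglas growth-accounting identity (a standard formulation, not one taken from the reviewed studies), in which AI-driven gains can enter through capital K, labor L, or total factor productivity A:

```latex
Y = A\,K^{\alpha}L^{1-\alpha},
\qquad
\frac{\dot{Y}}{Y} = \frac{\dot{A}}{A} + \alpha\,\frac{\dot{K}}{K} + (1-\alpha)\,\frac{\dot{L}}{L}
```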
AI Accelerates Industry 4.0

By common consensus, the Industry 1.0 era was marked by James Watt's steam engine, which exponentially increased the efficiency of production technologies that had previously relied on human and animal labor. The widespread availability of electricity then inaugurated the era of Industry 2.0, in which factory productivity developed and improved further. The Industry 3.0 era witnessed the advent of computers and automation (Syam & Sharma, 2018). In 2013, when the German government introduced the concept of Industry 4.0, it instantly attracted the attention of many countries and industrial giants (Carayannis et al., 2022). Industry 4.0 can be characterized as the emergence of cyber-physical systems involving entirely new capabilities for people and machines (Mhlanga, 2020). Even though these capabilities build on the previous phase of Industry 3.0, the continued incorporation of extraordinary technologies has allowed a long optimization of the third, computerized industrial revolution (Sharabov & Tsochev, 2020). At the same time, the technology embedded in Industry 4.0 has created a new way of life. Underpinned by these disruptive technological advances, Industry 4.0 aims to blur the boundaries among the physical, digital and biological worlds (Huynh et al., 2020). Simply put, it is expected to establish a highly flexible, personalized and digital production pattern for products and services in which the original industry boundaries are broken down and the industry chain is redefined (Sharabov & Tsochev, 2020). Journal articles and related reports in the context of Industry 4.0 indicate a huge demand for reliable and usable AI in real-world applications (Lee & Lim, 2021). It is foreseeable that AI will play an integral role in the future production paradigm of Industry 4.0 (Skrop, 2018), and there seems to be a consensus among social scientists that AI is the key technology of the fourth industrial revolution (Liu et al., 2021). The principal features of Industry 4.0 are technological transformation, digital revolution and AI (Wang et al., 2020a, b, c). More precisely, Sanz et al. (2021) pointed out that intelligent and automated AI-driven solutions should be included in industrial processes if they are to remain competitive in the Industry 4.0 paradigm, which fundamentally affects manufacturing. For this reason, a great deal of research has examined how to combine and embed AI into the existing Industry 4.0 manufacturing value chain (Peres et al., 2020). To meet Industry 4.0 manufacturing standards, Nasr et al. (2020) proposed a hybrid adaptive neuro-fuzzy inference system (ANFIS) based on a multi-objective particle swarm optimization approach to obtain optimal combinations of milling parameters and matching rates that minimize feed force, depth force, and surface roughness. An artificial neural network model for optimizing the dynamic behavior of robotic arms was designed to improve the sustainability of Industry 4.0 (Azizi, 2020). Furthermore, researchers have identified further roles for AI in Industry 4.0, such as predictive analytics, predictive maintenance, industrial robotics, inventory management and computer vision (Sharabov & Tsochev, 2020). Collectively, industrial AI excels in five dimensions: infrastructures, data, algorithms, decision-making, and objectives (Peres et al., 2020).
There is no doubt that AI is central to the factory of the future, driven by the Industry 4.0 vision and reflected in its grand blueprint (Bécue et al., 2021). From an industrial perspective, AI can be viewed as an enabler that allows systems to sense their environment, handle the data they acquire, address complicated tasks and learn from experience to enhance their ability to tackle particular challenges (Peres et al., 2020). While a high degree of autonomy is one of the core requirements for the future of Industry 4.0, the injection of additional human intelligence may be more beneficial to the operation of future factories, and this remains true at least at the current stage (Peres et al., 2020). In this regard, systems with different levels of autonomy are better suited to the differentiated needs of factories today. From the above description, we can see that the basic concept of Industry 4.0 lies in the organic combination of hardware and software so as to build a smart factory in which people, machines and resources communicate and collaborate with each other (Dopico et al., 2016). Currently, Industry 4.0 is a common trend in international development, bringing new opportunities for the economic expansion of many countries (Pham-Duc et al., 2021). However, truly implementing the Industry 4.0 framework in industrial manufacturing processes is not an easy task (Sanz et al., 2021). The realization of this digital revolution is costly, and the cost is sometimes impossible even to quantify (Trifan & Buzatu, 2020). In fact, some question whether the era of Industry 4.0 will ever fully arrive, because the space of possible events is infinite and actual software and hardware will never cover it (Vogt, 2021). Besides, despite its potential, industrial AI requires large amounts of training data and computing power, which leaves it on a precarious footing as well (Sharabov & Tsochev, 2020). What is worse, real factory environments pose unique and difficult challenges for which organizations are not ready (Peres et al., 2020), and the physical nature of the systems and processes that industrial AI deals with leads to special constraints that other types of AI do not face (Bécue et al., 2021). For instance, the dynamics of anomalous and expected behaviors can make it impossible for originally fixed settings to accurately determine the boundaries between them, making it difficult to detect new threats and eventually leading to a series of industrial production security problems (Luo et al., 2021). Still, AI may find a perfect niche for flourishing and implementation in industrial environments, as its applications can address different questions and possibilities in each of the main pillars of the Industry 4.0 construct (Dopico et al., 2016).
| 2023-03-11T00:00:00 |
2023/03/11
|
https://pmc.ncbi.nlm.nih.gov/articles/PMC10005923/
|
[
{
"date": "2023/03/01",
"position": 21,
"query": "AI unemployment rate"
},
{
"date": "2023/03/01",
"position": 23,
"query": "AI job creation vs elimination"
},
{
"date": "2023/03/01",
"position": 33,
"query": "AI labor market trends"
},
{
"date": "2023/03/01",
"position": 4,
"query": "AI economic disruption"
},
{
"date": "2023/03/11",
"position": 93,
"query": "AI economic disruption"
}
] |
AI: How Far It Has Come and Where It Is Heading
|
AI: How Far It Has Come and Where It Is Heading : Argano
|
https://argano.com
|
[] |
In healthcare, AI is being used to improve diagnostics accuracy at a faster rate than humans alone could achieve. For example, AI algorithms can detect ...
|
The development of Artificial Intelligence (AI) has revolutionized the way humans interact with technology. From its early days as a test of machine intelligence to its current applications in natural language processing, AI has rapidly advanced in sophistication and capability. In an ever-evolving world, AI is now being used in everyday life, from assisting with medical diagnoses to helping optimize traffic flow. But with these advancements come potential risks and opportunities that must be considered for the responsible use of AI. This article will explore the history and present-day applications of AI, discuss its advantages and disadvantages, and examine what might lie ahead for this powerful technology.
A brief overview of AI
The concept of Artificial Intelligence (AI) has been around for decades, but only recently have advancements in technology transformed it into a powerful tool for everyday life. In its earliest forms, AI was used mainly for research and development purposes. But now, with the aid of powerful computing systems, sophisticated algorithms, and advanced language models, AI can perform complex tasks like robotics, healthcare diagnostics, financial analysis, and more.
Today’s AI is rapidly evolving from what it was even just a few years ago. It can understand natural language with greater accuracy than ever before; it can interpret large data sets quickly and accurately; and it can identify patterns that are invisible to the human eye. This has made AI invaluable in industries such as healthcare, finance, retail, manufacturing — you name it!
However, despite its capabilities in so many areas of our lives today, there are still many challenges that need to be addressed before AI can reach its full potential. There are concerns that automation could lead to unemployment or create ethical issues due to an overreliance on algorithms. Others suggest that further integration of AI could lead to bias in decision-making processes or result in increased cyber security risks.
As with any new technology, both the opportunities and risks associated with AI must be taken into consideration when deciding how best to use this powerful tool going forward. We must continue to monitor these developments closely while balancing innovation with safety so that we can reap the benefits while avoiding any potential pitfalls.
AI milestones
The history of AI research is full of major milestones that have shaped the development of the field. In 1950, Alan Turing proposed a test designed to measure a machine’s capability to exhibit intelligent behavior indistinguishable from a human’s. This became known as the Turing Test and has long been used as an indicator for true AI success.
In 1997, IBM’s Deep Blue computer famously defeated world chess champion Garry Kasparov, marking the first time a computer beat a human at chess and signifying a major breakthrough in AI research. Following this success, AI continued to make advances in various fields, such as natural language processing (NLP), image recognition, robotics, and financial analysis.
In 2016, Google DeepMind’s AlphaGo defeated world champion Go player Lee Sedol, and its 2017 successor, AlphaGo Zero, learned to beat professional Go players without any prior human game data. These breakthroughs demonstrated how far AI had come since its inception and showed its potential for further development. More recently, OpenAI unveiled GPT-3, an advanced natural language processing model which can generate human-like text when given short or incomplete prompts. This marked another important milestone in the evolution of AI technology and opened up new possibilities for its use in everyday life.
These developments demonstrate just how far artificial intelligence has come over recent years — however, there are still many challenges that must be addressed before it can reach its full potential. These include potential ethical issues arising from decision-making bias and increased cyber security risks due to the ever-expanding amount of data being processed by these systems. Nevertheless, it is clear that AI promises many exciting possibilities for our future and may revolutionize our lives in ways we have not yet imagined.
Applying AI in everyday life
AI is quickly becoming part of our everyday lives, from self-driving cars to facial recognition systems. AI can now be found in a range of industries, helping customers find the products they need and healthcare providers diagnose diseases. With its potential for further integration into our lives, it is important to consider the risks associated with AI.
AI has already made its way into the automotive industry, with self-driving cars being tested on roads around the world. These cars use AI to make decisions about their environment and navigate accordingly. Self-driving cars are expected to reduce collisions and fatalities due to human error. However, there are still safety concerns that must be addressed before these vehicles can become widespread.
AI is also being used in retail stores and online services to help customers find the best products for them based on their preferences and interests. This process is known as “personalization” or “recommendation systems” and uses AI algorithms to analyze customer data to create tailored recommendations for each individual consumer. By providing customers with personalized recommendations, companies can increase sales while providing better customer service experiences.
In healthcare, AI is being used to improve diagnostics accuracy at a faster rate than humans alone could achieve. For example, AI algorithms can detect signs of cancer in X-rays more accurately than a human radiologist could do alone by utilizing deep learning techniques such as convolutional neural networks (CNNs). Additionally, AI chatbots are being developed that could act as digital health assistants by helping patients better manage their conditions or schedule appointments with doctors more efficiently.
Although there are numerous opportunities presented by integrating AI into our lives, there are also potential risks associated with its development and use. These include ethical issues around collecting large amounts of data about individuals; bias in decision-making due to inaccurate input data or poorly designed algorithms; reliability issues; and increased cyber security risks due to malicious attacks on automated systems without proper safeguards in place.
As AI continues its rapid growth, it will undoubtedly have an even larger impact on our everyday lives in the years to come, both positive and negative, so it is important that we take steps now to ensure that its development takes account of these potential risks while maximizing its potential benefits for society at large.
AI upsides and potential downsides
The rapid evolution of Artificial Intelligence (AI) has become a part of our everyday lives, and its advancement brings with it both advantages and drawbacks. Automating mundane and repetitive tasks can give human employees the opportunity to focus on more creative endeavors that require higher-level thinking. Additionally, AI has the capacity to be more precise than humans in certain decision-making processes, such as diagnosing diseases like cancer accurately.
Despite these potential upsides, there are also risks associated with furthering AI technology. Privacy is a major concern: AI systems collect considerable data from their users that could be used for marketing or other purposes without the user’s knowledge or agreement. Additionally, machine learning algorithms may develop biases in decision making due to their use of historical datasets that contain inherent biases. Furthermore, increased reliance on AI systems may expose potential security vulnerabilities if they are targeted by malicious actors.
In conclusion, it is evident that the development of AI carries both benefits and problems for society at large. Businesses and governments should take the time to carefully consider these issues before integrating AI into our daily lives so they can leverage its positive aspects while minimizing any negative consequences associated with its implementation.
What lies ahead for AI?
As Artificial Intelligence continues to evolve, its potential to revolutionize our lives is becoming increasingly clear. With AI systems already being used to automate mundane tasks and increase decision-making accuracy, it has the power to offer vast improvements in security, efficiency, and personalization across multiple industries and sectors. Examples include automating manufacturing processes for faster, more accurate production, or tailored product recommendations based on past purchases or browsing habits. However, while these applications of AI are incredibly promising, it must also be remembered that there are ethical implications that must be considered when integrating such powerful technology into our lives on a larger scale.
A technology partner like Argano can help you evaluate where AI applications may make sense for your business — whether you are looking to implement or update your technology environment, optimize your processes, or enhance data insights, we can help take your business to the next step.
Contact us today to learn more!
| 2023-03-01T00:00:00 |
https://argano.com/insights/articles/ai-how-far-it-has-come-and-where-it-is-heading.html
|
[
{
"date": "2023/03/01",
"position": 42,
"query": "AI unemployment rate"
}
] |
|
A policy primer and roadmap on AI worker surveillance ...
|
A policy primer and roadmap on AI worker surveillance and productivity scoring tools
|
https://link.springer.com
|
[
"Hickok",
"Merve Lighthousecareerconsulting.Com",
"Aiethicist.Org",
"Ann Arbor",
"Maslej",
"Nmaslej Stanford.Edu",
"California",
"Geddes Rd",
"Mi",
"Merve Hickok"
] |
by M Hickok · 2023 · Cited by 30 — Algorithmic worker surveillance and productivity scoring tools powered by artificial intelligence (AI) are becoming prevalent and ubiquitous technologies in ...
|
Developers of AI and algorithmic systems promise that these systems benefit organizations by increasing the efficiency, effectiveness, and scalability of processes, “streamlining and redefining” the workplace, reducing costs and standardizing the application of rules, and hence improving profitability [15]. However, there is also a group of AI applications on the market that promise employers various capabilities to track, monitor and assess their workers. The power to have an all-seeing eye over the workforce is attractive to many employers. These technologies were once most prevalent in factory settings, and the gaze of algorithmic surveillance was, and still is, disproportionately on low-income workers. Those workers are now spread across different work contexts such as logistics, hospitality, food and service delivery jobs, or gig work on online platform companies [26], and these jobs are disproportionately held by workers of color. However, the reduced costs and improved capabilities for data collection, processing and retention also allow surveillance to extend to pink- and white-collar workers. Worker surveillance is becoming a common phenomenon across all workplaces. When surveillance moves from the factory or warehouse floor to the devices workers use in other settings (such as the home office or vehicles) or even carry on their bodies, surveillance becomes inseparable from daily life; it ‘bleeds into’ workers’ private lives [11, 14]. As these workers become hyper-visible through surveillance systems, employers become more invisible behind the algorithmic decision-making systems.
3.1 Worker surveillance
It is easier to notice the physical cameras around us. However, with surveillance technology available in many shapes and forms, it is not always easy for workers to know that they are being monitored and that their data is being collected. In the US, employers can use such tools without informing their workers [27]. Workplace data collection powering surveillance can be achieved by a combination of hardware and software, as listed below.
Hardware and software: computing devices (laptops, tablets); smartphones; wearables (fitness trackers, smart watches, body cams); IoT devices (RFID sensors, thermal imaging, counters, WiFi routers, GPS); ID badges (fitted with microphone and accelerometer); cameras (CCTV, laptop camera activation); screen capture; keystroke logging; call recording; voice assistant recording; biometric recognition.

Analysis: productivity tracking; risk assessment; culture fit; count of outputs; workplace analytics; insider threat; biometric ID verification for shift/workplace access.

Data: outputs (tasks/items/transactions/sales completed); communication (email, text, chat, voice, collaboration tools); social (posts, comments, likes, third-party social media background checks); engagement (calendar activity, time spent online); search history (browser search terms, websites visited); location (access management, geolocation tracking, geofencing); fitness (activity, prescriptions); login (user ID and password capture).
3.2 Productivity scoring
“Not everything that can be counted counts. And not everything that counts can be counted.”—paraphrased from William Bruce Cameron [28].
The attraction of new technologies, and vendor claims that actionable ‘productivity’, ‘risk’ or ‘fit’ scores can be extracted for workers, has given rise to a variety of black-box algorithmic products in the market [29]. These products collect a plethora of data points and compare them against subjective rules to provide a score for a worker or to infer certain behavioral characteristics. These scores can then be used by human managers to make determinations about a worker’s efficiency, productivity, and risk to the company’s assets and reputation. The scores also drive decisions about wages, benefits, promotions, disciplinary action or even termination. At the extreme, these decisions can be automated and do not even require a human manager to review and validate them.
When thinking about how surveillance and scoring systems work, and how they are connected, it is helpful to break the process down into smaller components. First, a method to track and record worker activity is necessary. Hardware such as company-provided devices (phones, tablets, wearable fitness trackers), cameras, wireless routers and sensors can be used to collect raw data on worker communications, online activity, movement, work outputs and so on. Then, once the data is collected, an algorithmic model is needed to analyze it and make inferences or determinations about worker behavior and performance. Developers make various design decisions on how to collect data and how to build these AI models. At times there may be legitimate reasons to install certain data collection technology to ensure workers’ safety and security, or the employer might be required by law to record worker communications. Outside of these very limited reasons, however, most of these technologies are built upon problematic design decisions. The choice of the term ‘surveillance’ over ‘monitoring’ in this paper is intentional. Surveillance acknowledges the power employers hold over workers and the practice of ubiquitously collecting data on worker communications, engagement and interactions mainly for the benefit of the employer. The data can then be used to control and manipulate work engagement and contractual negotiations (if a contract even exists).
Vendors of these scoring systems claim that the surveillance data collected can be used to infer the productivity, risk or fit of workers in relation to their roles. The claims, and the design decisions embedded in these products, rest on fundamentally erroneous assumptions, such as the assumption that technology can correctly capture a human’s complex nature or infer emotions and sentiments, or that human behavior can always be predicted. The use of surveillance and scoring technologies also infringes upon individuals’ rights and freedoms; these technologies and the assumptions embedded in them can be in direct contradiction with fundamental human rights [4]. Despite the impossibility of delivering on their marketing promises, these systems still find buyers among business decision-makers.
Human dignity: Surveillance first and foremost degrades human dignity. Even if workers know about surveillance technologies, they may not have the possibility or privilege of leaving a job due to concerns over aggressive data collection or algorithmic decisions. If worker consent is requested at all, workers are asked to choose between their ability to earn a wage versus their data being collected. Their choice naturally favors employment. In such an imbalanced power situation between employer and worker, one cannot count the consent as a free or informed one. The workers lose control of privacy over their own bodies, movements, and even social interactions [30]. Who gets to draw the boundary about what is crucial information for an employer? In the absence of protections by law or organized labor, the workers are left to themselves to defend against surveillance. The boundary line is drawn ‘upon’ their bodies.
Human dignity is undermined again in the scoring systems as human complexity, engagements, aspirations, and creativity are reduced to points of data and spurious correlations. There is no longer a human story behind the interaction, nor is there the ability to ‘bring your whole self to work.’ The essence of the worker and complexity of a being and human interactions are boiled down to the data deemed important by vendors and employers, and data which can be collected.
Legal scholar Ifeoma Ajunwa also highlights that wearable data collection technologies in particular may create new legal challenges, such as the possibility that an employer engages in unlawful surveillance (defined under the National Labor Relations Act) “when it surveils employees engaged in concerted activities by observing them in a way that is ‘out of the ordinary’ and therefore coercive” [31]. Such practices also undermine fundamental principles such as the Fair Information Practices, which include collection limitation, purpose specification, use limitation, accountability, security, notice, choice, and data minimization [32]. For example, data initially collected by third parties (such as fitness trackers provided by wellness or insurance companies) via the employer can eventually be used in ways that restrict a worker’s access to resources and opportunities elsewhere [14].
Right to privacy: One of the most cited issues in worker surveillance is the infringement of privacy, and the right to privacy is considered a fundamental human right. In the United Kingdom, Barclays bank faces a $1.1 billion fine over alleged monitoring of employees [33]. In Germany, the data protection regulator fined the electronics retailer notebooksbilliger.de $12.6 million for using video cameras to surveil workers [34]. In the US, by contrast, employers can collect information when workers use organization-provided devices or networks. In the absence of federal privacy regulations, a privacy regulator, or any laws limiting worker surveillance practices, the status quo allows employers to do as they see fit for their own interests. However, legal does not always mean ethical.
A recent OECD working paper on AI in the workplace highlights that the use of AI systems can “extend and systematize ethical failings and fundamentally change the relationship between workers and their managers” [35]. Some surveillance practices cross the line between work and private life, allowing employers to capture very private information about workers. For example, employers can engage in (1) social media surveillance, (2) video surveillance in the office, (3) mandating that workers use smart assistants which record conversations or leave their laptop cameras on, (4) taking screenshots of monitors at random times during the day, or (5) forcing workers to download mobile applications on their personal phones which continue to collect information outside of working hours. In 2022, a Dutch court ruled that an employer requiring employees to keep webcams on for several hours a day and share their screens violated the right to respect for private and family life. In Germany, the data protection regulator fined the retail company H&M $41 million for the illegal surveillance of employees and for keeping "excessive" records on the families, religions and illnesses of its workforce [36]. The European Court of Human Rights issued a similar ruling in 2017 [37]. Such intrusion can also lead to unintentional disclosure of information protected by Title VII of the Civil Rights Act of 1964 [38] (such as sex, race, color, national origin or religion, sexual orientation, etc.) or the Americans with Disabilities Act (“ADA”). Although non-discrimination regulation prevents employers from making employment decisions based on this protected information, knowledge of such information can nevertheless lead to unconscious bias [26, 39]. The target of surveillance shifts from the work to the worker. Notice of protected information which would not otherwise have been known to the employer can create legal risks for the employer and opens the door to allegations of discrimination [40].
Right to expression: The ability to surveil a worker’s private and social interactions undermines freedom of expression. By monitoring emails, chats, and phone conversations, employers can gain access to workers’ thoughts—without discriminating between personal and professional communications. Knowledge of surveillance can force workers to self-filter or self-regulate their expressions and ideas. Paraphrasing Foucault’s ‘technologies of the self’, Manokha highlights the power of surveillance to induce self-restraint and self-discipline in individuals [41]. In this case, workers, aware of being under surveillance, may end up restraining themselves without any coercion or use of force by employers [42]. Employers' interest in surveilling communication also spills over into personal lives. More and more companies are interested in workers’ or job applicants’ social media accounts [43], and some have even patented audio technology to eavesdrop on conversations among workers and customers [44]. Some companies demand login access to social media accounts to enable surveillance of these accounts. In states where this boundary is protected by law, employers are able to continue the practice via third-party vendors. These vendors parse the social media presence and interactions of both candidates and workers and provide ad hoc or ongoing risk scoring to employers. Risk scoring models can create spurious correlations; however, many employers still use the outcomes as third-party assessments in their employment decisions. Knowing employers can see and act upon their social media posts can prevent workers from expressing their true identities (e.g. sexual orientation, religion, ability) outside of the workplace. Workers can also refrain from posting about social, economic, political, or other societal issues. This can eventually result in significant societal impacts.
Right to data protection: The data collection enabled by AI surveillance technologies is ubiquitous and pervasive. Without federal privacy legislation or robust worker protections, employers not only collect data but can also share it further with third parties for different purposes. Workers may not be able to access the data collected about them or have any say over what the collecting entity might do with it. Most of the time, workers may not even understand the full complexity of the data, the inferences made about them, or the extent of possible impact or harm. Studies by both the UC Berkeley Center for Labor Research and Education [1] and CoWorker.org [45] state that such data collection lacks clear and consistent safeguards. A breach of this data can impact the worker’s access to benefits, resources and opportunities outside of the workplace.
Right to collective action and power: The nature of surveillance creates one party which makes the decision to surveil, collects the data and benefits from its conclusions, and another party which is impacted by that decision. When workers try to reduce the power imbalance through individual resistance or collective action, the data can also be used to suppress protected collective activity such as unionization or the filing of grievances. In other words, workers without protection “lack bargaining power to sufficiently fight invasive forms of surveillance, and surveillance is even being used to deter and prevent unionization [46]”.
History offers many examples of corporations hiring private investigators to surveil the activities of workers to prevent collective action and break strikes [47]. A 1987 report by the United States Office of Technology Assessment, titled “The Electronic Supervisor: New Technology, New Tensions”, provides a historical landscape analysis of the tensions and considerations created by electronic employer surveillance systems. The report lists the main concerns as privacy, fairness, and quality of work life. The factors included in fairness are listed as “reasonable standards, understanding by workers of the extent and use of the monitoring system, ability of workers to contest or correct records, and participation by workers in the design of the system.” The report makes clear there are no legal requirements in U.S. law that surveillance be “fair,” jobs be well-designed or employees be consulted about work standards, except insofar as these points are addressed in union contracts [48]. The report acknowledges both the low levels of unionization in the United States and how the surveillance issue has created more motivation for collective action in some previously unorganized firms.
Unfortunately, 35 years after this report, unionization rates are lower than they were in 1987, technology allows for more invasive data collection, and unions’ internal capabilities to counter these surveillance practices leave much to be desired. With the ability to collect information ubiquitously, employers can use emerging technologies to exert power over workers. With such an information disadvantage, algorithms “act as a force-multiplier for the power held by firms, with no balancing agent on the side of workers [49].” In 2021, Spain passed a law requiring online delivery platforms to inform labor unions of how algorithms affect workers’ working conditions [50].
Employers are obliged to “file ‘Surveillance Reports’ to report certain expenditures and arrangements they make in connection with labor disputes [51, 52].” These expenditures clearly include surveillance technologies and activities. However, since workers and unions are rarely aware of covert surveillance practices, it is hard to hold employers accountable for their transparency obligations or to challenge unfair practices. Scholars Pasquale and Citron advise that “secrecy is a discriminator’s best friend: unknown unfairness can never be challenged, let alone corrected [53].” Establishing workers’ data rights under collective agreements not only protects the workers but also prevents the power of unions from diminishing [54].
Right to work and right to just and favorable remuneration: As per the Universal Declaration of Human Rights, every person has the right to work, to just and favorable conditions of work, to equal pay for equal work, and everyone who works has the right to just and favorable remuneration ensuring for themselves and their family an existence worthy of human dignity [4].
Emerging AI technologies increasingly allow previously disparate data to be connected. An investigative journalism article by ProPublica details how software sold to landlords can provide them with information on occupancy levels and rent amounts in their area, along with the ability to communicate with each other over the platform [55]. Whereas previously landlords had to invest significant resources to collect this kind of data individually, such platforms now give users continuous access to up-to-date information. Access to such information can be used to reduce competition and manage vacancies in ways that drive rent prices above their market values. A parallel can be drawn here for wages and worker rights. Tools like Argyle provide aggregated workforce financial data to employers through applicant tracking systems, and to insurance providers, lenders, and credit card issuers through a single API [56]. Argyle’s vision is not only to provide financial data but a “holistic view of a worker’s identity including typical hours, work trajectory, reputation and more [57].” In other words, it offers a consolidated way for employers to see a candidate’s employment history and other compensation details before they make an offer. This asymmetrical information power means an employer can offer a less than fair wage or cooperate with other employers to suppress wages. Argyle claims to have profiles for more than 175 million workers, covering 80% of the US workforce [56]. While the vendor positions itself as a “third-party verification service which ‘allows’ workers to securely share their income, job title, and proof of employment information with lenders, background check companies, human resources, or any other party they choose [56]”, it mentions nothing of the massive data collection, its uses, or the future risks for workers. Some workers may become permanently locked out of employment opportunities due to the recommendations of the systems used by many employers in the industry.
When algorithmic systems become connected to each other for inputs, or the use of the aggregated systems becomes more prevalent in pre-employment decisions, a separate risk emerges. A biased, erroneous or manipulated outcome from one system becomes a direct input to another decision-making system. With such interconnected systems, workers may be locked out of affordable housing, insurance, healthcare, and similar systems [58].
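To make the propagation risk concrete, here is a minimal, purely hypothetical sketch in Python: every function name, signal and threshold below is an illustrative assumption, not any vendor's actual logic. The point is only that a single upstream scoring error can flip every downstream decision that consumes it.

```python
# Hypothetical sketch of how a single erroneous score can cascade across
# interconnected decision systems. All names, thresholds and weights are
# illustrative assumptions, not any vendor's actual logic.

def productivity_score(keystrokes_per_hour: float, idle_minutes: float) -> float:
    """Toy employer-side score: rewards measurable activity only."""
    return max(0.0, min(100.0, 0.02 * keystrokes_per_hour - 0.5 * idle_minutes))

def tenant_screening(income: float, employment_score: float) -> bool:
    """Toy downstream system that consumes the employment score as an input."""
    return income > 3000 and employment_score >= 50

def loan_approval(employment_score: float, screening_passed: bool) -> bool:
    """Another downstream consumer; the same upstream error propagates again."""
    return screening_passed and employment_score >= 60

# A worker who spends the afternoon in offline meetings registers as "idle",
# so the upstream score is low even though real work was done.
score = productivity_score(keystrokes_per_hour=900, idle_minutes=120)    # -> 0.0
housing = tenant_screening(income=3500, employment_score=score)          # -> False
credit = loan_approval(employment_score=score, screening_passed=housing) # -> False
print(score, housing, credit)
```

In this toy run, two hours of offline meetings register as idle time, the productivity score collapses to zero, and both downstream checks fail even though the underlying facts about the worker did not change.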
Validity and black-box decisions: Vendors developing the scoring algorithms tend to make a lot of promises about the capacity of their products without disclosing how the scores are calculated or what design decisions are made within the system. If a client demands to know the science behind the system, the house of cards may fall apart. Instead, it is a lot easier for a vendor to hide behind intellectual property (IP) protections or suggest one should trust the “neutral” technology. However, lack of vetting can expose the employer clients to liability [40]. A client should and can demand transparency. Unfortunately, since both vendors and employers benefit from these technologies in different ways, questions of scientific validity, or of whether these systems should exist in the first place, are not a priority.
Even when an employer is aware that the technology is not delivering on its promise, it might still continue with the practice because the system at least provides a way to collect information about worker activity. The employer may then choose to fix the issue with another layer of surveillance. For example, when an AI system tracking the movements of workers in an Amazon warehouse fails, video footage is sent to other workers in India and Costa Rica. These workers provide input to improve the accuracy of Amazon’s machine learning tools for surveillance. The workers have “no idea where this particular data [is] going…and what exactly is happening in the backend.” These remote workers were also not aware that they themselves were being monitored by screen and mouse activity [59].
Right to due process: “Data-centric technologies hide, obscure, and validate employer behaviors behind an algorithm [60, 61].” Scoring can lead to automatic penalties in wages, shift distributions, and sometimes even to job loss [15]. Without understanding how the surveillance and productivity scoring algorithms are used to make determinations about their wages, benefits or work conditions, and without unions putting safeguards into contract clauses, “workers have few pathways to contest harmful employer decisions like discrimination and wage theft [62].” In many jurisdictions, workers also face the additional challenge of algorithms protected by intellectual property legislation. This means that even if they have the means to analyze algorithmic models, workers or unions may still not have access to them. Workers surveilled and scored by these algorithms need enhanced rights—such as a right to procedural data due process [63]. In the US, ‘at will’ employment arrangements, used in most low-income jobs, allow both employers and workers to terminate the relationship at any time without having to provide a reason. However, many other employment decisions could still benefit from due process requirements.
Normative judgements: When scoring models are created, developers make certain decisions. These decisions include what activity to collect data on, or in other words, what behavior or activity should count towards productivity or risk. Developers make these decisions based on the technical possibility of collecting a particular set of data and on what data should be accepted as a proxy for productive work. They make normative determinations about what ‘normal’ or ‘typical’ productivity should look like, then compare the data collected about workers against those norms. They decide on the labels and categorize workers into these labels. In reducing humans to standard categories, the developers also dehumanize and depersonalize the workers [64]. In making these decisions, developers also embed their own values, experiences, culture and biases into the algorithms they develop [65]. A recent New York Times article on worker productivity tracking articulates this issue: “the working world’s new clocks are just wrong: inept at capturing offline activity, unreliable at assessing hard-to-quantify tasks and prone to undermining the work itself [15].” The “choices in which factors to prioritize, or their failure to specify all relevant factors, can result in unanticipated consequences [102].”
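As a rough illustration of how many normative choices hide inside even a trivial scoring model, consider the sketch below; the signals, weights, baseline and labels are all hypothetical assumptions chosen for demonstration, not a description of any real product.

```python
# Minimal sketch of a productivity-scoring pipeline, assuming hypothetical
# signals and thresholds. Every constant below encodes a normative choice
# made by the developer, not an objective fact about work.

from dataclasses import dataclass

@dataclass
class ActivityLog:
    emails_sent: int        # chosen as a proxy for "output"
    active_minutes: int     # keyboard/mouse time; offline work is invisible
    meetings_attended: int  # counted, but weighted low

# Choice 1: which signals count, and how much each one is worth.
WEIGHTS = {"emails_sent": 0.5, "active_minutes": 0.1, "meetings_attended": 1.0}

# Choice 2: what a "normal" day looks like, i.e. the baseline workers are
# measured against.
BASELINE = 40.0

# Choice 3: the labels workers are reduced to.
def label(score: float) -> str:
    if score >= 1.25 * BASELINE:
        return "top performer"
    if score >= 0.75 * BASELINE:
        return "meets expectations"
    return "flagged for review"

def score(log: ActivityLog) -> float:
    return (WEIGHTS["emails_sent"] * log.emails_sent
            + WEIGHTS["active_minutes"] * log.active_minutes
            + WEIGHTS["meetings_attended"] * log.meetings_attended)

day = ActivityLog(emails_sent=12, active_minutes=300, meetings_attended=2)
print(score(day), label(score(day)))  # 38.0 meets expectations
```

Changing any one of the three choices, for example counting mentoring time or lowering the baseline, changes who ends up "flagged for review", and that is exactly the kind of design decision that usually remains invisible to the workers being scored.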
By measuring everyone against a certain norm, and requiring similar behavior, these algorithmic systems create homogeneity. Charlie Munger, vice chairman of Berkshire Hathaway and one of the most successful business investors, says, “Mimicking the herd invites regression to the mean (merely average performance) [66].” Companies globally spend significant amounts of time and resources to attract candidates with diverse backgrounds, experiences, identities and perspectives. When surveillance and scoring systems are used to determine a worker’s conformity to certain norms and behaviors, and to discourage differences, employers end up sabotaging their own efforts in the long run.
Context and cultural specificity: Just as the developers of algorithms embed their own normative judgements into scoring systems, they also claim universality for their products. However, anyone who has traveled to different parts of a country, or internationally, can attest that cultural differences carry over into work relations. Different cultures prioritize different behavior at work and vary in how workers interact with one another.
Even within a homogeneous work environment, scoring systems still cannot capture the complexity of work, nor do they take into account the external factors or circumstances which might be affecting a worker’s ability to deliver an output or complete a task within a certain amount of time. Without appreciating the context of worker interactions and the totality of the effort which goes into creating an output, these systems prioritize quantity and quantification [63] over quality and depth of work. Data is not independent from its context. Some workers subject to productivity algorithms characterize the situation as “infuriating”, “soul crushing” and a “kick in the teeth” because their employers had failed to grasp the totality of the tasks making up their jobs [11]. The expectation from employers is for workers to be robot-like subjects. This approach leaves no room for differences and diversity, and no appreciation for offline work such as thinking, reading printed material, brainstorming with co-workers, or mentoring other workers.
Disability discrimination: When these systems make judgements about what should be considered typical or expected productivity, they can also lead to other harms for people with disabilities. Some assessments of the ADA [39] suggest that “If an employer adopts a faster pace-of-work standard and enforces it rigidly, it could run afoul of the ADA’s prohibition against “standards, criteria, or methods of administration... that have the effect of discrimination on the basis of disability [67].” More than half of disabilities are invisible, and they are highly diverse, making them “virtually impossible to analyze at scale” [68]. In addition, only 21% of employees with disabilities disclose them to their employers’ human resources departments [69]. Access to biometric or health data collected by wearables or via a worker’s social media accounts can give managers or employers additional information from which to infer a worker’s ability or health condition, leading to possible biased decisions or spurious inferences. Even if the information did not play a role in an adverse employment decision, employers could be alleged to have discriminated due to a disability or perceived disability [40].
The technical shortcomings of an AI system, such as inaccuracy of devices, can also cause unintended harm. For example, wearables collecting health and wellness information may not be accurate in the first place [70, 71] but can still be used for work-related determinations. Since the scientific validity of the system and possible technical biases are not questioned, workers can be subjected to discriminatory outcomes. Or imagine a scenario where neither the developer nor the employer is aware of the bias in the system. Assistive devices (such as screen readers) may interfere with the accuracy of the data collected. Or if the scoring systems disadvantage neurodivergent people, those with slower reading speeds or those multitasking, then the outcomes might be discriminatory.
Erosion of trust: The history of worker surveillance provides ample evidence of how employers choose the easier route of surveilling workers rather than investing in establishing trust and a shared vision with their workers [72]. In many cases, employers choose top-down, hierarchical methods to control and shape. The alternative is co-creation and determination of shared values and vision, in which workers are trusted to add value and to hold themselves and their employers accountable to agreed outcomes. The absence of trust from employers leads to an erosion of trust and loyalty from workers. The work-from-home arrangements that emerged as a result of the COVID-19 pandemic created a panic environment for many employers. A Harvard Business Review article highlights the “negative spiral in which manager mistrust leads to micromanagement, which then leads to drops in worker motivation, further impairing productivity”, and this spiral deepened with the COVID-19 pandemic [73]. A recent Microsoft report highlights that 85% of leaders say the shift to hybrid work has made it challenging to have confidence that workers are being productive [74]. Whether it is tracking remote workers, those operating in large physical settings (e.g. warehouses, shops), mobile workers (e.g. drivers, delivery workers), or those who are ‘quiet quitting’, the use of surveillance and productivity tools breaks trust relationships in irreparable ways [75] and can backfire, resulting in less productivity [76, 77].
Impact on health and safety: Increased pace-of-work and productivity expectations which leave no room for rest, thinking or corrective action lead to more workplace accidents [78, 79]. The “electronic sweatshop” requires repetitive, fast-paced work demanding constant alertness and attention to detail [80]. More repetition also leads to more severe physical injuries. The research literature shows increased stress associated with workplace performance scoring technologies [81,82,83]. Loss of autonomy over work, stress, and ubiquitous observation increase the risk of psychological harm and mental health problems for workers [67].
Sometimes employers frame the productivity scoring systems as ‘games’. In other words, under the guise of turning work into competitive metrics, employers pit workers against each other. Employers make the productivity metrics visible to all, potentially causing further stress for workers. Even when such competition is used as part of a wellness program, normative judgements of fitness and health are imposed upon workers. For example, expecting workers to meet certain fitness standards, and then making the metrics of those not meeting the ‘expectations’ (e.g. weight loss trackers) visible to everyone, can be considered a form of body-shaming. The race to meet the demanded metrics, the stress, and the toll on physical health eventually lead to worker burnout [84]. In workplaces where one worker is easily replaceable by another without consideration for the human behind the data, and in the absence of any legal consequence, employers do not have any incentive to improve conditions.
Feedback loops and behavioral change: Algorithmic decision-making systems change the behavior of users and of those who are impacted by the outcomes of these systems. They change and shape the culture and priorities of the implementing organization in many ways. By incentivizing workers to focus on a particular task rather than on innovation and experimenting, “the organization sends a message to its workers simply by the tasks it chooses to monitor [85].” Productivity systems may have the unintended consequence of workers spending more time on a particular activity which is counted and rewarded than on achieving results. The metric becomes an end in itself. Surveillance works to discipline workers to conform to expected behavior which can be measured [64]. When workers’ autonomy and agency are reduced, the result is also a reduction in the capacity to be creative and “the ability to think or sometimes act out of the box [35].”
When workers are under surveillance and worry about their scores impacting their compensation or the future of their work, they will also naturally shift into more self-protecting behavior. Instead of collaborating with their co-workers or sharing their knowledge about more efficient ways of completing tasks, individual workers might become more private, distrusting and competitive [86]. They might also feel the need to game the system. Whether this need emerges as a reaction to oppressive actions by employers, or from a need to increase one’s scores and possibly wages and benefits, gaming the system means finding ways to make it look like one is being productive while in reality refusing to do what is expected. As a response to the lack of trust from management, workers can seek to circumvent intrusive managerial oversight [87].
Hypervigilance about continuous surveillance and datafication also demoralizes workers and takes time away from other tasks that may be meaningful or necessary for long-term wellbeing. Scoring only certain kinds of activities can force workers to make decisions more quickly, without having the time to delve deeper into an issue, case or condition. Some researchers even suggest, for example, that gamified systems in the workplace could complicate and subvert ethical reasoning [88, 89]. For jobs which require more frequent decision-making, such as health, human or social services, such behavior change can result in catastrophic consequences for the people who depend on those decisions.
Shoshana Zuboff highlights that at the workplace “invasive technologies are normalized among captive populations of employees [90].” When an individual accepts work surveillance and scoring technologies as inevitable, the result can be a normalization of similar technologies in other parts of life. The individual internalizes the scored society [91], and invasive and questionable techniques become normalized. Pasquale and Citron warn us that “the menace of scoring in the Big Data economy is the volume, velocity, and variety of information that could be fed into a score, and that these scores could become decisive [53]” in a variety of different contexts. A spectrum of products is already in use to score individuals, ranging from assessments for credit, insurance, employment, education and immigration to criminal justice. The practices workers are forced to accept in workplaces will not stay limited to employment decisions.
| 2023-08-14T00:00:00 |
2023/08/14
|
https://link.springer.com/article/10.1007/s43681-023-00275-8
|
[
{
"date": "2023/03/01",
"position": 46,
"query": "AI unemployment rate"
},
{
"date": "2023/03/01",
"position": 69,
"query": "AI labor market trends"
},
{
"date": "2023/03/01",
"position": 6,
"query": "AI labor union"
},
{
"date": "2023/03/01",
"position": 31,
"query": "artificial intelligence workers"
}
] |
AI — The Good, The Bad, and The Transformative
|
AI — The Good, The Bad, and The Transformative
|
https://gesikowski.medium.com
|
[
"Cezary Gesikowski"
] |
The fear that AI will replace human jobs and create a new era of unemployment is real, and so are the concerns about its potential misuse. ... The hidden cost of ...
|
AI — The Good, The Bad, and The Transformative
A Journey Through Changing Attitudes in Our Love-Hate Relationship with AI · Cezary Gesikowski · 13 min read · Mar 17, 2023
“With artificial intelligence, we’re summoning the demon.—Elon Musk”
Elon Borg from I AM AI series on Algography.Art | Created with Midjourney v5 + Photoshop Beta Neural Filters
Have you noticed how our attitudes toward artificial intelligence have changed over the years? It’s been quite a ride! The recent hype around generative AI such as ChatGPT, DALL·E, Stable Diffusion, Midjourney, etc., is just a rush of expectations pent up for years in obscure AI research labs that are finally able to manifest themselves in technology that anyone can see and touch.
From the days when AI was only a far-fetched concept in sci-fi movies, to now where it’s an integral part of our daily lives, AI has come a long way. But what’s the most interesting part of this journey? The fact that everyone has an opinion about it, and they’re not afraid to share it.
We’ve all heard about the potential benefits of AI, like self-driving cars and virtual assistants, but what about the darker side? The fear that AI will replace human jobs and create a new era of unemployment is real, and so are the concerns about its potential misuse. With the recent announcement of MS Copilot based on GPT-4 technology, it’s easy for some to start reaching for the big red panic button.
Despite these concerns, attitudes towards AI remain largely positive, with many people believing that it has the power to solve some of the world’s most pressing problems. But as we continue to develop new AI applications, we need to be responsible and balance its potential benefits with the potential risks. So, Don’t Panic… and make sure you bring along a towel. The AI galaxy is a crazy place of improbable wonder… if you know how to enjoy the ride.
High Noon on AI Frontier
But looking at all these opinions we’ve had over the years, is there a common thread emerging in our changing attitudes toward AI? Some see AI as a force for good, while others worry about its potential negative impact on society. Let’s explore both sides of the argument here.
On one hand, proponents of AI argue that it has the power to solve some of the world’s most pressing problems, such as climate change and disease. They see AI as a way to improve healthcare, transportation, and education, among other things. Futurist and inventor Ray Kurzweil is one such supporter of AI, who believes that it will lead to a future where machines and humans work together to create a better world.
On the other hand, critics of AI, like Elon Musk, worry about the potential negative consequences of AI, such as job loss and the rise of autonomous weapons. Musk has warned that AI is the greatest threat to humanity and has called for regulations to prevent its misuse. Other critics worry about the potential for AI to be biased or discriminatory, perpetuating existing societal inequalities.
While many are looking at the advancements of AI tech in the US and Europe, some tend to ignore the fact that some of the greatest advances in this technology are happening in places with much more restrictive social and political structures, where AI is being used to keep power and control in the hands of the ruling political elites.
Despite the differing views on AI, there are some common threads in the arguments. Both sides acknowledge the transformative power of AI, but differ in their views of its impact. As we continue to develop new applications for AI, it’s important to balance the potential benefits with the potential risks. We need to consider the ethical and societal implications of AI and ensure that it is developed and used responsibly.
Yet ethics is probably one of the most difficult and crucial aspects of AI research, and one that frequently emerges in AI debates. Stay tuned and follow my articles here on Medium as I plunge deeper into this topic. AI is a powerful technology that has the potential to be both good and evil. While some see it as a way to create a better world, others naturally worry about its potential negative consequences. It’s up to us to ensure that we use this technology in a way that benefits society as a whole.
Opinions on AI: Everyone’s Got One to Share
Overall, the opinions on AI vary, and it’s important to consider multiple perspectives when exploring this complex and rapidly evolving field. Here is a quick scan of prominent opinions over the years about AI:
Stephen Hawking, a famous physicist, warned that the development of full AI could lead to the end of humanity. He believed that AI could surpass human intelligence and render us obsolete.
“The development of full artificial intelligence could spell the end of the human race….It would take off on its own, and re-design itself at an ever increasing rate. Humans, who are limited by slow biological evolution, couldn’t compete, and would be superseded.”— Stephen Hawking told the BBC
Claude Shannon, an American mathematician, envisioned a future where humans would be to robots what dogs are to humans and rooted for the machines. This suggests that some view AI as an inevitable force that will eventually dominate us.
“I visualise a time when we will be to robots what dogs are to humans, and I’m rooting for the machines.”—Claude Shannon
Larry Page, co-founder of Google, saw AI as the ultimate search engine that would understand everything on the web and make our lives easier. Today, MS Copilot is doing just that…
“Artificial intelligence would be the ultimate version of Google. The ultimate search engine that would understand everything on the web. It would understand exactly what you wanted, and it would give you the right thing. We’re nowhere near doing that now. However, we can get incrementally closer to that, and that is basically what we work on.”— Larry Page
Elon Musk, CEO of Tesla and SpaceX who was an early investor in OpenAI, warned of the rapid progress of AI and the potential for something seriously dangerous to happen within five years unless we have direct exposure to groups like Deepmind. He believes we need regulatory oversight at national and international levels to ensure we don’t do something foolish with AI.
“The pace of progress in artificial intelligence (I’m not referring to narrow AI) is incredibly fast. Unless you have direct exposure to groups like Deepmind, you have no idea how fast — it is growing at a pace close to exponential. The risk of something seriously dangerous happening is in the five-year time frame. 10 years at most.”— Elon Musk wrote in a comment on Edge.org
Nick Bilton, a tech columnist, warned of the potential for upheaval caused by AI, with catastrophic consequences if not carefully managed. He gave an example of a medical robot that could conclude that the best way to rid cancer is to exterminate humans genetically prone to the disease.
“The upheavals [of artificial intelligence] can escalate quickly and become scarier and even cataclysmic. Imagine how a medical robot, originally programmed to rid cancer, could conclude that the best way to obliterate cancer is to exterminate humans who are genetically prone to the disease.”— Nick Bilton, tech columnist wrote in the New York Times
James Barrat, an author, revealed the fear some have regarding AI, with highly placed people in AI having retreats to flee to if things go wrong.
“I don’t want to really scare you, but it was alarming how many people I talked to who are highly placed people in AI who have retreats that are sort of ‘bug out’ houses, to which they could flee if it all hits the fan.”— James Barrat, author of Our Final Invention: Artificial Intelligence and the End of the Human Era, told the Washington Post
Elon Musk also warns that we need some regulatory oversight, maybe at the national and international level, to ensure that we don’t do something very foolish with AI. He believes that with AI, we’re summoning the demon, and we need to be cautious.
“I’m increasingly inclined to think that there should be some regulatory oversight, maybe at the national and international level, just to make sure that we don’t do something very foolish. I mean with artificial intelligence we’re summoning the demon.”— Elon Musk warned at MIT’s AeroAstro Centennial Symposium
Gray Scott, a futurist, raised the need for an artificial intelligence bill of rights to address ethical considerations in AI development and deployment.
“The real question is, when will we draft an artificial intelligence bill of rights? What will that consist of? And who will get to decide that?”— Gray Scott
Klaus Schwab, founder and executive chairman of the World Economic Forum, emphasized the need to address moral and ethical issues raised by cutting-edge research in AI and biotechnology that can fundamentally transform society.
“We must address, individually and collectively, moral and ethical issues raised by cutting-edge research in artificial intelligence and biotechnology, which will enable significant life extension, designer babies, and memory extraction.” — Klaus Schwab
Ginni Rometty, CEO of IBM, suggests that AI will augment our intelligence, improving our lives.
“Some people call this artificial intelligence, but the reality is this technology will enhance us. So instead of artificial intelligence, I think we’ll augment our intelligence.”— Ginni Rometty
Gemma Whelan, an actress, is frightened by the possibility of a world run by machines and the potential danger of AI.
“I’m more frightened than interested by artificial intelligence — in fact, perhaps fright and interest are not far away from one another. Things can become real in your mind, you can be tricked, and you believe things you wouldn’t ordinarily. A world run by automatons doesn’t seem completely unrealistic anymore. It’s a bit chilling.”— Gemma Whelan
Gray Scott believes that AI has the potential to disrupt our culture and calls for preparation for the changes it may bring.
“You have to talk about ‘The Terminator’ if you’re talking about artificial intelligence. I actually think that that’s way off. I don’t think that an artificially intelligent system that has superhuman intelligence will be violent. I do think that it will disrupt our culture.”— Gray Scott
Peter Diamandis, founder and chairman of the X Prize Foundation, warns against government regulation of AI, as research may leave the country, highlighting the global nature of AI development and the need for international cooperation in regulation.
“If the government regulates against use of drones or stem cells or artificial intelligence, all that means is that the work and the research leave the borders of that country and go someplace else.”— Peter Diamandis
Jeff Hawkins, a computer scientist and entrepreneur who founded the company Numenta, emphasizes the importance of understanding how AI represents and processes information in order to advance its development.
“The key to artificial intelligence has always been the representation.”— Jeff Hawkins
Colin Angle is the CEO and co-founder of iRobot, a company that designs and builds robots. He expresses his curiosity and excitement about how society will deal with the rise of artificial intelligence, and he believes that it will be a cool experience to witness its advancement.
“It’s going to be interesting to see how society deals with artificial intelligence, but it will definitely be cool.”— Colin Angle
Eliezer Yudkowsky, a research fellow at the Machine Intelligence Research Institute and a notable figure in the field of artificial intelligence, emphasizes the transformative potential of technologies that can give rise to smarter-than-human intelligence, such as AI, brain-computer interfaces, or human intelligence enhancement through neuroscience. He believes that these technologies have the power to change the world more than any other innovation.
“Anything that could give rise to smarter-than-human intelligence — in the form of Artificial Intelligence, brain-computer interfaces, or neuroscience-based human intelligence enhancement — wins hands down beyond contest as doing the most to change the world. Nothing else is even in the same league.”— Eliezer Yudkowsky
Diane Ackerman, a poet, essayist, and naturalist who has written extensively about the relationship between humans and nature, acknowledges that robots and AI are growing at a fast pace and that they are becoming more human-like in their interactions. She emphasizes that robots can elicit empathy and affect our mirror neurons, which are the brain cells that enable us to understand and empathize with others.
“Artificial intelligence is growing up fast, as are robots whose facial expressions can elicit empathy and make your mirror neurons quiver.”— Diane Ackerman
Sybil Sage was quoted in a New York Times article exploring the role of voice-activated assistants, such as Alexa, in our lives. Her quip describes the perfect digital assistant that is always ready to serve without any complaints or excuses, unlike human partners, and highlights the convenience and reliability of AI assistants, which have become ubiquitous in our daily lives.
“Someone on TV has only to say, ‘Alexa,’ and she lights up. She’s always ready for action, the perfect woman, never says, ‘Not tonight, dear.’” — Sybil Sage, as quoted in a New York Times article
Alan Kay is a computer scientist known for his contributions to the development of object-oriented programming and graphical user interfaces. He suggests that artificial intelligence should not make us feel inferior because even something as simple as a flower can make us feel inferior. In Kay’s opinion we should not compare ourselves to AI, as we are all unique and have our own strengths.
“Some people worry that artificial intelligence will make us feel inferior, but then, anybody in his right mind should have an inferiority complex every time he looks at a flower.” — Alan Kay
Ray Kurzweil, a futurist, inventor, and author who is known for his predictions about the future of technology, believes that artificial intelligence will eventually surpass human intelligence by 2045, which he refers to as the “Singularity.” He believes that AI will continue to develop at an exponential rate and that it will have a significant impact on the future of humanity.
“Artificial intelligence will reach human levels by around 2029. Follow that out further to, say, 2045, we will have multiplied the intelligence, the human biological machine intelligence of our civilization a billion-fold.” — Ray Kurzweil
Sebastian Thrun, a computer scientist and entrepreneur, suggests that artificial intelligence is not just a technology but also a discipline that seeks to understand human cognition and intelligence. This implies that AI can be used to enhance our understanding of ourselves as human beings, rather than just creating machines that can do tasks more efficiently.
“Nobody phrases it this way, but I think that artificial intelligence is almost a humanities discipline. It’s really an attempt to understand human intelligence and human cognition.”— Sebastian Thrun
A statement by Alan Perlis, an American computer scientist and pioneer in programming languages, reflects the powerful impact that AI can have on human perception and understanding. The rapid development and progress of AI can create a sense of awe and wonder that is similar to religious experiences, prompting some to believe in a higher power.
“A year spent in artificial intelligence is enough to make one believe in God.”— Alan Perlis
Gray Scott, a futurist and techno-philosopher, emphasizes the speed of technological progress and the potential for AI to surpass human intelligence by 2035. He suggests that we must prepare ourselves for a world where AI plays a significant role in decision-making and human activities.
“There is no reason and no way that a human mind can keep up with an artificial intelligence machine by 2035.”— Gray Scott
Spike Jonze, a film director and screenwriter, poses a thought-provoking question about the nature of AI and its relationship to human intelligence. He suggests that AI is different from human intelligence, but the comparison implies that AI is inferior to human intelligence.
“Is artificial intelligence less than our intelligence?”— Spike Jonze
Eliezer Yudkowsky, an American AI researcher and writer, cautions that the greatest danger of AI is that people may conclude too early that they understand it. This statement highlights the complexity and unpredictability of AI and its potential implications for society.
“By far, the greatest danger of Artificial Intelligence is that people conclude too early that they understand it.”— Eliezer Yudkowsky
Jean Baudrillard, a French philosopher and cultural critic, critiques artificial intelligence by claiming that it lacks artifice and therefore intelligence. This statement suggests that AI is limited by its inability to truly understand and replicate human consciousness and creativity.
“The sad thing about artificial intelligence is that it lacks artifice and therefore intelligence.”— Jean Baudrillard
Tom Chatfield, a British author and commentator on technology and culture, suggests that the real danger in the era of big data is not artificial intelligence, but rather “artificial idiocy.” This statement warns against blindly trusting and relying on algorithms without understanding their limitations and potential biases.
“Forget artificial intelligence — in the brave new world of big data, it’s artificial idiocy we should be looking out for.”— Tom Chatfield
Steve Polyak, an American computer scientist and entrepreneur, argues that before we work on creating artificial intelligence, we should focus on addressing natural stupidity. This statement suggests that human intelligence should not be underestimated, and that we should prioritize addressing fundamental human problems before pursuing advanced technological solutions
“Before we work on artificial intelligence why don’t we do something about natural stupidity?”— Steve Polyak
Your Opinions Matter, Chime In
So, how would you weigh in? What’s your opinion about artificial intelligence? Share in the comments below!
Authorship Disclaimer
| 2023-03-18T00:00:00 |
2023/03/18
|
https://gesikowski.medium.com/ai-the-good-the-bad-and-the-transformative-7c9661267f63
|
[
{
"date": "2023/03/01",
"position": 64,
"query": "AI unemployment rate"
}
] |
Post-COVID, Unemployment Systems Take Stock and Go ...
|
Post-COVID, Unemployment Systems Take Stock and Go Modern
|
https://www.govtech.com
|
[
"Thad Rueter Writes About The Business Of Government Technology. He Covered Local",
"State Governments For Newspapers In The Chicago Area",
"Florida",
"As Well As E-Commerce",
"Digital Payments",
"Related Topics For Various Publications. He Lives In Wisconsin."
] |
“Despite our unemployment rate returning to historically low levels, Kansas ...
|
State unemployment offices came under tremendous pressure during the pandemic — not only because of the mass of people trying to access benefits and significant fraud, but also because of often sluggish and even outdated technology. Now, with that pandemic pressure finally easing, officials across the country are tackling scores of updates and similar projects. A look at the challenges and issues in three states foreshadows the future of unemployment services, which are among the main sources of connection between citizens and their governments.

With unemployment claims back down to pre-pandemic levels in New Jersey, the state’s Department of Labor and Workforce Development (NJDOL) in January looked back on recent service improvements. The department launched a UI system with a mobile-responsive, plain-language application, described as the “first of many planned improvements.” That work has reduced the average time to complete an initial UI application by 47 minutes, the department said in early January. Emails to claimants have been “rewritten” in the interests of clarity, with feedback indicating “the new emails are easier to follow and less confusing than similar communication received in the past.” Also on tap are a redesign of the self-service claim status page, a mainframe upgrade and what the department called a new claimant intake page.

New Jersey also provides an example of how much other work is involved in making sure unemployment departments can function more efficiently for various segments of the population. “NJDOL will continue to call for federal action to reform the underlying unemployment laws and regulations that bog down so many New Jersey workers when assistance is most needed,” the department said. “Additionally, NJDOL will continue its work developing groundbreaking digital workforce development tools to better serve New Jersey employers, workers and job seekers.”

Of course, the New Jersey unemployment office, like so many others, continues to attract complaints about outdated technology. Last year, for instance, state Sen. Andrew Zwicker criticized the use of the COBOL programming language for UI benefits and also highlighted how the unemployment department is a signal post of sorts for broader technological upgrades in state government. “Nor is the computer infrastructure problem exclusive to unemployment claims or the Department of Labor,” he wrote. “Indeed, technology upgrades are a dire need across our government, and must be undertaken in a broad and holistic fashion.”

In Kansas, meanwhile, state officials are in the midst of a $48 million modernization of the unemployment insurance system, work that kicked off in June 2022 and for which Kansas plans to use $4.5 million in funding from the U.S. Department of Labor, according to a statement from Gov. Laura Kelly. “Despite our unemployment rate returning to historically low levels, Kansas is still using antiquated equipment to work through pandemic-related claims, claims maintenance adjustments, overpayments, and fraud identification and migration,” she wrote in a letter to the U.S. Department of Labor seeking funding help. The state’s UI technology depends on a mainframe system that dates to the bell-bottom days of the 1970s. The last modernization push began in 2005 and lasted until 2011.
Now the state, with the help of Tata Consultancy Services, is determined to bring more online self-service, automation, data access, partner integration and other tools to help applicants, UI recipients, businesses and Kansas Department of Labor employees. Fraud prevention and cybersecurity will also play big roles in the planned updates.

On the West Coast, California was battered by billions in fraud perpetrated by people who took advantage of relaxed eligibility rules used to speed economic relief during the pandemic. It happened in a lot of states, in fact, though the scale was uniquely large given California’s population of nearly 40 million residents.

Rita Gass, chief information officer for the California Employment Development Department (EDD), offered an almost dizzying update on how that state is improving its unemployment system. In fact, Gass counted some 200 IT projects over the last 18 months in her department — a good indication of all the work that needs to be done. For her agency, those changes include creating a cybersecurity division — a reflection of the increasingly tight focus at all levels of government on defending against hackers and other criminals. The general idea is to not only increase digital protections but to do so in a way that consolidates and unifies cybersecurity functions across multiple areas of EDD.

“Existing systems are stable and able to continue to process claims and provide service,” Gass said via email. “However, the customer-centered, advanced fraud mitigation and security enhancements EDD wants to achieve will require a modern platform,” and those are on the horizon with the state’s modernization push, EDDNext.

The state has budgeted for new positions, including in cybersecurity and anti-fraud efforts, and officials expect that to grow as EDDNext continues. It’s a complex project that involves integrating three systems into one. Gass offered greater detail about planned improvements underway during the project’s first phase: “We are reconfiguring call processes and incorporating multilingual functionality, building a single sign-on account for all our benefit programs, researching with customers the best way to improve our benefit applications and forms and designing enhancements, and strengthening our training materials to better support and empower staff to deliver excellent customer service.”
| 2023-03-06T00:00:00 |
2023/03/06
|
https://www.govtech.com/computing/post-covid-unemployment-systems-take-stock-and-go-modern
|
[
{
"date": "2023/03/01",
"position": 71,
"query": "AI unemployment rate"
}
] |
The impact of Fintech on inflation and unemployment
|
The impact of Fintech on inflation and unemployment: the case of Asia
|
https://www.emerald.com
|
[] |
by Y Ben Romdhane · 2024 · Cited by 35 — The following figure shows that Cambodia records the highest unemployment rate in Asia with a percentage of 8.34%, while Nepal presents the lowest rate with a ...
|
Several motivations are behind our action research. First, Fintech emerges as a value driver through digital innovation with new types of financial services. In this sense, several researchers argue that financial technologies are living organisms, given that this domain is by nature flexible, changing and not stable. Moreover, these Fintechs have the potential to carry out structural changes, modify some aspects of the system under investigation, and transform business models. Finally, Fintech is not a clearly understood notion by academia and the media. Despite the extensive research on digital technologies in financial services, only a few academic researchers have investigated the Fintech industry and its impact on the Asian economy.
The empirical investigation aims at testing the validity of the assumption that digital transformation in the financial sector plays a positive role in the relationship between Fintech and economic development, approximated by inflation and unemployment. We also test the impact of the traditional determinants of both economic indicators alongside the Fintech measures. We start with a descriptive analysis to better understand the state of the art of Fintech inclusion in Asia. Then, we test the potential impact of Fintech and traditional determinants on economic indicators such as inflation and the unemployment rate. We use generalized least squares (GLS) regression to estimate the proxies’ coefficients. We approximate Fintech through mobile money, the use of the internet to make digital transactions and payments, making digital payments, and making or receiving digital payments. On the other hand, we approximate the degree of financial inclusion through the banking system with the variables bank account, credit card and debit card ownership. We also estimate social welfare using the unemployment rate and the consumer price index. Finally, to assess the effects on these economic indicators, we include other control variables such as the exchange rate, interest rates, the human capital index, GDP growth, FDI, gross fixed capital formation and broad money.
Before rising to a peak of 8.3% in August 2018, the inflation rate in Eastern Asia was the lowest in Asia from January 2014 to January 2018. This is explained by the fall in agricultural commodity prices. These factors are significant in explaining food inflation trends both at the regional level and in specific countries. More importantly, the overall food inflation rate in the area can be explained by the slow evolution of food prices in China and India, which are considered the main drivers in Eastern and Southern Asia. In contrast, Turkey was primarily responsible for the rise in Western Asia’s inflation rate due to an increase in nonprocessed food prices, particularly fresh fruit and vegetable prices. The same growth was also captured in red meat and white meat, as well as in rice and legumes (see Figure 2 ).
Food inflation registered an upward trend in 2018, from 3.0% in January to 5.4% in August; it decreased significantly to 1.1% in December 2018. The same pattern has been observed in South and Southeast Asia. However, Western Asia has experienced consistent high food inflation of 7.6% since February 2015, rising to 10% in December 2017.
We now turn to analyzing inflation rates in Asia. Inflation is an extended, sustained increase in the general price level of goods and services in an economy over some period of time. When the price level rises, each currency unit buys fewer goods and services. As a result, inflation reflects a decrease in purchasing power per unit of money – a loss of real value in the economy’s medium of exchange and unit of account. A chief measure of price inflation is the inflation rate, which is the annualized percentage change in a general price index (usually the consumer price index) over time. During the last five years, the food inflation rate fell from 5.2% in 2014 to 3.6% in 2018. Food inflation has slowed almost continuously since 2016, from a peak of 5.4% in April 2016 to a low of 0.9% in June 2017.
Consequently, this rate keeps rising. The unemployment rate has declined in Osaka, which is considered a flourishing labor hub and even faces job scarcity, with 3.72% of people searching for jobs. Seoul has a current unemployment rate of 3.48%, explained by the slowdown in the economy and the difficulty of growing businesses. Most jobseekers have a degree or diploma and frequently fail to secure desired employment. Shanghai and Japan’s busy capital, Tokyo, suffer from heavy populations despite solid technological advancements. As a modern city, Kuala Lumpur is also witnessing slow job growth and a rise in unemployment. India’s metro cities like Kolkata, Mumbai and Bangalore still struggle with stagnant employment opportunities, due to the global economic slowdown. Finally, there is no doubt that high unemployment damages economies and disrupts social structures, so it is a concern for the government to generate jobs for this demographic.
This could be explained by persistent political unrest and a lack of investment. India’s capital, New Delhi, has an unemployment rate of approximately 4.38%. This rate can be partly explained by the fact that people are flocking there from all over the country in search of jobs.
On the other hand, Kathmandu has 8.34% of its population still looking for work and faces youth unemployment as its most pressing issue. Besides the lack of skills among job seekers, the leading causes of unemployment are the lack of promotion of agro-based industries and of the agricultural sector. Well-known for its high standard of living, Brunei’s capital has the second-highest rate of unemployed people: the public sector cannot offer enough jobs for the growing number of job seekers each year. In the Philippines, the drug war prevents the government from creating jobs for the ever-increasing population. Even though Jakarta’s inactive population is declining, 6.26% of people are still waiting for jobs. In Pakistan, the rate is higher, with Karachi having a 6.12% unemployment rate despite being the country’s leading industrial base.
A global survey from 2007 to 2017 shows that, on average, the unemployment rate is about 5.39%, slightly lower than the world average (5.9% in 2011, 2014 and 2017) (see en.statista.com ). But we notice the high standard deviation, reflecting the remarkable dispersion of the statistical sample values. The following figure shows that Cambodia records the highest unemployment rate in Asia, at 8.34%, while Nepal presents the lowest, at 1%, in 2017. Generally, both big and smaller cities suffer from the same expected trend of a scarcity of decent jobs. The global recession hitting nearly every country due to growing political instability is behind the challenge of unsustainable employment. Concerning other countries, and according to the 2017 Global Least & Most Stressful Cities Ranking report, the unemployment rate is about 4.81% in the Bangladeshi capital. According to some researchers, this rate is explained by the scarcity of skills and education (see Figure 1 ).
The GLS method minimizes a weighted sum of squared residuals. The GLS estimator remains one of the most frequently used estimators. Among its many uses is the description of data: it shows which explanatory variables best account for the variability of a variable of interest.
Because Fintech is a new and fast-growing financial service, studies in this domain are still scarce and the underlying phenomena are still emerging. In our research, we intend to examine the potential impact of Fintech on economic development. More precisely, we study the two most common indicators: unemployment, measured by the annual unemployment rate, and inflation, measured by the annual consumer price index. To do so, we gathered data on 17 Asian countries from the 2017 Global Findex database for the years 2011, 2014 and 2017 and estimated GLS regressions. This method of linear regression generalizes the ordinary least squares (OLS) estimator. It addresses situations in which OLS is no longer BLUE (the best linear unbiased estimator) because the Gauss-Markov assumptions of homoskedasticity and no serial correlation are violated. In such situations, the GLS estimator is BLUE. Specifically, GLS models allow us to completely remove the spatial correlation of residuals ( Mediero & Kjeldsen, 2014 ). The GLS regression procedure accounts for differences in available record lengths and spatial correlation in concurrent events by using an estimator of the sampling covariance matrix ( Griffis & Stedinger, 2007 ). These authors showed that the GLS model is more appropriate than OLS analysis. Thus, the GLS model is suited to our estimation.
Model specifications and results
In the present study, we intend to test two principal hypotheses through linear models as shown below:
(1) UP_{i,t} = β_0 + β_1 BA_{i,t} + β_2 IU_{i,t} + β_3 DC_{i,t} + β_4 DP_{i,t} + β_5 RDP_{i,t} + β_6 MMA_{i,t} + β_7 GFCF_{i,t} + β_8 HCI_{i,t} + β_9 GDPA_{i,t} + β_{10} GDPC_{i,t} + β_{11} FDI_{i,t} + ε_{i,t}
(2) INF_{i,t} = β_0 + β_1 BA_{i,t} + β_2 IU_{i,t} + β_3 DP_{i,t} + β_4 CC_{i,t} + β_5 MMA_{i,t} + β_6 RDP_{i,t} + β_7 BM_{i,t} + β_8 GDPA_{i,t} + β_9 GDPC_{i,t} + β_{10} RI_{i,t} + β_{11} DIR_{i,t} + β_{12} LIR_{i,t} + β_{13} EX_{i,t} + β_{14} DC_{i,t} + ε_{i,t}
β_0: intercept (constant term);
β_i: coefficients of the explanatory variables;
ε: error term.
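As a rough illustration of how equation (1) could be taken to data, the sketch below sets up the regression in Python with statsmodels. The data file and column names are placeholders rather than the authors' actual dataset, and an OLS fit is shown as the baseline before applying GLS.

```python
# A minimal sketch (not the authors' code) of how equation (1) could be
# estimated in Python. The file name and column names are illustrative
# placeholders for the panel of 17 Asian countries over 2011/2014/2017.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("asia_fintech_panel.csv")  # hypothetical input file

# Equation (1): unemployment (UP) on the Fintech proxies and controls
formula_up = ("UP ~ BA + IU + DC + DP + RDP + MMA + GFCF + HCI"
              " + GDPA + GDPC + FDI")

# OLS baseline; supplying an estimated error covariance matrix via
# smf.gls(formula_up, data=df, sigma=Sigma) gives the corresponding GLS fit.
fit = smf.ols(formula_up, data=df).fit()
print(fit.summary())
```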
We tested the two following hypotheses:
H1. In addition to the traditional determinants, Fintech, as an innovative and sophisticated technology, enables job creation through new investment opportunities.
H2. In addition to the traditional determinants, Fintech, as a crowdfunding technology, marketplace lending (MPL), and peer-to-peer (P2P) lending process, could increase monetary speed and spread inflation.
To test these two hypotheses, we employed GLS, a technique first described by Alexander Aitken in 1936 for estimating the unknown parameters of a linear regression model when there is a certain degree of correlation between the residuals. In such cases, OLS and weighted least squares can be statistically inefficient or give misleading inferences.
The GLS estimator is unbiased, consistent, efficient and asymptotically normal with:
E(β̂ | X) = β and Cov(β̂ | X) = (Xᵀ Ω⁻¹ X)⁻¹.
GLS is equivalent to applying OLS to a linearly transformed version of the data. Factor Ω = CCᵀ, using for instance the Cholesky decomposition. If we premultiply both sides of the equation
y = Xβ + ε
by C⁻¹, we get the equivalent linear model
Y* = X*β + ε*,
where Y* = C⁻¹Y, X* = C⁻¹X and ε* = C⁻¹ε.
In this model, Var(ε* | X) = C⁻¹ Ω (C⁻¹)ᵀ = I, where I is the identity matrix. Thus, we can estimate β efficiently by applying OLS to the transformed data, minimizing (Y* − X*β)ᵀ(Y* − X*β) = (Y − Xβ)ᵀ Ω⁻¹ (Y − Xβ).
This has the effect of standardizing the scale of errors and “de-correlating” them. Since OLS is applied to data with homoscedastic errors, the Gauss–Markov theorem applies, and therefore, the GLS estimate is the BLUE for β.
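The whitening argument above can be checked numerically. The following minimal sketch, using simulated data with a known (here diagonal) error covariance Ω, factors Ω by a Cholesky decomposition, applies OLS to the transformed data, and verifies that the result matches the closed-form GLS estimator. It is an illustration of the identity, not the paper's estimation code.

```python
# Numerical check of the GLS-as-whitened-OLS identity described above,
# on simulated data with a known diagonal error covariance Omega.
import numpy as np

rng = np.random.default_rng(0)
n, k = 200, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, k - 1))])
beta_true = np.array([1.0, 0.5, -0.3])

# Heteroskedastic errors: Omega diagonal with varying variances
omega = np.diag(rng.uniform(0.5, 3.0, size=n))
y = X @ beta_true + rng.multivariate_normal(np.zeros(n), omega)

# Factor Omega = C C^T (Cholesky) and premultiply the model by C^{-1}
C = np.linalg.cholesky(omega)
C_inv = np.linalg.inv(C)
X_star, y_star = C_inv @ X, C_inv @ y

# OLS on the whitened data ...
beta_whitened = np.linalg.lstsq(X_star, y_star, rcond=None)[0]

# ... equals the closed-form GLS estimator (X' Omega^-1 X)^-1 X' Omega^-1 y
omega_inv = np.linalg.inv(omega)
beta_gls = np.linalg.solve(X.T @ omega_inv @ X, X.T @ omega_inv @ y)
print(np.allclose(beta_whitened, beta_gls))  # True up to numerical error
```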
Before running our models, we present the Pearson correlation results for all the variables. Generally, pairwise correlation coefficients are low for the two models. A correlation is considered high when the coefficient exceeds 0.80; Gujarati (2003), citing Kennedy (1992), points out that multicollinearity may be problematic beyond this threshold. The low correlation coefficients indicate the absence of multicollinearity problems in the analysis.
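A simple way to reproduce this kind of multicollinearity screen is sketched below; the data file and regressor names are placeholders for the explanatory variables of models (1) and (2).

```python
# Sketch of the multicollinearity screen described above: flag any pair of
# regressors with a pairwise Pearson correlation above 0.80. File and column
# names are placeholders, not the authors' dataset.
import pandas as pd

df = pd.read_csv("asia_fintech_panel.csv")  # hypothetical input file
regressors = ["BA", "IU", "DC", "DP", "RDP", "MMA", "GFCF", "HCI"]

corr = df[regressors].corr(method="pearson")
flagged = [(a, b, round(corr.loc[a, b], 2))
           for i, a in enumerate(regressors)
           for b in regressors[i + 1:]
           if abs(corr.loc[a, b]) > 0.80]
print(flagged or "No pairwise correlation exceeds 0.80")
```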
We analyze the potential impact of Fintech on the unemployment rate, which is the most pressing social concern, especially among the young and recently graduated. The regressions’ explanatory power (R-squared) is relatively high (0.5389, 0.2757 and 0.5191), which indicates that the explanatory variables included in the two models (1 and 3) explain more than 50% of the variation in unemployment. Also, the Fisher test confirms the global significance of the results, while the Breusch-Pagan/Cook-Weisberg test for heteroscedasticity indicates the absence of such a problem (see Table 3). Our empirical results allow us to confirm the first hypothesis through the estimation of models 1 and 3, where we estimate, respectively, the combined effect of the Fintech measures and the traditional determinants of unemployment, and then the isolated effect of Fintech.
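For readers who want to replicate this type of diagnostic, the sketch below runs a Breusch-Pagan test on a fitted version of model (1) with statsmodels; the file and variable names are illustrative assumptions rather than the authors' code.

```python
# Sketch of the heteroscedasticity diagnostic reported above: a Breusch-Pagan
# test on a fitted version of model (1).
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.diagnostic import het_breuschpagan

df = pd.read_csv("asia_fintech_panel.csv")  # hypothetical input file
fit = smf.ols("UP ~ BA + IU + DC + DP + RDP + MMA + GFCF + HCI"
              " + GDPA + GDPC + FDI", data=df).fit()

lm_stat, lm_pvalue, f_stat, f_pvalue = het_breuschpagan(fit.resid,
                                                        fit.model.exog)
# A large p-value is consistent with "no heteroscedasticity problem".
print(f"Breusch-Pagan LM = {lm_stat:.2f}, p-value = {lm_pvalue:.3f}")
```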
Regarding the traditional determinants of unemployment, we find that annual GDP growth has a negative impact, significant at the 5% level, while annual per capita GDP growth has a positive impact, significant at the 10% level. On the other hand, the human capital index has a negative effect on unemployment, significant at the 10% level. We can conclude that spending on health is necessary to improve labor productivity. As a result, strong human capital and better economic growth rates go hand in hand with higher levels of employment opportunities.
Furthermore, the results emphasize the positive impact of Fintech on social indicators through job opportunities. But we notice some divergence in the direction of the relationship between the Fintech proxies and the unemployment rate. While owning a bank account and making or receiving digital payments are negative and significant at the 1% and 5% levels, respectively, the variables “internet use to pay” and “making digital payments” are positive and significant at the 10% and 5% levels, respectively.
These disparities could be explained in part by the nature of Fintech applications. This means that the unemployment rate will decrease significantly unless there is active use of these financial technologies, such as making digital transactions. Fintech, being a greenfield industry for expanding new innovative ideas and start-ups, allows job creation and consequently helps to reduce the unemployment rate. Kammoun et al. (2020) reported consistent findings: financial and investment freedom flourishes and expands whenever there is an expansion of Fintech, with a direct impact especially on the hiring of young graduates. Nepote-Cit, Ruberti, and Tran (2018) have shown that technological development based on application innovations such as Fintech supports investment and therefore lowers unemployment rates. Thus, our results have implications for decision-makers such as governments and entrepreneurs. They are invited to invest in digital solutions, since the digitization of processes saves time: it allows low-value-added, often time-consuming tasks to be managed and avoids the phenomenon of “re-work.” These types of investments need a qualified workforce. On the one hand, digitization responds to the growing need of large companies to increase efficiency. On the other hand, it responds to customers’ expectations (through automated appointment systems, online information and sales platforms, interactive terminals and tablets, etc.) and to employees’ expectations (e-learning, intelligent production of information documentation, tablets, mobiles, collaborative project monitoring tools, etc.).
Emerging markets represent the greatest opportunities for Fintech companies. “Southeast Asia is particularly attractive because it has a very tech-savvy population and governments that welcome new technologies,” said Stephania Barbaglio, director of Cassiopeia. This service agency works on various projects in emerging markets. Asian nations are reportedly among the hottest spots for Fintech opportunities. Growing economic activity and government incentives are attracting FinTech solutions to multiple sectors (see Figures A1 and A2 in appendix). Reports show an optimistic outlook for companies in digital banking and payments, artificial intelligence (AI) and blockchain.
A report from Deloitte published on December 31, 2018, estimated that Fintech investments in Southeast Asian countries in 2018 have exceeded the $5.7 bn invested in 2017 by up to 30%. Google’s economy SEA (Southeast Asia) report anticipated the Internet economy in Southeast Asia2 to have increased by 44% in 2018. This robust growth will continue as the Fintech market is projected to reach US $72 bn by 2020. Investment in technology is substantial in the area: technology firms represented 40% of total private equity deals in Southeast Asia in 2017. Over $2 bn was invested in Asian tech companies in Q1 2018, and more than 60% of Southeast Asian investors say technology was their focus area in 2018–19. Fintech is the largest subsector, followed by AI and Blockchain.
Considering the impact of Fintech and the fundamental determinants on inflation, we find a relatively high explanatory power (R-squared) for regressions 1 and 3 (0.5144 and 0.4091), where we estimate, respectively, the combined effect of Fintech and the traditional determinants of inflation and the isolated effect of Fintech. These values indicate that the explanatory variables included in our models explain more than 40% of the variation in inflation. The Fisher test confirms the global significance of the results, while the Breusch-Pagan/Cook-Weisberg test indicates the absence of heteroscedasticity.
Considering the effect of the traditional determinants of inflation, we find in the first and second models that the exchange rate, the real interest rate, the deposit interest rate, annual GDP growth and broad money (as a share of GDP) have a negative impact on inflation. These findings allow us to draw interesting implications. Firstly, inflation targeting should be seen as an intermediate objective of monetary policy because economic activities are not significantly inflationary. Secondly, the inflation-targeting strategy can be applied in the short to medium term, provided there is a very strong correlation between inflation and real activity, so as to alternate inflation targeting with incentives to finance investment projects, which will promote the development of an industrial fabric prior to monetary sovereignty. To improve the economic effect of monetary policy, it is necessary to combat the high variability of the interest rate, which is a source of unexpected and severe inflation. The inability to forecast could cause an economic recession.
Forecasting inflation is challenging in emerging markets, where trade and monetary regimes have shifted. In the Asian countries studied, we find that passive digital transactions, measured by made digital payments, internet use to pay and debit card use, significantly increase the level of inflation (at the 10%, 10% and 5% levels, respectively). By contrast, financial inclusion through access to bank accounts, reflecting the ability of individuals and businesses to reach a range of financial products and services, reduces inflation (significant at the 1% level). The relationship between mobile money and inflation is highlighted by Aron, Muellbauer, and Sebudde (2015). Their sample covers Uganda from 1994 to 2003 and uses multivariate models with equilibrium correction terms. They found that the domestic money supply had no effect on food and fuel inflation and only a negligible impact on nonfood inflation. They also show that these models could be helpful for short-term inflation forecasting, but they could not find any serious evidence of a link between mobile money and inflation. During booms, the money supply grows much faster than output; such episodes can then be countered with a contractionary policy, thus reducing inflation.
Besides, the results show that the active use of Fintech in both directions of transactions (selling and buying), approximated by made or received digital payments, helps to control inflation through a significant reducing effect at the 10% level. Along the same lines, we find, interestingly, that debit card use increases inflation while credit card use has a negative effect.
According to Anagnostopoulos (2018), “The underlying consumer impact, on the one hand, is expected gains in overall consumer welfare due to the reduced costs of banking and investment; increased access and convenience; and personal banking offered by the new contestants.” Specifically, we find that having a bank account and actively using digital technologies decrease the inflation rate, whereas passive use through the simple act of buying or consuming (internet use to pay, made digital payments and debit card use) has a positive and significant impact at the 10% level. Those findings could be explained, in part, by the divergent results reported in the literature on the impact of Fintech on inflation. To illustrate the behavior of inflation, we should consider that mobile money is a recent financial innovation offering financial transaction services via a mobile phone, including to unbanked people. Mohamed Sheikh, Oyagi, and Tirimba (2015) studied the relationship between effective inflation control and mobile money. They make several recommendations for mobile money transfers, such as the use of carefully crafted policies and procedures. Fintech could thus serve as an inflation control tool for central banks. In the same vein, Walker (2016) postulates that mobile phones have an important impact at both the micro and macro levels. Several studies support the view that financial innovations reduce the incompleteness of markets, which means that monetary authorities could usefully move from headline inflation to core inflation (see Table 4).
Compatible with our findings, mobile money does not lead to high inflationary risks. Mawejje and Lakuma (2017) found that active use of Fintech has a reducing effect on inflation. This result is also explained by Nampewo and Opolot (2016), who found that there is an increase in money speed when mobile money does not increase value-added. Mobile money can affect interest rates as it leads to the creation of credit by commercial banks. Another explanation has been proposed by Erosa and Ventura (2002). According to the authors, financial innovation determines a household’s portfolio choice of monetary and nonmonetary assets and liabilities, thereby reducing the cost of holding nonmonetary debt. The empirical studies by Mulligan and Sala-i-Martin (2000) and Attanasio, Guiso, and Jappelli (2002) corroborate their findings. These results are confirmed with the study by Narayan and Sahminan (2018) which found that digitalization plays an important role in lowering inflation, as it reduces costs. On the other hand, Ben Romdhane Loukil et al. (2021) confirmed that mobile money has a positive effect on rising inflation for MENA countries. Overall economic activity will be supported, since mobile money leads to economic efficiency through a reduction in transaction costs and a better allocation of resources and credit.
The table indicates that if inflation is coming from external trade (imported inflation), it is exchange rate stabilization that will be important, provided that the currency is not convertible at a known and fixed parity (the need for its own currency). In this case, policymakers need to prevent inflation from becoming excessive by addressing existing and potential supply bottlenecks and correcting sectoral imbalances that could reinforce inflationary pressures.
Our results motivate policymakers to support and encourage the spread of mobile phones and active digital transactions in the countries of the Asian Community. The dependence of financial institutions on third-party data service providers (for example, data provision, cloud storage and analysis, and physical connectivity) for primary operations is currently estimated to be low. However, this deserves continued attention from the authorities.
Consequently, these new entrants to the Fintech financial services sector could significantly change the world of financial service providers since competition in the areas of loans, payments, insurance, trade and other services can create a more efficient and resilient financial system. The government is partnering with big banks and Fintech to speed up payments. Ultimately, this increased collaboration could lead to a more seamless distribution of government benefits, greater access to lending programs, and even a digitized currency. “Public-private partnerships are typically finance-type arrangements, like building a toll road or bridge,” said Miguel Gamiño, Mastercard’s head of global cities. “The pandemic forced some conversations to happen more quickly than what would’ve happened in a pre-COVID world, and now we’re continuing to develop beyond emergency scenarios.”
However, increased competition could also pressure financial institutions’ profitability, leading to additional risk-taking by incumbents to maintain margins. Further research could be applied to understand these profound impacts better.
| 2023-03-01T00:00:00 |
https://www.emerald.com/insight/content/doi/10.1108/agjsr-08-2022-0146/full/html
|
[
{
"date": "2023/03/01",
"position": 100,
"query": "AI unemployment rate"
}
] |
|
Is Automation Killing the Human Touch in the Workplace? ...
|
Is Automation Killing the Human Touch in the Workplace? Learn How Employee-Led Software is Fighting Back!
|
https://www.linkedin.com
|
[
"Signity Software Solutions Pvt. Ltd.",
"M. Nadia Vincent",
"Mba",
"Muz Choudhury",
"Fire Service",
"Boxer",
"Life Coaching",
"I Have Also Developed Many Properties In South Manchester",
"Being A Mentor To Many Within My Community. Life Started As An Artist",
"Mentor"
] |
Statistics show that automation technologies are rapidly accelerating, with an estimated 85 million jobs being displaced by automation by 2025. McKinsey ...
|
As technology continues to advance at an unprecedented pace, businesses are relying on software more than ever before to automate tasks, increase efficiency and reduce costs. From machine learning algorithms to robotic process automation, the role of software in the workplace is all about speed and accuracy. However, this increased focus on automation comes with a risk of losing the human element in the workplace.
While workplace software has undoubtedly made many tasks faster and more streamlined, it has also made them feel impersonal and robotic, leaving employees feeling disconnected and disengaged. As we move towards an age of automation, we must consider the importance of employee-led software that takes into account the importance of human interaction and creates a more personal and engaging experience.
Statistics show that automation technologies are rapidly accelerating, with an estimated 85 million jobs being displaced by automation by 2025. McKinsey & Company also reports that up to 45% of current work activities can be automated using existing technologies. These numbers highlight the trend of companies prioritising automation to increase efficiency, reduce costs, and stay competitive in the market.
However, the importance of employee experience cannot be overlooked. A study by Deloitte found that only 22% of executives said they were excellent at building a differentiated employee experience. Moreover, by 2025, 75% of the workforce will be comprised of digital natives who have grown up with technology and have high expectations for digital experiences at work, according to a study by Gartner.
These statistics emphasise the growing need for companies to prioritise the digital employee experience to attract and retain top talent. Accenture found that 80% of executives believe that AI can improve the employee experience by automating tasks and providing personalised recommendations.
However, while digital employee experience initiatives can have a significant impact on productivity and employee satisfaction, they must be designed with the human element in mind. PwC found that 75% of employees still prefer to interact with a human rather than a chatbot or virtual assistant. This highlights the importance of finding the right balance between automation and human interaction.
Real-life examples of automation technologies being implemented to streamline processes can be seen in the retail and financial services industries. Walmart and Amazon are using autonomous robots to monitor inventory levels, restock shelves and speed up the delivery process, while banks are using machine learning algorithms to detect fraudulent transactions and improve customer service through chatbots and automated call centres.
However, these initiatives risk losing the human element, which is essential to creating a fulfilling and productive workplace. By prioritising employee-led software that fosters collaboration and engagement, companies can create a more welcoming and engaging workplace.
| 2023-03-01T00:00:00 |
https://www.linkedin.com/pulse/automation-killing-human-touch-workplace-learn-how-software-hanley
|
[
{
"date": "2023/03/01",
"position": 7,
"query": "job automation statistics"
}
] |
|
[Q] How do you think AI will impact the role of staticians in ...
|
The heart of the internet
|
https://www.reddit.com
|
[] |
How do you think AI will impact our role as staticians? Do you think it will lead to new statistical methods that will improve our work? Will AI make it easier?
|
I ask as an undergraduate in statistics, close to finishing it
How do you think AI will impact our role as statisticians? Do you think it will lead to new statistical methods that will improve our work? Will AI make it easier? Will it take our jobs (I hope not lol)?
| 2023-03-01T00:00:00 |
https://www.reddit.com/r/statistics/comments/11ynaey/q_how_do_you_think_ai_will_impact_the_role_of/
|
[
{
"date": "2023/03/01",
"position": 22,
"query": "job automation statistics"
},
{
"date": "2023/03/01",
"position": 53,
"query": "AI labor market trends"
}
] |
|
How will Chat GPT & generative AI impact our jobs?
|
How will Chat GPT & generative AI impact our jobs?
|
https://abhanaismith.medium.com
|
[
"Abha Malpani Naismith"
] |
Research shows that only 10% of jobs will completely disappear, because those jobs can be fully automated. Other jobs will change, as we will have to learn how ...
|
How will Chat GPT & generative AI impact our jobs? Abha Malpani Naismith · Mar 26, 2023
I am a writer by heart and a communicator by profession. Both are industries that generative AI has disrupted like a tornado over the past few months.
Am I going to be replaced in the workforce by generative AI?
When I asked this question to the Notion AI tool, this is the answer I got:
“It’s difficult to predict the future, but it’s possible that AI could replace certain aspects of your job in the near future. However, AI is not capable of replicating the creativity, empathy, and human touch that comes with being a writer and communicator. It’s important to stay up-to-date with new developments in the industry and continue to develop your skills and expertise to stay relevant in the field.”
Please allow me to flex my human muscle and add some value to this answer.
Jobs lost, jobs gained, jobs changed
The best answer I found to this question is from James Manyika, now SVP at Google and formerly at Mckinsey, who has led detailed research into the impact of AI and jobs automation:
“I would summarize it with this notion of jobs lost, jobs gained, and jobs changed — because I think all those three things will happen. How much they happen is entirely a function of sectors, of job occupation categories, of the progress we’ll see in AI, robotics, and other automation systems,” he says in a recent interview.
Research shows that only 10% of jobs will completely disappear, because those jobs can be fully automated.
Other jobs will change, as we will have to learn how to work with AI and machine learning. Research shows that of 2000+ work activities under 800+ occupations listed by the Bureau of Labor Statistics, about 60% of those occupation categories will see 1/3rd of their constituent tasks automated in the coming decades.
And, new jobs — occupations that don’t yet exist but will exist because of the impact of technology — will be gained.
So in my case, as a writer and communications professional, my job will change; or the way I do my job will change, rather. And this will likely be the case for majority of us.
We need to learn how to maximize the use of generative AI tools so that we can do our jobs better and faster.
Here are some ways to use generative AI to be more productive in your job:
If you have a job where you are writing lots of emails: Entering simple prompts in tools like Chat GPT or Notion AI and letting it write the emails for you is a huge time-saver.
Here is an example. I put this prompt in Notion’s AI:
Write me an email to procurement expressing my disagreement with the supplier they have asked us to go with.
This is the unedited outcome:
Dear Procurement Team,
I would like to express my disagreement with the decision to choose [Supplier Name] as our supplier for [Product/Service]. After reviewing their proposal, I believe that there are better options available in terms of quality, price, and overall value.
I understand that the procurement process can be complex, and I appreciate the effort that the team has put into evaluating different suppliers. However, I strongly believe that we should not compromise on the quality of the products/services we procure, as this can have a significant impact on our business operations and reputation.
I would like to request that we revisit our decision and consider other options before moving forward with [Supplier Name]. I am happy to provide additional information and support in this process.
Thank you for your understanding.
Best regards,
[Your Name]
Pretty amazing, right? It took 1 second for the tool to give me this, and it actually does not need edits!
If you have a job where you need to write a lot: It is remarkably easy how quickly you can write using generative AI. So if you are in marketing or communications, and need to write something quickly in which you do not need to put your heart and soul, use generative AI.
Recommended tools: I personally use Jasper.ai and I think it’s brilliant, although not cheap. Some other tools you can try are the Notion AI tool, copy.ai, and of course Chat GPT.
If you need quick content summaries: Say you have not had the time to read a report before a meeting, copy-paste the text of it into an AI tool, and it will give you a short summary of the report in a second. At least you will have some idea of it before your meeting. Similarly, if you need to give your boss or management a summary of something quickly, use the tool and you will be able to whip it up in less than a few minutes.
Idea generation: Every job requires coming up with ideas. AI tools can make things faster; imagine it as a team you are brainstorming with.
Here is a prompt I put into Notion’s AI:
Give me some ideas on how to encourage my team to recycle.
Below is what I got, unedited.
Encouraging your team to recycle is an important way to reduce waste and promote sustainability. Here are some ideas to consider:
Provide clear and visible recycling bins throughout the workspace, making it easy for team members to recycle.
Offer incentives for team members who consistently recycle or come up with innovative ways to reduce waste.
Educate your team on the importance of recycling and its impact on the environment.
Consider implementing a composting program in your workplace to reduce food waste.
Partner with local recycling programs or organizations to promote sustainability efforts beyond the workplace.
Not a bad place to begin, hey?
Social media content creation: Tweets, Linkedin posts, Instagram captions, it will write anything for you. It will even create campaigns for you!
Prompt: Write me a Linkedin post calling for applications for the job of a social media executive with 3 years of work experience.
See unedited outcome below.
We’re hiring! We’re looking for a social media executive with at least 3 years of experience to join our team. Do you have a passion for social media and a track record of creating engaging content? Apply now and help us take our social media presence to the next level! #socialmedia #hiring #executiveposition
Improve your writing: Look at all the things Notion’s AI tool can help you with!
Image & video creation for marketing: You can write the description of an image you want and generative AI will create it for you. Same goes for videos — I haven’t tried this yet so not sure of the quality.
Prompt (Jasper Art): Create an image of poodle that is wearing a hat and eating an ice cream
Voila! Who needs photoshop?
Text to audio creation for voice-overs: You no longer need to schedule and pay $$$ for voice-overs for any video, recordings or presentations.
Try https://murf.ai/ to explore AI voices for any requirement.
If you are a designer, learn to use tools to design better: You can create a brand identity you love with the power of AI. Try https://looka.com/ to make a logo and brand book in minutes.
Edit images without photoshop: We all have situations at work or with our kids where we could do with a quick edit of an image. This tool helps you remove any unwanted objects or people from photos in seconds: https://cleanup.pictures/.
I could go on and on. Here is a great resource of more tools that you can start with. Best thing is, most of them are free to try.
Limitations of generative AI
The quality and accuracy of AI and machine learning tools are hit and miss. Chat GPT, for example, works on pre-existing, specific data-sets; it can only offer advice and knowledge on issues within its domain. It is often only as good as the prompt you give it, and it still cannot answer all questions.
For example, it can only think within pre-specified parameters. Stephen Pinker has a great article on this that I recommend you read when you have time. An example from the article:
When I asked ChatGPT, “If Mabel was alive at 9 a.m. and 5 p.m., was she alive at noon?” it responded, “It was not specified whether Mabel was alive at noon. She’s known to be alive at 9 and 5, but there’s no information provided about her being alive at noon.”
Worth reading on the limitations of AI is the paperclip maximizer, a thought experiment by Nick Bostrom in 2003 demonstrating the extreme intelligence of AI as well as its profound stupidity. A fun and easy to read version of it is here.
Anyway, this is an unintended super long article which I clearly got carried away writing, so my last words on it for now :)
We must learn to work with new technologies
Being able to do things quickly and communicate effectively is a fundamental requirement no matter what job you are doing.
We need to learn to work with the tools that are available to us or we will become redundant.
| 2023-03-26T00:00:00 |
2023/03/26
|
https://abhanaismith.medium.com/how-will-chat-gpt-generative-ai-impact-our-jobs-e056757c69f6
|
[
{
"date": "2023/03/01",
"position": 33,
"query": "job automation statistics"
},
{
"date": "2023/03/01",
"position": 8,
"query": "ChatGPT employment impact"
}
] |
Industrial Robots and Regional Fertility in European ...
|
Industrial Robots and Regional Fertility in European Countries
|
https://pmc.ncbi.nlm.nih.gov
|
[
"Anna Matysiak",
"Interdisciplinary Centre For Labour Market",
"Family Dynamics",
"Faculty Of Economic Sciences",
"University Of Warsaw",
"Warsaw",
"Daniela Bellani",
"Department Of Political",
"Social Sciences",
"University Of Florence"
] |
by A Matysiak · 2023 · Cited by 27 — Our findings suggest robots tend to exert a negative impact on fertility in highly industrialised regions, regions with relatively low educated populations.
|
Abstract In this study, we examine whether the long-term structural changes in the labour market, driven by automation, affect fertility. The adoption of industrial robots is used as a proxy for these changes. It has tripled since the mid-1990s in the EU, tremendously changing the conditions of participating in the labour market. On the one hand, new jobs are created, benefitting largely the highly skilled workers. On the other hand, the growing turnover in the labour market and changing content of jobs induce fears of job displacement and make workers continuously adjust to new requirements (reskill, upskill, increase work efforts). The consequences of these changes are particularly strong for the employment and earning prospects of low and middle-educated workers. Our focus is on six European countries: Czechia, France, Germany, Italy, Poland and the UK. We link regional data on fertility and employment structures by industry from Eurostat (NUTS-2) with data on robot adoption from the International Federation of Robotics. We estimate fixed effects linear models with instrumental variables in order to account for the external shocks which may affect fertility and robot adoption in parallel. Our findings suggest robots tend to exert a negative impact on fertility in highly industrialised regions, regions with relatively low educated populations and those which are technologically less advanced. At the same time, better educated and prospering regions may even experience fertility improvements as a result of technological change. The family and labour market institutions of the country may further moderate these effects. Supplementary Information The online version contains supplementary material available at 10.1007/s10680-023-09657-4. Keywords: Fertility, Employment, Industrial robots, Technological change, Europe
Introduction Over the last two decades, technological advancements in production, including cutting-edge industrial robots, have tremendously transformed the labour markets in advanced market economies, creating new career opportunities, but also inducing fears of job displacement (OECD, 2019). Only in the EU, the stock of industrial robots per 10.000 manufacturing workers has tripled since the mid-1990s reaching 114 in 2019 (International Federation of Robotics, 2020). Because of the scale and speed of automation and its possible consequences for workers, there has been an explosion of studies on how technological advancements in production affect employment (Acemoglu & Restrepo, 2020; Graetz & Michaels, 2018), wages (Dauth et al., 2021), social and economic inequalities (Aksoy et al., 2021; de Vries et al., 2020) and more recently workers’ physical and mental health (Abeliansky and Beulman, 2019; Gihleb et al., 2022). With this study, we contribute to this discussion by examining how automation, and more specifically the adoption of industrial robots, influences fertility, an outcome which so far has been largely neglected in the scientific debate. In our view, automation may affect fertility since it alters the conditions of participating in the labour market and with it the economic well-being of the family and the strategies of its adult members adopted to combine paid work with care. Past research has clearly demonstrated that individuals tend to postpone or even abstain from having children during economic downturns (Cherlin et al., 2013; Sobotka et al., 2011), usually in response to an increase in unemployment and growing instability of employment (Adsera, 2004; Bellani, 2020; Matysiak et al., 2021; Schneider, 2015). The feeling of economic uncertainty may also hinder fertility decisions irrespective of the real economic conditions (Vignoli et al., 2020). Notably, fertility usually declines more strongly in response to worsening of employment prospects for men and young workers as well as in countries offering weaker social protection in case of a job loss (Alderotti et al., 2021; Comolli, 2017). Past research has largely concentrated on examining fertility consequences of short-term changes in labour market conditions, caused by cyclical swings in the economy and reflected in upward and downward moves in (un)employment or work conditions. Much less has been done on how fertility reacts to long-term structural changes in the labour markets, driven, for instance, by globalisation or technological change. These changes may not necessarily affect (un)employment, but rather change the demand for workers’ skills. They may increase uncertainty, push workers into poorly paid low quality jobs or increase workers’ effort to catch up with quickly changing work guidelines and skill requirements (Autor et al., 2006; Green et al., 2022). In fact, Seltzer (2019) demonstrated that the cyclical approach performed very well in predicting a decline in fertility rates during the Great Recession in the USA, but completely failed in its aftermath when envisioning a fertility rebound. This study contributes to the discussion on labour markets and fertility by investigating how the long-term structural changes in the labour market, driven by robot adoption, affect regional fertility. Robot adoption mirrors technological innovation and is a marker of economic and labour market transformation (Dottori, 2021). 
Following the International Federation of Robotics, we define industrial robots as fully autonomous machines that do not require a human operator (Jurkat et al., 2022). So far, little attention has been paid to this topic in fertility research. A notable exception among the published papers is the study by Anelli et al. (2021) who investigated the effects of the adoption of industrial robots on marriage and fertility in the USA. Our focus is on Europe, where, despite large cross-country diversity, workers are much better protected against job loss or poverty (Esping Andersen, 1990). By exploiting variation in robot penetration across NUTS-2 regions, we examine how robotisation influenced fertility in six European countries, namely Czechia, Germany, France, Italy, Poland and the UK. These countries differ in the penetration of automation, labour market and family policy regimes and gender norms. They also constitute good cases for examination as they provide a reasonable number of NUTS-2 regions for obtaining robust empirical findings (with Czechia pooled together with Poland).
Literature Review Automation, Employment and Economic Uncertainty The fear that automation will lead to a massive job destruction has been a concern for at least two centuries since the first industrial revolution began (OECD, 2019). Even though the industrial revolution didn’t, in the end, lead to unemployment, but to an expansion of job opportunities and improvement in living standards, fear of automation persisted. In the twenty-first century, we are facing a new wave of anxiety that robots will take over our jobs—this time it is about cutting-edge industrial robots (Dekker et al., 2017). The adoption of robots and machines will indeed change the ways we work and change the demand for skills. Some jobs, in particular those which require performing routine tasks, will likely be destroyed or substantially changed (Acemoglu & Autor, 2011; Acemoglu & Restrepo, 2020). In the OECD countries, it was estimated that around 10–14% of jobs will be fully replaced by robots and for 25%—32% around 50–70% of tasks will be automated in the next two decades (Arntz et al., 2017; Nedelkoska & Quintini, 2018). Yet, automation does not only destroy jobs but also increases productivity and thereby facilitates job creation. The newly created jobs often require different skills, however. Most often there are non-routine highly cognitive skills which can be implemented in the expanding high tech sector, education or highly specialised customer service (Acemoglu & Autor, 2011). New jobs are also created in the lower-skill service sector (e.g. delivery workers, drivers), but they often offer poor social protection, are low paid and/or unstable (Autor, 2019). Empirical research demonstrated the effects of automation on labour market outcomes to be unequivocal and clearly depend on workers’ education and skills, the sector they are employed in and the overall economic and institutional environment. Automation seems to exert particularly negative effects on employment and/or earning opportunities of low-and-middle educated workers, both in the USA (Acemoglu & Restrepo, 2020) and in Europe, though in the latter to a lower extent (Graetz & Michaels, 2018). Robots usually destroy jobs in manufacturing (Jung & Lim, 2020) but create new jobs in the service sector (for the US case see Acemoglu & Restrepo, 2020; for the UK see Kariel, 2021). As companies which adopt robots increase their productivity, they can invest more resources into product development, sales and marketing. Robots are thus indirectly increasing demand for workers who can fill in the jobs in highly specialised customer service and product development, not even mentioning the high tech workers who are able to design and operate industrial robots. Indeed, it was demonstrated that highly educated workers, performing nonroutine cognitive tasks, usually benefit from the ongoing changes (de Vries et al., 2020). Automation is also more likely to bring increases in employment in companies and regions which are more technologically advanced and better prepared to embrace the benefits brought about by technological progress. It was demonstrated, for instance, that regions with higher shares of knowledge and creative workers are better able to adapt to changes driven by digitalisation and thus are less vulnerable to automation shocks (Crowley et al., 2021). Last but not least, the effects of robotisation on employment and earnings may differ across countries and depend on their institutional settings. 
The labour substituting effect of robots tends to be stronger in countries with higher labour costs (Bachmann et al., 2022; Jung & Lim, 2020) and is argued to increase with a decline in employment protection legislation (Traverso et al., 2022). Much less is known about how automation affects men’s versus women’s employment and earning opportunities, with few empirical findings suggesting mixed results. While Acemoglu and Restrepo (2020) find no gender differences in automation effects in the USA, Brussevich et al. (2019) argue that women in OECD countries may be more exposed to automation as they are more often employed in jobs which involve routine tasks (see also Piasna & Drahokoupil, 2017 for the same conclusions for the EU). Robotisation also seems to increase gender wage inequalities in Europe by disproportionately benefiting men in medium- and high-skill occupations (Aksoy et al., 2021). At the same time, however, there is evidence that young generations of women are moving away from the routine-intense jobs more quickly than men and take non-routine jobs in the service sector (Black & Spitz-Oener, 2010; Cortes et al., 2021) and that the pace of such job reallocation is faster in countries more advanced in robotisation (Aksoy et al., 2021). Overall, whether the new wave of automation will indeed lead to declines in employment is not yet clear. There is evidence, however, that it increases turnover in the labour market, requires readjustment from workers and increases uncertainty. The aforementioned studies by Arntz et al (2017) and Nedelkoska and Quinitni (2018) demonstrate that robots substantially change the task content of jobs, modifying the demand for skills and requiring employees to acquire new qualifications and follow new guidelines. A study from Norway found that around 40% of workers fear being replaced by a machine, which lowers their job satisfaction (Schwabe & Castellacci, 2020). Abeliansky and Beulman (2019) demonstrated negative effects of robot adoption on workers’ mental health in Germany. Robot adoption was also found to increase death rates due to substance and alcohol abuse (Gihleb et al., 2022; O’Brien et al., 2022). Finally, the fear of robots was found to be particularly pronounced among the blue collar workers, most exposed to negative effects of automation, and in countries with weaker safety nets (Dekker et al., 2017). Automation and Fertility A large body of the literature has provided evidence that weakening employment prospects, increase in unemployment and economic uncertainty lead to postponement of fertility or even lower fertility rates (Adsera, 2004; Comolli, 2017; Matysiak et al., 2021; Schneider, 2015). This is particularly true in countries offering weak safety nets for the unemployed (Mills et al., 2005). Growing instability of employment has also more negative consequences on fertility when it concerns men than women who, instead, may treat unemployment as an opportunity window for childbearing (Kreyenfeld & Andersson, 2014; Schmitt, 2012). These gender differences in the role of unemployment or precarious employment for fertility are, however, gradually in decline with an increase in women’s education, changing gender roles and growing instability of men’s employment (Oppenheimer, 1997). In a meta-study Alderotti et al. 
(2021) showed that in countries with high gender equality, such as Nordic Europe, or countries characterised by strongly unstable employment patterns among men, such as Southern Europe, women no longer use unemployment in order to have children. The same study showed that temporary contracts depress fertility more strongly if they are held by women than men. Past research on labour market and fertility has, however, largely relied on such labour market indicators, such as (un)employment rate, wages or proportion of persons on specific contracts (e.g. temporary or part time). These indicators excel in identifying short-term cyclical economic conditions, but are less able to capture long-term structural changes in the labour markets, driven for instance by globalisation or technological change. These changes may not necessarily affect (un)employment, but may require workers to adjust to the changing demand for skills. They may thus increase uncertainty and workers’ effort to adapt new work guidelines and protocols or undertake training. New employment opportunities may open in front of some workers, while others may be pushed into poorly paid low quality jobs (Autor et al., 2006; Green et al., 2022). In particular, Seltzer (2019) showed that the cyclical approach performed very well in predicting a decline in fertility rates during the Great Recession in the USA, but failed when envisioning a fertility rebound in its aftermath. Instead, fertility continued to fall despite a steep decline in unemployment in the post-crisis period (until the breakdown of the Covid-19 pandemic). This phenomenon was apparently driven by long-term structural changes in the labour market, caused by globalisation and technological change. These changes started already before the Great Recession but accelerated throughout it as companies which implemented labour replacing technologies during the economic crisis were most likely to survive it (Hershbein & Kahn, 2018). With time, the displaced workers found employment in the lower-skill service sector, which resulted in a decline in unemployment, but these jobs were of lower quality, at least in the USA (Seltzer, 2019). So far few studies have looked at how these long-term structural transformations in the labour market affect fertility. Among them the majority concentrated on changes caused by globalisation, in particular the detrimental role of import competition with China for employment opportunities of middle-skilled workers, mostly male, in goods-producing industries. Studies consistently showed that increased import competition led to a decline in fertility, largely by a declining marriage value of men (Autor et al., 2019; Giuntella et al., 2022; Piriu, 2022). Researchers’ interest in how technology-driven labour market changes affect fertility has been even scarcer. On one hand, it has been shown that technological complexity, that reflects the capacity to innovate, develop and create job opportunities, is positively associated with fertility (Innocenti et al., 2021). This is because it fosters a fertility-friendly context characterised by better employment prospects. On the other hand, however, technological upgrading driven by automation is likely to increase turnover in the labour market, increase uncertainty and force workers to re-skill, which, in turn, may decrease fertility. In the only published empirical study on the effect of robotisation on fertility, Anelli et al. 
(2021) demonstrate that an increase in the adoption of industrial robots in the USA led to an increase in cohabitation and divorce and a decline—though not significant—in the number of marriages. Their findings also point to a decline in marital fertility and an increase in out-of-wedlock births.
Country Context Our study is situated in six European countries, namely Czechia, Germany, France, Italy, Poland and the UK. This country choice is driven by the desire to cover European countries which represent different labour market and family policy regimes and which also differ in the advancement of robot adoption. At the same time, we faced data restrictions. Conducting a regional level analysis, we were restricted to the choice of only bigger European countries with a large number of NUTS-2 regions. Furthermore, due to the choice of the IV strategy (for details, see Sect. 5.2) we were not able to pool European countries into groups (except for Czechia and Poland). Among the selected countries France and UK have had the highest fertility for about four decades (with TFR oscillating between 1.7 and 2.0), though on a slow but gradual decline since the onset of the Great Recession. Germany and Italy had been the lowest low fertility countries (with TFR below 1.35) since the mid-1980s and Czechia and Poland since the late 1990s/early 2000s. However, while Germany and in particular Czechia experienced some increase in fertility over the last 15 years, Italy and Poland remained at the fairly low levels with TFR oscillating between 1.25 and 1.45 (Eurostat, 2022). The analysed countries also represent different welfare regimes which define the extent to which workers are protected against a job loss and supported in case of unemployment, all of which may matter for their fertility decisions (Adsera, 2005; Bastianelli et al., 2022). Germany and France are typically classified into the conservative/employment-centred regimes (Amable, 2003; Esping-Andersen, 1990; Walther, 2006), based on strong employment protection and coordinated bargaining systems which allow for a “solidaristic wage setting” (Amable, 2003: 15). The two countries tend to offer generous income support for the unemployed and institutional support in job search (Tamesberger, 2017). Employment protection is also high in Italy, but is strictly directed at protecting workers on permanent contracts, leaving workers on temporary contracts often trapped in the secondary labour market (Pinelli et al., 2017). The UK, instead, is an example of liberal welfare state (Esping-Andersen, 1990), with a very low employment protection and low public support for the unemployed, offered only to those in the highest need (Caroleo & Pastore, 2007). Finally, Czechia and Poland belong to the post-socialist transitional regime with strong market orientation, low levels of state intervention, weak unions and limited support for the unemployed (Visser, 2011), providing rather low support for the unemployed (Tamesberger, 2017). They also display much lower labour costs than the remaining countries (Eurostat, 2022). Family policies and the gender norms represent another element of the country context which may affect fertility responses to the changing labour market conditions. Whereas France stands out for its very good childcare coverage, Germany for a long time adhered to a modernised male breadwinner policy and only recently started to invest in childcare (Fagnani, 2012). Consequently, while it is common for mothers in France to work full time, many women in Germany switch to part-time jobs after they become mothers (Fagnani, 2007). In Italy, childcare is seen as a private issue, which results in strong gender inequalities both in paid and unpaid work (Menniti et al., 2015). 
Childcare provision in the UK is also weak and care usually has to be purchased on the market (Yerkes & Javornik, 2019). Mothers usually work part-time or make use of flexible work arrangements which are available in the UK on a wider scale than in other studied countries (Chung & Horst, 2018). Poland and Czechia also display low childcare provision (Szelewa & Polakowski, 2008). Interestingly, mothers usually return to full-time employment after birth though in Czechia much later than in Poland (Matysiak, 2011). Finally, the analysed countries differ in the robot penetration. The process of robot adoption in the old EU member states (Germany, France and Italy) and the UK started in the early 1990s (see Fig. 1). In all these countries, robots are predominantly employed in the automotive industry, apart from Italy where the allocation of robots across industries is more balanced with 26% in the metal, 17% in the automotive and 12% in the plastic and chemical industry (International Federation of Robotics, 2020). Germany is a clear leader in robot adoption worldwide (Dauth et al., 2021). It is followed by France and Italy where the robot penetration, measured by the number of robots per 10,000 employees, in 2019 was around half of that in Germany. Even lower penetration is observed in the UK which is an example of the Western European country with relatively slow adoption of industrial robots. The two post-socialist countries, Czechia and Poland, also display lower levels of robotisation, but the process of robot adoption started much later there, in the late 2000s. Robotisation in Czechia was very dynamic, due to the rapid development of its automotive industry, with the penetration rate surpassing the French one in 2017. The process in Poland was slower though gradual. Interestingly, in none of the studied countries did an increase in robot adoption go hand in hand with an increase in unemployment (see Fig. 2). Neither did robot penetration change during the Great Recession. Instead, we observed a gradual increase in robot adoption in all analysed countries alongside cyclical movements in unemployment. This observation confirms that robotisation does not necessarily reflect the same phenomenon as unemployment.
Fig. 1. Industrial robot penetration in 6 European countries by calendar year. Sources: International Federation of Robotics (IFR) and Eurostat. Calculated by summing robot stocks and employment for the following 1-digit industries: manufacturing, mining and quarrying, electricity, gas, water supply, and construction. Time series are constrained by data availability, as IFR publishes robot stocks from 1993 onwards. Figure prepared by the authors in Stata.
Fig. 2. Robot penetration (left y-axis) vs unemployment (right y-axis) by country over time. Note: Robot stocks are summed for the following 1-digit industries: manufacturing, mining and quarrying, electricity, gas, water supply, and construction. Source: International Federation of Robotics and Eurostat. Figure prepared by the authors in Stata.
Research Objectives and Hypotheses In this study, we extend the work by Anelli et al. (2021) and examine the effects of long-term structural changes in the labour market, driven by the adoption of industrial robots, on regional fertility rates in six European countries: Czechia, France, Germany, Italy, Poland and the UK. As we demonstrated in Sect. 2.1, automation may benefit certain groups of workers (e.g. the highly educated or those working in the service sector) and diminish the earning and employment opportunities of others (e.g. low and middle educated workers in the manufacturing sector). We thus do not expect it to affect regional fertility rates in any uniform way. Instead, we anticipate the fertility effects of robot adoption to depend on the structural conditions of the regional labour markets. First, we expect robot adoption to exert more negative/less positive effects on fertility in those regions which had large employment in manufacturing before the onset of robotisation (H1). This expectation follows from the fact that industrial robots are largely employed in manufacturing, leading to greater job destruction, turnover and uncertainty there than in the service sector. Second, we hypothesise that the negative (positive) fertility effects of robot adoption will be more (less) evident in regions where the proportion of men employed in manufacturing at early stages of automation was larger, making men more exposed to robotisation (H2). This is because fertility is less likely to decline in reaction to a deterioration in women's employment conditions than in men's. Next, we expect stronger fertility declines/weaker fertility increases in response to robot adoption in regions with a larger proportion of low and middle educated workers (H3), since they are the group mainly negatively affected by automation, either by being at risk of job displacement or by having to compete with displaced workers for jobs. Last but not least, we anticipate that the fertility effects of robot adoption depend on the region's capacity to embrace technological change. Consistent with past research showing that the employment effects of robot adoption are weaker or even positive in regions which invest in modern technologies, we expect that fertility will be less likely to decline/more likely to increase in response to automation in technology- and knowledge-intensive regions (H4). Finally, the fertility effects of robot adoption may also vary across the studied countries, since they display substantial differences in welfare regimes, the gender normative context and the penetration of automation. We abstain, however, from formulating specific hypotheses on the role of particular cross-country differences, since a comparison of only six countries which vary along numerous important dimensions precludes testing such hypotheses. We rather discuss our findings from the perspective of the cross-country differences presented in Sect. 3.
Methodology Data Our study is based on regional NUTS-2 data. The Nomenclature of Territorial Units for Statistics (NUTS) is a hierarchical system for dividing up the economic territory of the European Economic Area, the UK, and Switzerland for the purpose of data collection and socio-economic analyses. NUTS-2 regions are roughly equally populated, with populations ranging from 0.8 to 3 million, and they are the smallest geographical units for which employment data are available in Eurostat for all six countries of our interest. We observe the countries roughly from the start of robotisation until 2017. This means we cover the years 1997–2017 for the old EU member states and the UK and 2007–2017 for Czechia and Poland. Covering the 1990s fully for the old EU member states was not possible due to data availability. To measure fertility, we use the TFR and the age-specific fertility rates for the following age groups: 20–24, 25–29, 30–34, 35–39, 40–44 and 45+. These data have been provided by Eurostat at the NUTS-2 level since 1990. They are computed by combining national statistics on births by mother's age and the population of women by age. They are fairly complete, with some missing data on the fertility of women aged 45+ (around 10% of all observations). We use simple linear interpolation to fill in these missing values. To measure workers' exposure to automation we use data on industrial robot stocks provided by the International Federation of Robotics (henceforth: IFR). Industrial robots are defined by the IFR as fully autonomous machines that do not require a human operator. Their main tasks are handling operations and machine tending (55% of all European robots fall into this category) and welding and soldering (22% of all European robots) (Jurkat et al., 2022). The IFR provides annual data on the operational stock of industrial robots by country and industry since 1993. The industries are coded according to the International Standard Industrial Classification of all economic activities (ISIC, UN, 2008). The stocks of robots are provided by the IFR at the 1-digit level for all ISIC industries, and at up to 3 digits for manufacturing industries. The IFR data are complete. We utilise records at 1 digit for the following three 'heavy' industries: mining and quarrying; electricity, gas and water supply; and construction. We utilise records at 2 digits for the remaining 13 manufacturing industries1 to match our regional employment structure data, which are also coded in 2-digit industry categories. We do not include non-industrial categories such as Services, Public Administration or Education, as those industries employ predominantly service robots rather than industrial robots, and at a much smaller scale than robots operating in manufacturing or 'heavy' industries (Hajduk and Koukolova, 2015). The data on robots are linked to data on regional employment structures by industry using the methodology developed by Acemoglu and Restrepo (2020) and described in detail in Sect. 5.2. Eurostat has provided NUTS-2 regional employment structures by 2-digit industry codes classified according to the Nomenclature of Economic Activities (NACE Rev. 1.2 before 2008, NACE Rev. 2 after 2008) since 1986. We reclassify these data to the ISIC classification to match them to the robot stocks. Moreover, since our main covariate (explained in detail in Sect. 5.2) relies on the summation of employment numbers over industries, we impute missing records of the regional employment structure.
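The interpolation step for the sparse 45+ rates can be illustrated as follows. This is a sketch under the assumption of a long-format data frame with one row per region and year and the missing rates stored as NaN; the layout and names are placeholders, not the actual Eurostat extract.

```python
import numpy as np
import pandas as pd

# Hypothetical long-format extract of age-specific fertility rates for women aged 45+.
asfr = pd.DataFrame({
    "region": ["DE11"] * 5 + ["FR10"] * 5,
    "year": [2013, 2014, 2015, 2016, 2017] * 2,
    "asfr_45plus": [0.0010, np.nan, 0.0012, np.nan, 0.0015,
                    0.0020, 0.0021, np.nan, 0.0023, 0.0024],
})

asfr = asfr.sort_values(["region", "year"])
# Linear interpolation within each regional time series; gaps at the edges of a
# series are left untouched by limit_area="inside".
asfr["asfr_45plus"] = (
    asfr.groupby("region")["asfr_45plus"]
        .transform(lambda s: s.interpolate(method="linear", limit_area="inside"))
)
print(asfr)
```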
Finally, changes in the past NUTS classifications require reclassifying regional codes to one, consistent version. Both reclassifications and the imputation are described in detail in the Appendix in Tables 6 and 7. Table 6. ISIC-NACE industry codes crosswalk for sectors used in our analysis Category IFR (ISIC) Regional employment (na112d) Regional employment (nace2d) All other manufacturing branches/other chemical products n.e.c 91, 20–21 30, 37, 23 32, 33, 19 Automotive/other vehicles 29–30 34, 35 29, 30 Basic metals 24 27 24 Construction F 45 41, 42, 43 Electrical/electronics 26–27 31, 32, 33 26, 27 Electricity, gas, water supply E 40, 41 35, 36 Food and beverages 10–12 15, 16 10, 11, 12 Glass, ceramics, stone, mineral products (non-automotive) 23 26 23 Industrial machinery 28 29 28 Metal products (non-automotive) 25 28 25 Mining and quarrying C 10, 11, 12, 13, 14 05, 06, 07, 08, 09 Paper 17–18 21, 22 17, 18 Pharmaceuticals, cosmetics 19 24 20, 21 Rubber and plastic products (non-automotive) 22 25 22 Textiles 13–15 17, 18, 19 13, 14, 15 Wood and furniture 16 20, 36 16, 31 Open in a new tab Table 7. NUTS-2 region splits/merges over years (1994–2017) 1994–1998 1999 2000–2001 2002–2003 2004 2005–2010 2011–2012 2013–2017 Action DE40 DE40 DE40 DE40 DE41 DE40 DE40 DE40 sum DE41 and DE42 to DE40 DE42 DEB1 DEB1 DEB0 DEB1 DEB1 DEB1 DEB1 DEB1 sum DEB1, DEB2, and DEB3 to DEB0 DEB2 DEB2 DEB2 DEB2 DEB2 DEB2 DEB2 DEB3 DEB3 DEB3 DEB3 DEB3 DEB3 DEB3 DED0 DED0 DED2 DED2 DED2 DED2 DED2 DED2 sum DED2, DED4, and DED5 to DED0 DED4 DED4 DED4 DED4 DED4 DED4 DED5 DED5 DED5 DED5 DED5 DED5 DEE1 DEE1 DEE1 DEE1 DEE1 DEE0 DEE0 DEE0 sum DEE1, DEE2, and DEE3 to DEE0 DEE2 DEE2 DEE2 DEE2 DEE2 DEE3 DEE3 DEE3 DEE3 DEE3 IT31 ITH1 ITH1 ITH1 ITH1 ITH1 ITH1 ITH1 sum ITH1 and ITH2 to IT31 ITH2 ITH2 ITH2 ITH2 ITH2 ITH2 ITH2 UKI1 UKI1 UKI1 UKI1 UKI1 UKI1 UKI3 UKI3 sum UKI3 and UKI4 to UKI1 UKI4 UKI4 UKI2 UKI2 UKI2 UKI2 UKI2 UKI2 UKI5 UKI5 sum UKI5, UKI6, and UKI7 to UKI2 UKI6 UKI6 UKI7 UKI7 PL12 PL12 PL12 PL12 PL12 PL12 PL12 PL91 sum PL91 and PL92 to PL12 PL92 Open in a new tab Besides fertility rates, Eurostat online database provides us also with NUTS-2 level controls by calendar year, as well as potential moderators, which we interact with our main explanatory variable in order to test our research hypotheses. We include the following set of controls at the regional level: share of population aged 15–24, share of population aged 25–49, share of population aged 50 + , share of highly educated (ISCED levels 5–8), ratio of share of highly educated women to share of highly educated men, the square of the latter and women’s economic activity rate. The variables denoting population structure by age are introduced to control for any variation in population exposed to childbearing. We also account for the population education level given the educational gradient in fertility (Wood et al., 2014). The share of highly educated women relative to highly educated men and the square of this ratio aim at capturing the difficulties to find a partner in regions with better educated female population (Bellani et al., 2017) given that partners tend to form unions if they have similar education levels (de Hauw et al., 2017). Finally, women’s economic activity rate is also tightly linked to fertility. The potential moderating variables are settled at the regional level as well. 
They are the initial (measured around the onset of robot adoption) proportion of workers employed outside of manufacturing (used to test H1), the initial ratio of the proportion of women employed in manufacturing to the proportion of men employed in manufacturing (H2), the proportion of highly educated persons (time-varying) (H3) and the proportion of workers employed in technology- and knowledge-intensive sectors (time-varying) (H4). The control and moderating variables are fairly complete. Any missing values were imputed via linear interpolation. This was done in 14% of cases for the population structure by education, and in at most 25% of cases for the employment data. There are no cases in which the entire time series for a specific region is missing. After accounting for the NUTS reclassifications and excluding overseas territories (see Table 7 in the Appendix), we have data for 34 NUTS-2 regions in Germany, 22 in France, 20 in Italy, 35 in the UK, 16 in Poland, and 8 in Czechia. We pool the data for Czechia and Poland due to the smaller number of regions in the two post-socialist countries and their similarities when it comes to labour market and family policy institutions, economic developments and the delayed start of automation in comparison with Western Europe. In total, we have 680 observations for Germany, 440 for France, 400 for Italy, 700 for the UK and 240 for Czechia and Poland jointly. Methods Our methodology relies on regressing fertility rates against workers' exposure to robotisation as well as the set of control variables mentioned in Sect. 5.1, separately for Germany, Italy, France, the UK and the group formed by Czechia and Poland. We quantify workers' exposure to robotisation following the methodology developed by Acemoglu and Restrepo (2020) and used, among others, in Dauth et al. (2021), Anelli et al. (2021), and O'Brien et al. (2022):

$$\text{Exposure to robots}_{r,t} = \sum_{i=1}^{N} \frac{empl_{r,i,t_0}}{empl_{r,t_0}} \cdot \frac{robots^{C}_{i,t}}{empl_{i,t_0}} \qquad (1)$$

where $robots^{C}_{i,t}$ is the country-level stock of robots in industry $i$ in year $t$; $empl_{i,t_0}$ identifies the total number of workers (in tens of thousands) employed in sector $i$ in $t_0$, i.e. at the start of robotisation (hereafter: initial); and $empl_{r,i,t_0}/empl_{r,t_0}$ denotes the initial distribution of employment in industry $i$ across regions. Effectively, $robots^{C}_{i,t}/empl_{i,t_0}$ captures robots adopted in industry $i$ and country $c$ relative to the industry's initial employment, while $empl_{r,i,t_0}/empl_{r,t_0}$ disaggregates it onto regions. We set $t_0$ to 1994 for the Western European countries and to 2004 for Czechia and Poland, as those are the years when robotisation started in the respective countries (see Sect. 3). The measure defined in Eq. 1 is known as a "shift-share" or "Bartik" instrument (Goldsmith-Pinkham et al., 2020). While exposure to robots is already considered exogenous, as its variation relies on employment shares measured before robotisation had started, concerns about the endogeneity of $robots^{C}_{i,t}$ might still arise, i.e. when external factors affect both robot adoption and fertility. These may be domestic or sector-specific shocks, such as policy changes.
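Before turning to the instrumenting strategy, the sketch below illustrates how Eq. 1 can be assembled from three inputs: initial regional employment by industry, initial national employment by industry (in tens of thousands of workers), and the national robot stock by industry and year. All data frames and figures are illustrative assumptions, not IFR or Eurostat data.

```python
import pandas as pd

# Initial (t0) regional employment by industry, in persons (illustrative numbers).
emp_region_t0 = pd.DataFrame({
    "region": ["R1", "R1", "R2", "R2"],
    "industry": ["auto", "food", "auto", "food"],
    "empl_r_i_t0": [40_000, 10_000, 5_000, 45_000],
})
# Initial (t0) national employment by industry, in tens of thousands of workers.
emp_country_t0 = pd.DataFrame({
    "industry": ["auto", "food"],
    "empl_i_t0_10k": [80.0, 120.0],
})
# National robot stock by industry and year.
robots = pd.DataFrame({
    "industry": ["auto", "auto", "food", "food"],
    "year": [2010, 2011, 2010, 2011],
    "robots_i_t": [20_000, 22_000, 1_000, 1_200],
})

# Regional employment shares across industries at t0: empl_{r,i,t0} / empl_{r,t0}.
emp_region_t0["share_r_i"] = (
    emp_region_t0["empl_r_i_t0"]
    / emp_region_t0.groupby("region")["empl_r_i_t0"].transform("sum")
)

# Robots per 10,000 initial workers in each industry: robots_{i,t} / empl_{i,t0}.
df = robots.merge(emp_country_t0, on="industry")
df["robots_per_initial_empl"] = df["robots_i_t"] / df["empl_i_t0_10k"]

# Weight by the initial regional shares and sum over industries (Eq. 1).
df = df.merge(emp_region_t0[["region", "industry", "share_r_i"]], on="industry")
exposure = (
    df.assign(term=df["share_r_i"] * df["robots_per_initial_empl"])
      .groupby(["region", "year"], as_index=False)["term"].sum()
      .rename(columns={"term": "exposure_to_robots"})
)
print(exposure)
```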
To address this issue, we follow Acemoglu and Restrepo (2020) and instrument the industry-specific stock of robots in country $c$, $robots^{C}_{i,t}$, with the industry-specific stocks of robots in other countries, which serve as a proxy for advancements in robotisation in developed economies. Choosing the right country for instrumenting robot adoption in Western European countries turned out to be challenging, however. The US industry-specific stocks of robots could not be used for this purpose since robots (relative to the workforce) were used there on a smaller scale than in Western Europe (International Federation of Robotics, 2020); thus, the USA cannot be considered a pioneer of robotisation which the Western European countries would follow. Some East Asian economies are more advanced in robotisation than Western Europe (e.g. South Korea), but they adopt robots in different industries than European countries do, and we are thus uncertain about whether Europe will follow their path. We therefore adopt the strategy suggested by Dauth et al. (2021), who used industry-specific stocks of robots from several advanced economies as instruments for robot stocks in Germany (an overidentified IV model). We thus build an overidentified model for each country with k = {Germany, France, UK, Italy, Spain, Sweden, Norway, Finland, United States of America} instruments. In the models for Germany, France, the UK, and Italy, we exclude the country of interest and the USA, and thus apply 7 instruments. In the models for Poland and Czechia, all 9 instruments are applied. These external instruments are likely relevant, as industrial robots are manufactured by only a few international companies, which set global trends in industrial robot adoption. Thus, robot adoption in one developed economy is a good proxy for robot adoption in another one with a similar socio-economic context. The proposed set of instruments should also be valid, as there is no reason to expect that robot adoption in one developed economy has a direct influence on fertility rates in another one. To test the instruments' relevance and the validity of the overidentifying restrictions, we compute the Kleibergen-Paap rk Wald F statistic and the Hansen J statistic (Kleibergen & Paap, 2006; Sargan, 1958; Wooldridge, 2010) and report them along with the full model results in the Appendix (Tables 8, 9, 10, 11, 12). Even though this strategy for instrumenting our variable of interest resulted in relevant and valid instruments, it also has a drawback. Namely, we were not able to pool all European countries and estimate one model, as that would leave us with collinear sets of instruments, which would be endogenous and thus of little use. Table 8. Full basic model results for Germany (see Table 1 in Sect.
6.1) Covariate TFR FR 20–24 FR 25–29 FR 30–34 FR 35–39 FR 40–44 FR 45 + Exposure to robots − 0.000159 0.0000438 0.0000217 − 0.0000215 − 0.000110*** − 0.0000484*** − 0.00000125 Share of population aged 15–24 − 1.141** − 0.821*** − 0.424*** 0.710*** − 0.179** − 0.164*** − 0.0135*** Share of population aged 25–49 2.184* − 0.0586 − 0.579 1.537*** 0.978*** 0.0514 − 0.0156 Share of population aged 50 + 0.522 − 0.257* − 0.430** 1.106*** 0.141 − 0.167*** − 0.0225*** Share of highly educated population − 0.00173 − 0.00133** − 0.00335*** 0.000463 0.00182*** 0.000682*** 0.0000721*** Ratio of share of highly educated women to share of highly educated men − 0.651*** − 0.0780** − 0.140** − 0.223*** − 0.127*** − 0.021 − 0.00263* Square of ratio of share of highly educated women to highly educated men 0.522*** 0.0711*** 0.124*** 0.161*** 0.0980*** 0.0183*** 0.00183** Share of economically active women − 0.000723 − 0.000145 0.00125* − 0.000427 − 0.000933* − 0.000450** − 0.0000538*** Kleibergen-Paap rk Wald F statistic 347.778 347.778 347.778 347.778 347.778 347.778 347.778 Hansen J p-value 0.4002 0.3592 0.4523 0.0432 0.0281 0.2845 0.2264 Open in a new tab Table 9. Full basic model results for France (see Table 1 in Sect. 6.1) Covariate TFR FR 20–24 FR 25–29 FR 30–34 FR 35–39 FR 40–44 FR 45 + Exposure to robots 0.000026 − 0.000102 0.0000935 0.000118 − 0.0000115 − 0.0000356 0.00000348 Share of population aged 15–24 − 3.278 − 3.082*** − 0.0862 0.299 0.34 0.0384 0.026 Share of population aged 25–49 − 7.505*** − 3.952*** − 1.944** − 0.307 − 0.212 − 0.189 0.00224 Share of population aged 50 + − 6.268*** − 2.929*** − 1.067* − 0.579 − 0.645*** − 0.336*** − 0.0457*** Share of highly educated population 0.00187 0.000263 0.00035 0.000389 0.000697** 0.000121 0.0000129 Ratio of share of highly educated women to share of highly educated men 0.223 − 0.0326 − 0.106 0.17 0.147** 0.0642** 0.00710** Square of ratio of share of highly educated women to highly educated men − 0.109 0.0116 0.0409 − 0.0771* − 0.0637*** − 0.0274** − 0.00259** Share of economically active women − 0.00113 0.000105 0.000387 − 0.000880* − 0.000793*** − 0.000354*** − 0.00000468 Kleibergen-Paap rk Wald F statistic 1042.809 1042.809 1042.809 1042.809 1042.809 1042.809 1042.809 Hansen J p-value 0.6166 0.2884 0.3651 0.4868 0.4660 0.1540 0.8730 Open in a new tab Table 10. Full basic model results for Italy (see Table 1 in Sect. 6.1) Covariate TFR FR 20–24 FR 25–29 FR 30–34 FR 35–39 FR 40–44 FR 45 + Exposure to robots − 0.00118* − 0.000196 − 0.000898*** − 0.000116 0.00014 − 0.0000473 0.00000823 Share of population aged 15–24 − 9.167*** − 4.185*** − 3.894*** 0.368 − 0.686 − 0.461*** − 0.0355* Share of population aged 25–49 − 7.001*** − 1.823*** − 4.355*** − 0.810*** 0.14 − 0.167* − 0.02 Share of population aged 50 + − 6.623*** − 2.395*** − 3.063*** − 0.321* − 0.373** − 0.264*** − 0.0108 Share of highly educated population − 0.00181 − 0.000107 − 0.00257** − 0.00181* 0.000255 0.00118*** 0.000136* Ratio of share of highly educated women to highly educated men − 0.0448 − 0.109 − 0.0746 0.105 0.0448 − 0.0258 − 0.00212 Square of ratio of share of highly educated women to highly educated men 0.0536 0.0511* 0.0308 − 0.0308 − 0.00361 0.0113 0.000552 Share of economically active women 0.00296 0.00197*** 0.000202 − 0.000201 0.00083 0.0000921 0.0000639 Kleibergen-Paap rk Wald F statistic 175.284 175.284 175.284 175.284 175.284 175.284 175.284 Hansen J p-value 0.2683 0.3285 0.1599 0.7500 0.1742 0.5438 0.3200 Open in a new tab Table 11. 
Full basic model results for the UK (see Table 1 in Sect. 6.1) Covariate TFR FR 20–24 FR 25–29 FR 30–34 FR 35–39 FR 40–44 FR 45 + Exposure to robots 0.00168 − 0.000872 0.000793 0.00133 0.00109 0.000386* − 0.00000172 Share of population aged 15–24 0.555 − 2.063*** 0.595 2.364*** 1.35 0.16 − 0.0840*** Share of population aged 25–49 − 0.491 − 1.086* − 0.437 1.422 0.803 − 0.0106 − 0.0516* Share of population aged 50 + 2.041 0.0326 0.372 1.462** 0.918 0.032 − 0.0430* Share of highly educated population 0.000193 0.000994 − 0.000392 − 0.00058 − 0.000765** − 0.000104 0.0000304* Ratio of share of highly educated women to share of highly educated men 1.032*** 0.213* 0.241* 0.367*** 0.224** 0.0436 -0.000287 Square of ratio of share of highly educated women to highly educated men − 0.490*** − 0.107* − 0.105 − 0.169*** − 0.110*** − 0.0223 − 0.0000613 Share of economically active women − 0.00260* − 0.000123 − 0.000653 − 0.000624 − 0.000863** − 0.000277** − 0.0000203 Kleibergen-Paap rk Wald F statistic 137.303 137.303 137.303 137.303 137.303 137.303 137.303 Hansen J p-value 0.0363 0.0847 0.6383 0.0140 0.0815 0.1513 0.0684 Open in a new tab Table 12. Full basic model results for Poland and Czechia (see Table 1 in Sect. 6.1) Covariate TFR FR 20–24 FR 25–29 FR 30–34 FR 35–39 FR 40–44 FR 45 + Exposure to robots 0.000530 0.000104 − 0.000436 0.000501 0.000253* − 0.0000469 − 0.0000119 Share of population aged 15–24 − 1.043 − 0.893 0.506 2.092*** − 1.640*** − 0.801*** − 0.0889*** Share of population aged 25–49 − 6.482** − 2.873*** − 1.501 1.013 − 1.432*** − 0.683*** − 0.0819*** Share of population aged 50 + − 3.253*** − 1.658*** − 0.478 1.379** − 1.273*** − 0.616*** − 0.0698*** Share of highly educated population 0.00325 − 0.00108 0.000824 0.00320*** 0.000581 − 0.000123 0.0000267 Ratio of share of highly educated women to share of highly educated men 0.151 0.0122 − 0.263*** 0.166 0.171** 0.0228 0.00496* Square of ratio of share of highly− educated women to highly educated men − 0.0445 − 0.00413 0.104*** − 0.0581 − 0.0580** − 0.00915 − 0.00182* Share of economically active women 0.00692** 0.00185 0.00518*** 0.000411 − 0.000553 0.0000953 − 0.0000223 Kleibergen-Paap rk Wald F statistic 45.992 45.992 45.992 45.992 45.992 45.992 45.992 Hansen J p-value 0.1429 0.0456 0.0299 0.2654 0.1859 0.1341 0.1430 Open in a new tab Our model takes the following form: f e r t i l i t y r , t = α E x p o s u r e t o r o b o t s r , t - 2 + β C o n t r o l s r , t - 1 + η r + v t + ε r , t 2 where f e r t i l i t y r , t denotes regional total and age-specific fertility rates, α is our parameter of interest capturing the effect of workers’ exposure to robotisation on fertility in region r, η r corresponds to region individual effects and v t are time dummies. In order to test hypotheses H1-H4, we interact E x p o s u r e t o r o b o t s r , t - 2 with the potential moderators listed in Sect. 5.1. In all models, we control for a set of demographic and socioeconomic characteristics of a region, C o n t r o l s r , t - 1 , enumerated in Sect. 5.1, which may confound the effects of robot penetration on fertility. They are lagged by 1 year to avoid simultaneity issues. At the same time, we lag the exposure to robots by 2 years to account for the pregnancy and the fact that, once exposed to labour market changes, workers might take some time to decide whether to have a child or not. Equation 2 is estimated using the two-stage least squares approach with a fixed effects “within” estimator (Wooldridge, 2010). 
Standard errors are clustered at the region level to account for within-region dependence of the observations and to make the estimates robust to serial correlation.
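A minimal sketch of this estimation step is given below. It assumes a region-year panel in which the 2-year lag of exposure, the 1-year lags of the controls and the external instruments have already been constructed; it applies the within (region-demeaning) transformation, adds year dummies and fits a 2SLS model with region-clustered standard errors using the linearmodels package. All variable and file names are placeholders, and the sketch omits the small-sample corrections a full replication would require. Diagnostics such as the Kleibergen-Paap F and Hansen J statistics would be computed on top of this fit; they are not shown here.

```python
import pandas as pd
from linearmodels.iv import IV2SLS

# Hypothetical region-year panel with outcome, 2-year-lagged exposure, 1-year-lagged
# controls and excluded instruments already merged in (placeholder file name).
df = pd.read_csv("region_year_panel.csv")

dep = "tfr"
endog = "exposure_robots_l2"
controls = ["share_15_24_l1", "share_25_49_l1", "share_50plus_l1",
            "share_high_edu_l1", "edu_ratio_fm_l1", "edu_ratio_fm_sq_l1",
            "female_activity_l1"]
# Exposure measures built from other countries' robot stocks (placeholder names).
instruments = ["instr_exposure_FR", "instr_exposure_IT", "instr_exposure_SE"]

# Year dummies correspond to the v_t terms of Eq. 2.
year_dummies = pd.get_dummies(df["year"], prefix="yr", drop_first=True, dtype=float)
data = pd.concat([df[[dep, endog] + controls + instruments], year_dummies], axis=1)

# Within transformation: demeaning by region absorbs the region fixed effects eta_r.
demeaned = data - data.groupby(df["region"]).transform("mean")

res = IV2SLS(dependent=demeaned[dep],
             exog=demeaned[controls + list(year_dummies.columns)],
             endog=demeaned[[endog]],
             instruments=demeaned[instruments]
             ).fit(cov_type="clustered", clusters=df["region"])
print(res.summary)
```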
Results Our full model estimates along with the IV tests are displayed in Tables 8, 9, 10, 11, 12 in the Appendix (basic models as expressed by Eq. 2) and Tables 1–20 in the Online Supplementary Material (models with interactions). In all 175 regressions for the different countries and fertility rates, the instrument was relevant (as indicated by the Kleibergen-Paap rk Wald F statistic) and the overidentifying restrictions were valid, with the Hansen J p-value exceeding the 5% significance level in 153 regressions and the 1% level in 8 cases. In 14 cases, it was not possible to conduct the Hansen J test because the number of clusters (regions) was smaller than the sum of the number of exogenous regressors and the number of excluded instruments (Baum et al., 2002; Frisch & Waugh, 1933). Those 14 cases correspond to the models for Italy and for Czechia with Poland in which we introduced two interactions at once to test H2. However, given that the overidentifying restrictions were valid in all other cases for those country samples, it is reasonable to assume that they are valid in the remaining 14 cases as well. Overall Effects of Robot Adoption on Fertility We find only a few, rather small effects of robot adoption on fertility (Table 1). Total fertility is affected significantly only in Italy. This effect is negative: an increase in workers' exposure to robots by 1 robot per 10,000 workers reduces the total fertility rate by 0.00118. This effect is entirely driven by the negative effect of automation on fertility at young ages, in particular in the 25–29 group. Apart from Italy, we also find negative fertility effects in Germany, the worldwide leader in robot adoption, for certain age-specific fertility rates. These effects are weaker and, in contrast to Italy, emerge only at older ages (i.e. for age groups 35–39 and 40–44). We do not find significant negative effects on fertility in the other countries of our interest. In some of them, we even identify a significant positive influence of robots on fertility at higher ages. For instance, an increase in exposure to robots by 1 robot per 10,000 workers results in an increase in the 35–39 fertility rate by 0.00025 in Czechia and Poland and a gain in the 40–44 fertility rate by 0.00039 in the UK. We do not observe any statistically significant findings for France. Table 1. Exposure to robots ( α ) coefficients from basic 2SLS models (Eq. 2) Country TFR FR 20–24 FR 25–29 FR 30–34 FR 35–39 FR 40–44 FR 45 + Germany − 0.00016 0.00004 0.00002 − 0.00002 − 0.00011*** − 0.00005*** − 0.000001 France 0.00003 − 0.00010 0.00009 0.00012 − 0.00001 − 0.00004 0.000003 Italy − 0.00118* − 0.00020 − 0.00090*** − 0.00012 0.00014 − 0.00005 0.00001 UK 0.00168 − 0.00087 0.00079 0.00133 0.00109 0.00039* − 0.000002 Czechia & Poland 0.00053 0.00010 − 0.00044 0.00050 0.00025* − 0.00005 − 0.00001 Workforce Sectoral Composition Since robots are mostly employed in manufacturing, we hypothesised that negative fertility effects would be most likely to emerge in regions with large manufacturing sectors (H1). The respective findings are presented in Table 2. The coefficients in the rows entitled 'Exposure to robots' show the main fertility effects of robotisation in regions with high initial employment in manufacturing, and the interaction term beneath informs us about the extent to which the effect of robotisation differs from the main effect in regions where the initial proportion of persons employed in manufacturing was 1 pp lower. Table 2.
Exposure to robots ( α ) and its interaction with the initial (start of observation period) share of workers employed in manufacturing Country Measure TFR FR 20–24 FR 25–29 FR 30–34 FR 35–39 FR 40–44 FR 45 + Germany Exposure to robots − 0.0022* − 0.0012*** − 0.00137*** 0.00037 0.00009 − 0.000029 − 0.00000 Exposure to robots # Initial share of workers out of manufacturing 0.00003** 0.00002*** 0.00002*** − 0.00001 − 0.00000 − 0.00000 0.00000 France Exposure to robots 0.00163 0.00062 0.00212 0.00013 − 0.00082 − 0.00045** − 0.00008* Exposure to robots # Initial share of workers out of manufacturing − 0.00002 − 0.00001 − 0.00003 − 0.00000 0.00001 0.000006* 0.000001* Italy Exposure to robots − 0.00264 − 0.00051 − 0.00201 − 0.00085 0.00069 0.00039** − 0.00013** Exposure to robots # Initial share of workers out of manufacturing 0.00002 0.000005 0.00002 0.00001 − 0.000007 − 0.00001** 0.000002*** UK Exposure to robots − 0.0223** − 0.00584 − 0.00094 − 0.00088 − 0.00384 − 0.00155 0.00012 Exposure to robots # Initial share of workers out of manufacturing 0.00031** 0.000065 0.00002 0.00003 0.00006 0.00003 − 0.00000 Czechia & Poland Exposure to robots 0.00627 0.00295*** − 0.00337** 0.00275 0.00251*** 0.00004 0.00001 Exposure to robots # Initial share of workers out of manufacturing − 0.00009* − 0.00005*** 0.00005** − 0.00004 − 0.00004*** − 0.00000 − 0.00000 Open in a new tab With few exceptions, our findings are largely consistent with our hypothesis H1. We observe a clearly negative effect of robot adoption on total fertility in those German regions which were initially highly industrialised. It is strongly driven by fertility reduction at young ages (20–24 and 25–29). This negative effect is significantly weaker in regions with a smaller initial proportion of workers employed in manufacturing. We also detect some negative fertility effects of robots in the French and British regions with initially large manufacturing sectors. In the UK, the negative effects on age-specific fertility in those regions are not significant but the negative effect on total fertility is significant. In France, they emerge at the highest reproductive ages: 40–44 and 45 + . In Italy, most of the effects in highly industrialised regions are insignificant except for those at higher reproductive ages where the pattern is unclear (positive effect of robot adoption in highly industrialised regions at ages 40–44 and negative at ages 45 +). Some inconsistency is also detected in Czechia and Poland though it seems that the effects of robot adoption there tend to be rather positive in highly industrialised regions: The main effects at all reproductive ages, but for 25–29, are positive though significant only at ages 20–24 and 35–39. Gender Composition of Manufacturing Workers Next, we expected that fertility effects of robot adoption will be more negative in regions where men were more exposed to automation than women (H2). The findings which allow to verify this hypothesis are presented in Table 3. The coefficients in rows entitled "Exposure to robots' display fertility effects of robotisations in regions with high initial employment in manufacturing where in addition employment in manufacturing was dominated by men. The following interaction terms inform us to what extent the effect of robotization differs from the main effect in regions where the initial proportion of persons employed in manufacturing in the region was 1 pp lower/initial ratio of women over men employed in manufacturing was by 1 pp. higher. Table 3. 
Exposure to robots ( α ), interaction of exposure to robots with the initial share of workers employed out of manufacturing and interaction of exposure to robots with the initial ratio of women’s to men’s employment share in manufacturing Country Measure TFR FR 20–24 FR 25–29 FR 30–34 FR 35–39 FR 40–44 FR 45 + Germany Exposure to robots 0.00079 − 0.00041 − 0.00018 0.00064 0.00053 0.00000 0.00001 Exposure to robots # Initial share of workers out of manufacturing 0.00001 0.00001** 0.00001** − 0.00001 − 0.00001 − 0.00000 − 0.00000 Exposure to robots # Initial ratio of women's versus men's share in manufacturing − 0.0035*** − 0.00093* − 0.00141*** − 0.00031 − 0.00052** − 0.00004 − 0.00001 France Exposure to robots 0.0049 0.00188* 0.00352** 0.0008 − 0.00098* − 0.00056** − 0.00012** Exposure to robots # Initial share of workers out of manufacturing − 0.00002 − 0.00000 − 0.00003 − 0.00000 0.00001 0.000005* 0.000001* Exposure to robots # Initial ratio of women's versus men's share in manufacturing − 0.00681** − 0.00307*** − 0.00292* − 0.00122 0.00049 0.00036 0.00011** Italy Exposure to robots 0.0144*** 0.00535*** 0.0067*** − 0.00056 0.00038 0.00116*** − 0.00014* Exposure to robots # Initial share of workers out of manufacturing − 0.00011** − 0.00004** − 0.00005*** 0.00001 − 0.00000 − 0.00001*** 0.000002** Exposure to robots # Initial ratio of women's versus men's share in manufacturing − 0.0137*** − 0.00462*** − 0.00693*** − 0.00039 0.00025 − 0.0006*** 0.00002 UK Exposure to robots − 0.0378*** − 0.0103 − 0.00187 − 0.00319 − 0.00486 − 0.00202 − 0.00024 Exposure to robots # Initial share of workers out of manufacturing 0.00042*** 0.0001 0.00003 0.00004 0.000069 0.000028 0.000001 Exposure to robots # Initial ratio of women's versus men's share in manufacturing 0.0187* 0.00487 0.00091 0.00303 0.00177 0.00073 0.00043** Czechia & Poland Exposure to robots 0.00195 0.00041 − 0.00436 0.00246 0.00178** − 0.00013 − 0.000023 Exposure to robots # Initial share of workers out of manufacturing − 0.00007 − 0.00003* 0.00005* − 0.00004 − 0.00003*** 0.000000 − 0.00000 Exposure to robots # Initial ratio of women's versus men's share in manufacturing 0.00402 0.00228*** 0.00099 0.0003 0.00063 0.00014 0.00002 Open in a new tab Apart from the UK and the cluster built by Czechia and Poland, we do not find evidence for hypothesis H2. Our findings even suggest the reverse, namely that robot adoption in Germany, France and Italy leads to stronger fertility decline in regions where the initial ratio of women’s to men’s employment share in manufacturing was larger. These negative effects, obtained net of the regional employment in manufacturing and women’s activity rate, are largely significant at young reproductive ages. Interestingly, in Italy and to some extent in France we even find traces of positive effects of robot adoption in regions with initially large manufacturing sectors which are dominated by men. The findings for the UK and Czechia and Poland are more consistent with our expectations. In the UK, the interaction between exposure to robotisation and the ratio of women’s and men’s employment in manufacturing is positive at all reproductive ages and significant in the models for the total fertility. At the same time, the effect of robot adoption on fertility in highly industrialised regions where employment in manufacturing is dominated by men is negative, suggesting that robotisation reduces fertility in such regions. 
In Czechia and Poland, the interaction between exposure to robotisation and ratio of women’s and men’s employment in manufacturing is positive at all reproductive ages (like in the UK), but significant only at ages 20–24. Educational Attainment of the Population Subsequently, we test the hypothesis that robots exert a more negative impact on fertility in lower educated regions (H3). We present our findings in Table 4 where the rows 'Exposure to robots' denote the fertility effects of robotisation in regions with low educated populations and the interaction term demonstrates how these effects differ across regions with an increase in the proportion of educated persons by 1 pp. Table 4. Exposure to robots ( α ) and its interaction with the share of the highly educated population (ISCED 5–8) Country Measure TFR FR 20–24 FR 25–29 FR 30–34 FR 35–39 FR 40–44 FR 45 + Germany Exposure to robots − 0.00161*** − 0.00027* − 0.00011 − 0.00045** − 0.00044*** − 0.00014*** − 0.00001** Exposure to robots # Share of highly educated 0.00005*** 0.00001** 0.00001 0.00002** 0.00001*** 0.000003** 0.0000003** France Exposure to robots 0.0015** 0.00058** 0.00105*** 0.00019 − 0.00027 − 0.00015** − 0.00001 Exposure to robots # Share of highly educated − 0.000054** − 0.00002*** − 0.00003** − 0.00000 0.00001 0.000004* 0.000000 Italy Exposure to robots − 0.00292* − 0.00102 − 0.00124** 0.0002 − 0.00016 − 0.0002* 0.00001 Exposure to robots # Share of highly educated 0.0001 0.00004* 0.00002 − 0.00001 0.00002 0.00001** − 0.00000 UK Exposure to robots 0.00026 − 0.00049 0.00171* 0.00063 0.00008 − 0.00016 − 0.00009** Exposure to robots # Share of highly educated 0.00003 − 0.000009 − 0.00002 0.00001 0.00002 0.00001 0.000002*** Czechia & Poland Exposure to robots − 0.00018 0.00039 − 0.00182*** 0.00023 0.00066*** 0.00002 − 0.00003*** Exposure to robots # Share of highly educated 0.000021 − 0.00002 0.00007** 0.00001 − 0.00002** − 0.00000 0.000001** Open in a new tab We find clear support for hypothesis H3 in Germany and Italy. There is some evidence for this hypothesis also in the remaining countries but for France where our findings suggest the opposite. In Germany, we identify a significantly negative effect of exposure to robots on fertility in regions characterised by lower educational attainment of the population: An increase in the exposure to robotisation by 1 robot per 10,000 workers leads to a decline in total fertility by 0.0016 there. Negative and mostly significant fertility effects are found at all reproductive ages. They clearly weaken with an increase in the proportion of highly educated individuals in a region. We find some traces of a similar pattern in Italy and Czechia and Poland, but the estimated effects are significant only at some ages and in Czechia and Poland some reversed findings are also obtained for the age group 35–39. The educational attainment of the regional population does not seem to matter for the effects of robotisation on fertility in the UK (except for highest reproductive ages where the findings are consistent with our expectations). Finally, in France we find that robotisation has a positive influence on fertility in regions with fairly low educated populations, which is in contrast to our hypothesis H3. Region’s Orientation at Investments in Knowledge and Technology Finally, we expected the fertility effects of robotisation to be less negative or more positive in regions which are better able to embrace technological change. 
We operationalise this ability with the regional investment in technology- and knowledge-intensive sectors, measured by its employment. Only a few findings are consistent with this hypothesis (Table 5). Table 5. Exposure to robots ( α ) and its interaction with the share of workers employed in technology- and knowledge-intensive sectors Country Measure TFR FR 20–24 FR 25–29 FR 30–34 FR 35–39 FR 40–44 FR 45 + Germany Exposure to robots − 0.00006 0.0001 0.00015** − 0.00003 − 0.00015*** − 0.00005*** − 0.000003* Exposure to robots # Share employed in technology and knowledge sectors − 0.00002 − 0.00001 − 0.00005*** 0.00001 0.00002* 0.00001 0.000001** France Exposure to robots − 0.00015 − 0.00019* 0.00006 0.00013 − 0.00004 − 0.00007*** − 0.000002 Exposure to robots # Share employed in technology and knowledge sectors 0.00007 0.00003 0.00002 − 0.00000 0.00001 0.00001* 0.000002 Italy Exposure to robots − 0.00116* − 0.00013 − 0.00117*** − 0.00017 0.00037*** − 0.00001 0.000002 Exposure to robots # Share employed in technology and knowledge sectors 0.000005 − 0.00002 0.0001 0.00002 − 0.00008 − 0.00001 0.000002 UK Exposure to robots 0.00161 − 0.0008 0.00122 0.00151 0.00071 0.00016 − 0.00001 Exposure to robots # Share employed in technology and knowledge sectors 0.00001 0.00000 − 0.00020* − 0.00005 0.00012 0.00007* − 0.00000 Czechia & Poland Exposure to robots 0.00119 0.00025 − 0.00047 0.00096 0.00039* − 0.00009** − 0.00003*** Exposure to robots # Share employed in technology and knowledge sectors − 0.00031 − 0.00004 − 0.00003 − 0.00022 − 0.00006 0.00002 0.000006** Open in a new tab On the one hand, we find the interaction term between exposure to robotisation and employment in technology- and knowledge-intensive sectors to be significantly negative at lower reproductive ages (25–29) in Germany and the UK. On the other hand, however, the interaction term turns often positive and significant at high reproductive ages. This latter finding emerges clearly in Germany, but also to a lower extent in France, UK and Czechia and Poland, suggesting fertility recuperation (or higher-order fertility) encouraged by increasing employment/earning opportunities and growing prosperity of the region.
Discussion Industrial robots substantially change the conditions of participation in the labour market and may thereby also affect fertility. On the one hand, there is evidence that robots destroy jobs, increase turnover in the labour market and force workers to adjust to new labour market demands (reskill, upskill, or increase work effort in order to meet new work requirements or simply to keep the job). On the other hand, however, robots may also increase productivity and thereby contribute to the expansion of new jobs, in particular in regions with a highly educated workforce open to technological innovation. In this study, we examined whether these long-term structural changes, driven by the adoption of industrial robots, affect regional fertility rates in six European countries. We find that the fertility effects of robot adoption are rather small and vary across regions, depending on workforce education, employment structure and the region's capacity to embrace technological change. Briefly, our findings suggest that robots tend to exert a negative influence on fertility in regions where substantial numbers of workers are exposed to losing their jobs due to automation, i.e. highly industrialised regions (except in Czechia and Poland) and regions with relatively low educated populations (except in France). We also find the fertility effects to be more negative in less technologically advanced regions where robotisation is unlikely to boost productivity and create new jobs. The negative fertility effects are clearly most evident at young ages, especially in regions with large manufacturing sectors and, to some extent, in regions with lower educated populations. This finding may suggest postponement of fertility to higher ages, though fertility recuperation at older ages does not emerge clearly from our study, except in regions strongly oriented towards knowledge and technological innovation. These findings are consistent with past research showing that highly educated individuals, whose skills are valued in the labour market, tend to postpone childbearing to higher ages (Kantorova, 2004; Neels & De Wachter, 2010), but also tend to recuperate it, so that educational differences in cohort fertility are smaller or even disappear in better developed regions (Nisen et al. 2021). We also observe some country differences in the fertility effects of robot adoption, but the pattern is not very clear. The negative effects of robots on fertility are most pronounced in Germany, the country most advanced in automation among those studied. This is despite the strong employment protection in the country. We also observe some negative effects in Italy and, less so, in the UK. Robotisation in these two countries has progressed more slowly than in Germany, but employment protection is weaker there (in Italy, weak protection disproportionately concerns young workers) and support for the unemployed is more limited. We also find the effects of robot adoption to be less disruptive for fertility, and even to encourage it, in Czechia and Poland. This finding is seemingly striking, but we explain it by the fact that robots are less likely to replace labour in countries with lower labour costs (Bachmann et al., 2022; Jung & Lim, 2020), which Czechia and Poland undoubtedly are in comparison with the Western European states.
Moreover, we are puzzled by the fact that, consistent with hypothesis H2, we find less negative effects of robot adoption in those British, Polish and Czech regions where the ratio of women's to men's initial employment in manufacturing was higher, but not in Germany, France or Italy, even though the division of paid work between partners in Germany or Italy is no less asymmetric than in Poland or the UK (Matysiak & Steinmetz, 2008; Matysiak & Vignoli, 2013). One possible explanation for this finding might be that women working in manufacturing moved out into the service sector much more quickly than men. Such a phenomenon was indeed observed in the countries most advanced in automation (Black & Spitz-Oener, 2010; Cortes et al., 2021), which Germany, Italy and France indeed are. At the same time, the new jobs in the service sector turned out to be characterised by high insecurity and precarity, with employers requiring a great deal of flexibility from workers (Allen & Henry, 1997; Reimer, 1998). Finally, we find robotisation to exert the most negative impact on fertility in regions with low-educated populations in all analysed countries except France. Past studies indeed showed that education is a weaker predictor of the realisation of fertility intentions in France than in Italy (Régnier-Loilier and Vignoli 2011) and that economic uncertainty is less disruptive for fertility in France than in Germany (Salles et al., 2016), likely because of the strong two-child family norm in France, the less pronounced specialisation of partners in paid and unpaid labour and generous financial transfers to families, including unemployment schemes (Pailhé and Solaz 2012). For these reasons, the French may be less sensitive to the risks resulting from long-term developments in the labour market than the other nations we studied. There is no doubt, however, that more in-depth insights into the topic are needed to corroborate our interpretations. Our study is not without limitations. Due to the anonymisation procedures at Eurostat, some of our data were missing and had to be imputed. As a result, our main measure, exposure to robots, contains measurement error, which inflates its variance in comparison with a perfectly measured variable. Thus, we expect the regression coefficients that we fitted to be biased towards 0 (regression dilution/attenuation; Fuller, 1987). Our measure of exposure to robotisation faces other problems as well. Although it is at the forefront of economic research on automation and employment (Acemoglu & Restrepo, 2020; Dauth et al., 2021), it assumes that the regional employment structure by sector remains unchanged over time. This assumption is needed in order to keep exposure to robots exogenous, as the regional employment shares by sector are measured before the start of robotisation. Furthermore, we were not able to include more countries in our study. The adopted instrumental variable strategy, which implied instrumenting robotisation in one European country with robot adoption in other European countries, left us with no possibility of pooling all European countries. Comparing a greater number of countries was not feasible since we had to choose countries with a reasonably large number of NUTS-2 regions. Last but not least, our analytical strategy did not allow us to account for possible spatial spillovers, which may take place if workers commute to jobs outside the regions of their residence (Monte et al., 2018).
To the best of our knowledge, the econometric literature that exploits sectoral composition as a source of local labour demand shocks (Bartik shocks), and in particular the literature on exposure to robots, has not yet offered solutions to the two above-mentioned issues. We underline them as important areas for future research. Despite these limitations and some inconsistencies, our findings suggest that long-term structural changes driven by automation can indeed affect fertility, as proposed by Seltzer (2019). Nonetheless, it does not seem that robotisation is primarily responsible for the fertility declines observed in the aftermath of the Great Recession in most advanced countries. It exerts a negative influence on fertility in certain regions (highly industrialised or with low/middle educated populations), but these effects are compensated by fertility increases in better educated and dynamically developing regions. It is likely that fertility is also affected by other components of structural labour market change driven by digitalisation, such as the implementation of digital automation tools which also replace workers but are not classified as industrial robots, the spread of remote work or the increasingly widespread use of AI. Another possibility is that our study, conducted at the macro level, masks some important nuances, such as differential effects of automation on workers' fertility. These effects may certainly differ by workers' gender and socio-economic status (education or occupation) or by firm characteristics (the firm's capacity to retrain and retain workers). The fertility effects of automation may also depend on the labour market situation of the other partner and whether he or she is affected by automation as well. Future research should thus account for other aspects of long-term structural changes in the labour market besides automation and involve individual-level data in order to look more closely into the specific circumstances of workers. More research is also needed to unravel the mechanisms which underlie these relationships. Several mechanisms are possible, among them certainly job displacement, job-related uncertainty and the pressure to reskill and adapt to new work guidelines and ways of working. Finally, future research should more closely explore the cross-country differences in the fertility effects of long-term labour market changes caused, among others, by automation. In particular, it is of vital importance to understand which specific public policies and other institutional factors may mitigate the negative consequences of automation on fertility. As one of the first attempts to investigate the role of automation-driven labour market change for fertility, this single study cannot address all of these questions, but it certainly aims to stimulate future research on the topic.
Electronic supplementary material Below is the link to the electronic supplementary material. Tables 1-20 (located in the Word file) present full results from interaction IV models presented in the paper. They supplement the basic models presented in the Appendix. Tables 21-45 in Excel present full results from the basic and interaction OLS models.
Acknowledgements This research was possible thanks to financial support granted by the Polish National Agency for Academic Exchange (Polish Returns Programme 2019) and two ERC Consolidator Grants, “Globalisation- and Technology-Driven Labour Market Change and Fertility” (LABFER, grant agreement no 866207) and “Economic Uncertainty and Fertility in Europe” (EU-FER, grant agreement no DLV-725961). We also acknowledge the very useful comments and suggestions we received at various stages of this research from Wolfgang Dauth, Lucas van der Velde and Daniele Vignoli, as well as from the researchers of the Interdisciplinary Centre for Labour Market and Family Dynamics (LABFAM, University of Warsaw).
Appendix Reclassifying Industry The regional employment structure data are aggregates obtained from the European Union Labour Force Survey microdata. We reclassify them into the 16 ISIC categories that we operationalise for the robot data using the correspondence table available through the online resources of the United Nations Statistics Division (see Table 6). As can be seen in the table, in some cases this involves summing employment for 2 or 3 NACE categories to match the ISIC category. Imputing Regional Employment Structures Eurostat anonymises records where employment in a specific region, industry and year was above zero but below 1,000 people, i.e. information is missing for such records. As a result, 50% of employment records were initially missing in the data. In the cases when only observations for specific years for a given region-industry are missing, we impute them by drawing a number between 0 and 1000 from a uniform distribution. In the cases when the entire time series for a given region-industry is missing, we impute it with the median employment for that industry in the country, normalised to a 0–1000 range. Since our main explanatory measure (described in detail in Sect. 5.2 in the main text) relies on a sum of employment over industries, it would be impossible to construct it without assumptions about the missing data. We chose imputation with the median instead of the mean to make the imputed data robust to extreme values in the existing data. One should bear in mind that, after imputation, there is measurement error in our regional employment data. Thus, the regression coefficient corresponding to our main measure will be downward-biased (regression dilution bias; Fuller, 1987). Reclassifying NUTS-2 Codes The NUTS classification of regions underwent a few revisions in its history. Eurostat usually publishes regional data for specific years for the regions which were operative under the then-current NUTS classification. To obtain a balanced panel, we reclassify all regional codes which simply changed name to the NUTS 2016 classification, using the crosswalks available on the Eurostat web page. For the countries and time frame we consider in our analysis, there are eight cases when two or three regions split or merged, resulting in changes in the NUTS classification (see Table 7). In those instances, we sum up or average (depending on the variable) the data for the smaller regions to obtain consistent data for the larger region. We exclude 5 French overseas territories with distinct socioeconomic setups, not directly comparable to European regions (Guadeloupe, Martinique, French Guiana, La Reunion, and Mayotte).
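A compact sketch of the two imputation rules and the NUTS recoding described above is given below. The file layout, the seed and the reading of "normalised to a 0–1000 range" as capping at 1,000 are assumptions for illustration only.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(seed=20230101)  # fixed seed for reproducibility

# Hypothetical region x industry x year employment extract with anonymised cells as NaN.
emp = pd.read_csv("regional_employment_long.csv")  # columns: region, industry, year, employed

# 1) Years missing within an otherwise observed region-industry series:
#    draw a value from U(0, 1000), the range of the anonymised cells.
def fill_partial(s):
    out = s.copy()
    mask = out.isna()
    if mask.any() and not mask.all():
        out[mask] = rng.uniform(0, 1000, size=mask.sum())
    return out

emp["employed"] = emp.groupby(["region", "industry"])["employed"].transform(fill_partial)

# 2) Entirely missing region-industry series: impute the industry's national median,
#    restricted to the 0-1000 range (one reading of "normalised"; an assumption here).
industry_median = emp.groupby("industry")["employed"].transform("median").clip(upper=1000)
emp["employed"] = emp["employed"].fillna(industry_median)

# 3) NUTS revisions: map superseded codes to their 2016 counterpart and sum
#    (e.g. DE41 and DE42 into DE40, as in Table 7).
emp["region"] = emp["region"].replace({"DE41": "DE40", "DE42": "DE40"})
emp = emp.groupby(["region", "industry", "year"], as_index=False)["employed"].sum()
```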
Funding Anna Matysiak is a co-editor of the special issue “Families in the Changing World of Work”, a member of the Editorial Board of the European Journal of Population and Secretary-General and Treasurer of the European Association for Population Studies. All funding sources of the research were listed in the Acknowledgments and none of them has influenced the outcomes presented in this publication.
Declarations Conflict of interests The authors declare no conflicts of interest associated with this publication.
Footnotes 1 Automotive/Other vehicles, basic metals, electrical/electronics, food and beverages, glass, ceramics, stone, mineral products (non-automotive), industrial machinery, metal products (non-automotive), paper, pharmaceuticals, cosmetics, rubber and plastic products (non-automotive), textiles, wood and furniture, all other manufacturing branches/other chemical products not elsewhere classified. Publisher's Note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Unions and Automation Risk: Who Bears the Cost of Automation? Changkeun Lee and Olivia Hye Kim (2023). Source: https://www.degruyterbrill.com
Automation creates winners and losers. By examining establishment-level panel data, we explore how labour unions affect labor adjustment associated with automation. Although automation can increase new hires of junior and unskilled production workers, the presence of labour unions neutralizes these effects. The results suggest that labour unions have incentives to protect incumbent workers negatively affected by automation.
1 Introduction The employment effect of automation is one of the most significant challenges of our time. While countervailing evidence exists on the overall effects, most studies commonly highlight the heterogeneous effects by task, occupation, age, and education (Acemoglu and Restrepo 2022; Arntz, Gregory, and Zierahn 2017; Battisti and Gravina 2021; Blanas, Gancia, and Lee 2019). This paper adds to the literature by considering an underexplored dimension through which automation can affect employment outcomes: labour unions. The canonical insider-outsider theory states that labour unions tend to protect incumbent workers (Lindbeck 1988; Lindbeck and Snower 2001; Sanfey 1995). If automation threatens the employment of union members, labour unions have incentives to block or bargain over automation. Unions might do so to prevent technical change from weakening the coalition between workers with different skills (Acemoglu, Aghion, and Violante 2001). Therefore, when the management accepts the union's demands in order to implement automation, the employment cost of automation may fall on those who might benefit from it, including potential entrants. However, this question has rarely been addressed by economists. This is surprising given that the role of labour unions has regained research attention despite the rapid de-unionization underway over the past three decades (Naidu 2022). To our knowledge, Belloc et al. (2022) is the only study investigating the question of unions and automation. They find that labour unions and automation risk have a negative relationship. However, their analysis does not reveal which group of workers bears the cost of automation risk within nations and firms. Our paper offers micro-level empirical evidence for how labour unions moderate the effect of automation on employment outcomes. Using a Korean establishment-level panel dataset, we compare the patterns of employment changes due to automation between establishments with and without unions. We find that automation can lead to new hires of junior, unskilled workers, though labour unions neutralize this effect in order to protect senior skilled workers. As a result, reallocation through hires and separations is also limited. Our results, consistent with the standard theory of labour unions, provide useful insight into the effects of automation.
2 Data and Empirical Strategy

We used the Workplace Panel Survey (WPS) for the empirical analysis, a biannual survey conducted by the Korea Labour Institute. The advantage of this panel dataset is that it collects various information from establishments on employment and industrial relations, such as workforce composition, human resources management, labour unions, and conventional firm performance outcomes. We limit the scope of our analysis to small and medium establishments with fewer than 300 employees for two reasons. First, small and medium firms account for 99.9 percent of the firms and 81.3 percent of total employment in the Korean manufacturing sector. Second, automation and unionization may correlate with firm size. While unions at large workplaces wield powerful bargaining power, they do not represent the business population. Table 1 reports the descriptive statistics.

Table 1: Descriptive statistics.
Variable | Mean | SD | Min | Max
Hiring rate | 0.155 | 0.243 | 0 | 7
Separation rate | 0.184 | 0.491 | 0 | 18
Junior workers (age < 35) | 0.293 | 0.208 | 0 | 1
Production workers | 0.484 | 0.277 | 0 | 1
Unskilled workers | 0.068 | 0.175 | 0 | 0.971
New automation | 0.181 | 0.385 | 0 | 1
Degree of automation at t−1: 0–20% | 0.066 | 0.248 | 0 | 1
Degree of automation at t−1: 20–40% | 0.118 | 0.323 | 0 | 1
Degree of automation at t−1: 40–60% | 0.286 | 0.452 | 0 | 1
Degree of automation at t−1: 60–80% | 0.285 | 0.452 | 0 | 1
Degree of automation at t−1: 80–100% | 0.245 | 0.431 | 0 | 1
Has a union | 0.238 | 0.426 | 0 | 1
Multi-unit status (=1 if part of multi-unit) | 0.511 | 0.500 | 0 | 1
ln(wage per worker) | 3.864 | 0.412 | 2.35 | 5.173
Ownership (=1 if domestic) | 0.957 | 0.204 | 0 | 1
ln(total employment) | 4.750 | 1.076 | 0 | 8.808
Observations: 1,864

We estimate the effect of automation on employment outcomes using the following specification, separately by union status, and compare the patterns:

(1) $y_{i,j,t} = \alpha_0 + \alpha_1 A_{i,j,t} + \beta X'_{i,j,t-1} + \lambda_i + \theta_j + \delta_t + \varepsilon_{i,t}$

Measuring automation is the biggest measurement challenge in this literature. Existing studies have employed various proxies, such as automation cost, imports of capital goods related to automation, and answers to survey questions on robot use (Bessen et al. 2019; Domini et al. 2021; Koch, Manuylov, and Smolka 2021). Considering the high automation rate in South Korea, we focus on new automation compared to the previous period rather than automation levels. For automation, the survey includes a question that asks whether the establishment automated a part of the production process ("Compared to the previous year, is there any new automation that occurred in the product/service process?"). Using this question, we define $A_{i,j,t}$ as a binary variable equal to 1 if there was new automation at establishment i. For the employment outcome $y_{i,t}$, we consider the hiring and separation rates, the share of junior workers under 35 years old, and the shares of skilled and unskilled production workers in wave t. Our outcome variables parallel Domini et al. (2021), who likewise examine the employment dynamics of automation. We also include several establishment characteristics in wave t−1 as control variables, denoted by $X'_{i,j,t-1}$. Since new automation has different impacts depending on the initial automation level, we use a dummy variable for each automation level category (0–20%; 20–40%; 40–60%; 60–80%; and 80–100%). We also include a multi-unit dummy, log wage per worker, an ownership (domestic/foreign) dummy, and log employment. $\lambda_i$, $\theta_j$, and $\delta_t$ are the establishment, industry, and year fixed effects, respectively. Our results from the simple fixed-effects identification may not be sufficient to draw causal interpretations.
In particular, the probability of automation may correlate with characteristics of firms such as sales or productivity. To deal with the issue of non-random selection of firms into automation, we follow Guadalupe et al. (2012), Koch, Manuylov, and Smolka (2021), and Domini et al. (2021), who combine a fixed-effect approach with a propensity score reweighting estimation. This allows us to control for time-varying characteristics through the propensity score (Guadalupe et al. 2012). We suppose that the decision to automate is an outcome of a latent (unobserved) variable $A^*_{i,j,t}$, so that $\mathrm{Automation}_{i,j,t} = 1$ if $A^*_{i,j,t} \geq 0$ and $\mathrm{Automation}_{i,j,t} = 0$ if $A^*_{i,j,t} < 0$. Therefore, we first estimate the probability $\hat{p}$ of observing new automation at time t in the following pooled logit regression:

(2) $\mathrm{Automation}_{i,j,t} = \alpha + \beta X_{i,t-1} + \gamma_t + \theta_j + \varepsilon_{i,t}$

where X and the other variables are defined as in Eq. (1). Those variables closely match the ones used by Domini et al. (2021) and should control for the relevant observable differences in performance among firms. We find that all these variables positively impact the probability of automation. We then use the propensity scores obtained from the logit regression to construct firm-specific weights: each automating (treated) firm has a weight equal to $1/\hat{p}$, and each non-automating (control) firm has a weight equal to $1/(1-\hat{p})$, where $\hat{p}$ is the estimated propensity score. Finally, we estimate Eq. (1) using these weights.
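To make the two-step procedure concrete, here is a minimal sketch (our illustration, not the authors' code) using statsmodels: a pooled logit for the propensity to automate, inverse-propensity weights, and a weighted fixed-effects regression of an outcome on new automation. All column names and the synthetic data are hypothetical stand-ins for the WPS variables.

```python
# Illustrative sketch of the propensity-score reweighting estimator described above.
# The data below are synthetic stand-ins for the WPS; column names are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 600
df = pd.DataFrame({
    "establishment": rng.integers(0, 150, n),
    "year": rng.choice([2015, 2017, 2019], n),
    "industry": rng.integers(0, 5, n),
    "ln_wage": rng.normal(3.9, 0.4, n),
    "ln_employment": rng.normal(4.7, 1.0, n),
    "multi_unit": rng.integers(0, 2, n),
    "domestic": rng.integers(0, 2, n),
    "automation_level_lag": rng.integers(0, 5, n),  # 0-20% ... 80-100% bins
})
df["new_automation"] = rng.binomial(1, 0.2, n)
df["hiring_rate"] = 0.15 + 0.05 * df["new_automation"] + rng.normal(0, 0.1, n)

# Step 1: pooled logit for Pr(new automation), the analogue of Eq. (2).
logit = smf.logit(
    "new_automation ~ ln_wage + ln_employment + multi_unit + domestic"
    " + C(automation_level_lag) + C(year) + C(industry)", data=df,
).fit(disp=0)
p_hat = logit.predict(df)

# Step 2: inverse-propensity weights: 1/p for automators, 1/(1-p) for the rest.
df["w"] = np.where(df["new_automation"] == 1, 1 / p_hat, 1 / (1 - p_hat))

# Step 3: weighted outcome regression, the analogue of Eq. (1), with
# establishment, industry, and year fixed effects absorbed via dummies.
wfe = smf.wls(
    "hiring_rate ~ new_automation + ln_wage + ln_employment + multi_unit + domestic"
    " + C(establishment) + C(industry) + C(year)", data=df, weights=df["w"],
).fit()
print(wfe.params["new_automation"])
```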
3 Results

Table 2 presents the fixed-effect estimation results. Column 3 shows that automation is associated with an increase in the separation rate of about 6.1 percentage points, and Column 4 indicates that unions neutralize this effect.

Table 2: Effects of automation and union on employment: fixed-effect estimation.
Column | Outcome | Sample | New automation (SE) | Observations | R-squared
(1) | Hiring rate | No union | 0.047 (0.029) | 1,063 | 0.067
(2) | Hiring rate | Union | −0.025 (0.028) | 254 | 0.122
(3) | Separation rate | No union | 0.061b (0.031) | 1,063 | 0.077
(4) | Separation rate | Union | 0.029 (0.051) | 254 | 0.110
(5) | Junior workers (age < 35) | No union | 0.086b (0.044) | 1,063 | 0.065
(6) | Junior workers (age < 35) | Union | −0.035 (0.043) | 254 | 0.053
(7) | Skilled production workers | No union | −0.170a (0.051) | 1,063 | 0.117
(8) | Skilled production workers | Union | 0.037 (0.047) | 254 | 0.228
(9) | Unskilled production workers | No union | 0.121b (0.052) | 1,063 | 0.102
(10) | Unskilled production workers | Union | −0.060 (0.056) | 254 | 0.113
All columns include control variables and year and region fixed effects.

Table 2 also indicates that automation may cause an occupational shift. Column 5 shows that automation could give opportunities to junior workers below 35 years old. Among production workers, it appears to be related to a reduction in the share of skilled, likely senior, workers (Column 7) and an increase in unskilled workers, who are likely junior (Column 9). However, such effects are not found at establishments with labour unions, which have incentives to protect senior workers.

Table 3 presents the estimation results from the propensity score reweighting. Controlling for observable and unobservable differences among firms that affect automation, we obtain consistent results; we again find sharp differences in the employment outcomes of automation by union status. Columns 1 to 4 show that unions suppress worker reallocation caused by automation. Without labour unions, automation is associated with increases in the hiring rate (Column 1) and the separation rate (Column 3) of about 5–6 percentage points. However, we do not find such patterns at establishments with unions (Columns 2 and 4).

Table 3: Effects of automation and union: propensity score reweighting estimation.
Column | Outcome | Sample | New automation (SE) | Observations | R-squared
(1) | Hiring rate | No union | 0.056c (0.030) | 1,063 | 0.081
(2) | Hiring rate | Union | −0.009 (0.023) | 254 | 0.165
(3) | Separation rate | No union | 0.068b (0.031) | 1,063 | 0.096
(4) | Separation rate | Union | 0.028 (0.042) | 254 | 0.087
(5) | Junior workers (age < 35) | No union | 0.082c (0.045) | 1,063 | 0.126
(6) | Junior workers (age < 35) | Union | −0.031 (0.037) | 254 | 0.101
(7) | Skilled production workers | No union | −0.154a (0.050) | 1,063 | 0.155
(8) | Skilled production workers | Union | 0.027 (0.038) | 254 | 0.289
(9) | Unskilled production workers | No union | 0.111b (0.048) | 1,063 | 0.145
(10) | Unskilled production workers | Union | −0.045 (0.043) | 254 | 0.212
All columns include control variables and year and region fixed effects.

Similar to the fixed-effect results, Columns 5 to 10 show that automation is associated with an increase in the shares of junior and unskilled production workers and a decrease in the share of skilled, mostly senior, workers at non-union establishments. At unionized establishments, automation does not induce such compositional changes. Our results are consistent with the insider-outsider theory, which predicts that labour unions protect incumbent workers and thus impede the creation of new jobs for potential labour market entrants (Lindbeck and Snower 2001; Sanfey 1995). Our analysis demonstrates that automation favours young, less experienced workers and reduces the importance of skilled production workers.
South Korea provides a good setting for examining how unions moderate the effect of automation on employment. It has one of the highest robot densities in the world (International Federation of Robotics 2020). Labour unions are known to be militant and to make strong demands (Bae et al. 2008). However, the union membership rate in Korea is still low, 14.2 percent in 2020, which contrasts with European countries such as Germany (16.3%) or Norway (50.4%). This indicates that labour unions do not represent the overall workforce. Korean scholars have pointed out that senior male workers constitute the largest and most powerful group in labour unions and put their interests first – retaining jobs at the cost of new labour market entrants' job opportunities. The widening gap in views between generations keeps young workers from joining unions, accelerating the aging of union membership (Chung 2015). Our results corroborate these observations.
| 2023-07-01T00:00:00 |
2023/07/01
|
https://www.degruyterbrill.com/document/doi/10.1515/bejeap-2022-0446/html?lang=en&srsltid=AfmBOor2OOS0m1snlTudcyT3IVnRxHgdyihAnRCxeWlbWgCehFkTLKvU
|
[
{
"date": "2023/03/01",
"position": 41,
"query": "job automation statistics"
}
] |
Tutorial | Automation scenarios
|
Tutorial | Automation scenarios
|
https://knowledge.dataiku.com
|
[] |
From the Jobs ( Play button icon. ) menu in the top navigation bar, click Scenarios. · Click + New Scenario. · Name it Data Refresh . · Click Create.
|
Each row is a unique credit card ID with information such as the card’s activation month or the cardholder’s FICO score (a common measure of creditworthiness in the US).
Each row is a unique merchant with information such as the merchant’s location and category.
It also indicates whether the transaction has either been:
Each row is a unique credit card transaction with information such as the card that was used and the merchant where the transaction was made.
You can also download the starter project from this website and import it as a zip file.
From the project homepage, click Go to Flow (or type g + f ).
If needed, change the folder into which the project will be installed, and click Install .
For those interested, this tutorial also includes an optional exercise for SQL triggers. To complete this, you’ll also need a supported SQL connection .
To reproduce the steps in this tutorial, you’ll need:
Understand how to use a combination of metrics, checks, and/or data quality rules to control the logic of a scenario’s actions.
In this tutorial, you will:
Scenarios are the main tool for automating actions in Dataiku, such as rebuilding datasets or retraining models. Let’s see how they work!
Create and manually run a scenario
Consider the final dataset at the end of a pipeline. It may be the key input to a dashboard, webapp, or Dataiku app. It may be routinely shared with other Dataiku projects or exported to other software tools.
It’s a common need to automate the rebuilding of a dataset like this as new data becomes available at the start of a pipeline. To automate this task, create a scenario.
Note: This example walks through rebuilding a dataset, but you may use the same logic, for example, when retraining a model or refreshing a model evaluation store.
1. From the Jobs menu in the top navigation bar, click Scenarios.
2. Click + New Scenario.
3. Name it Data Refresh.
4. Click Create.
Tip: This tutorial walks through creating a step-based scenario. Although this type of scenario can include custom Python and SQL steps, it's also possible to create custom scenarios entirely in Python.
Add steps to a scenario

Assume tx_windows is the downstream dataset needing a rebuild. You'll need to add that instruction to the scenario.

1. Navigate to the Steps tab of the Data Refresh scenario.
2. Click Add Step at the bottom left.
3. Select Build / Train.
4. Click + Add Item > Dataset > tx_windows > Add Item.
5. Click Save (or Cmd/Ctrl + s).

Note: Here, you're automating an upstream build of tx_windows. Using the default build mode, the dependencies of tx_windows are built first. Alternatively, you could automate a downstream (left to right) build of tx by changing the build mode to Run recipes downstream of these items. In this case though, the other downstream endpoints (tx_distinct, etc.) would also be built.
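For the fully custom Python scenarios mentioned in the tip above, the equivalent of this build step is only a few lines. The sketch below is a minimal illustration, assuming it runs inside a Dataiku custom Python scenario where the dataiku package and the current project context are available; tx_windows is the dataset name used in this tutorial.

```python
# Minimal sketch of a custom Python scenario step (assumes execution inside
# Dataiku, where the dataiku package and project context are available).
from dataiku.scenario import Scenario

scenario = Scenario()

# Build the downstream dataset; with the default build mode, Dataiku first
# rebuilds its upstream dependencies, mirroring the step-based scenario above.
scenario.build_dataset("tx_windows")
```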
| 2023-03-01T00:00:00 |
https://knowledge.dataiku.com/latest/automation/scenarios/tutorial-scenarios.html
|
[
{
"date": "2023/03/01",
"position": 60,
"query": "job automation statistics"
}
] |
|
Examining How Career Counselors Make Meaning of the ...
|
Learning to Stay Current in the Fourth Industrial Revolution: Examining How Career Counselors Make Meaning of the Impact of Automation and Artificial Intelligence (AI) on Occupations
|
https://search.proquest.com
|
[
"Bard Fabricant"
] |
by FB Fabricant · 2022 · Cited by 4 — The data included questionnaires and interviews with 15 participants who worked as career counselors and expressed that learning about the impact of automation ...
|
| 2023-03-01T00:00:00 |
https://search.proquest.com/openview/4fec991a05e9b4fc377a16eb89f8ebf8/1?pq-origsite=gscholar&cbl=18750&diss=y
|
[
{
"date": "2023/03/01",
"position": 74,
"query": "job automation statistics"
}
] |
|
8 Machine Learning Jobs That Are in Demand - Dataquest
|
8 Machine Learning Jobs That Are in Demand – Dataquest
|
https://www.dataquest.io
|
[
"Brayan Opiyo"
] |
Explore the most in-demand machine learning jobs, including Machine Learning Engineer, Data Scientist, AI Research Scientist, and more.
|
8 Machine Learning Jobs That Are in Demand
Want to boost your career prospects? Learn machine learning. It's a powerful technology that can help you be a valuable asset to any team. With machine learning skills, you can collect, clean, analyze, and model data to uncover valuable insights and create impactful predictions.
In this article, we'll explore some of the top machine learning jobs available, discuss how to prepare for them, and help you determine if a career working with machine learning is right for you.
Why choose a career in machine learning?
Careers in machine learning are in high demand and poised for substantial growth, making this field an attractive choice for aspiring data professionals. As machine learning becomes more deeply integrated into business decision-making, the opportunities for those with the right skills are nearly endless.
Recent statistics paint a promising picture. The global machine learning market was valued at approximately $19.2 billion in 2022 and is projected to reach around $225.9 billion by 2030, reflecting a compound annual growth rate of about 36.2% (Fortune Business Insights report). In the United States, related roles are experiencing rapid expansion—with data science and machine learning positions projected to grow by roughly 36% from 2023 to 2033, significantly outpacing the average for most occupations (U.S. Bureau of Labor Statistics).
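As a quick arithmetic check (ours, not the report's), those two market figures imply a compound annual growth rate of about $(225.9 / 19.2)^{1/8} - 1 \approx 0.36$, or roughly 36% per year, consistent with the rate cited.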
Pursuing a machine learning career offers several key benefits:
Job security: Sustained demand across industries ensures ample long-term opportunities.
Growth potential: Professionals can expect substantial career development and advancement.
Strategic impact: Machine learning roles allow you to shape business decisions through data-driven insights.
Machine learning professionals play a vital role in transforming raw data into predictive models and actionable intelligence. By developing expertise in this powerful technology, you can position yourself for a rewarding career at the forefront of data-driven innovation.
Top 8 Machine Learning Jobs
Thinking about launching a career in the fast-growing field of machine learning? You've picked an excellent time! With businesses increasingly leveraging ML to drive innovation and efficiency, the demand for talented professionals is rapidly increasing. From tech giants to scrappy startups, companies are on the hunt for individuals with the right mix of technical skills and problem-solving prowess.
While AI is reshaping the machine learning landscape, it isn’t replacing human expertise. According to a 2025 analysis by McKinsey, companies are using AI to augment human decision-making and create new, strategic roles. Similarly, a 2024 Deloitte report highlights that businesses are increasingly investing in AI-driven initiatives that demand human creativity and strategic oversight, further fueling job growth in the field.
So, what are the most sought-after positions in the world of machine learning? We've crunched the numbers and analyzed hiring trends to identify the top 8 jobs you should know about:
Machine Learning Engineer
Data Scientist
Software Engineer
AI Research Scientist
Natural Language Processing Engineer
Business Intelligence Developer
AI Product Manager
AI Engineer
In the coming sections, we'll shine a spotlight on each of these roles, exploring the day-to-day responsibilities, must-have skills, and typical career trajectories. Whether you're just starting out or looking to level up, you'll come away with a clearer sense of which path aligns best with your unique mix of interests and abilities. Let's jump in and explore!
1. Machine Learning Engineer
Machine Learning Engineers are the masterminds behind intelligent systems that learn from data. They live and breathe algorithms, using frameworks like TensorFlow and PyTorch to build sophisticated models that drive innovation.
On a typical day, they might be knee-deep in preprocessing data, designing neural networks, optimizing models, or collaborating with teams to bring ML to life in real-world applications.
Salary: $134K - $215K/yr (Glassdoor)
Responsibilities:
Design and deploy ML models to solve complex problems
Build deep learning models using cutting-edge frameworks
Collaborate on end-to-end ML pipelines
Optimize models for performance and scalability
Key Skills:
Strong programming skills like Python
Solid math foundation (linear algebra, calculus, statistics)
Experience with ML frameworks (scikit-learn, PyTorch)
Knowledge of ML ops best practices
Machine Learning Engineers are at the forefront of the AI revolution, crafting the intelligent systems that are reshaping industries. As businesses increasingly implement ML solutions, the demand for these skilled professionals is skyrocketing.
2. Data Scientist
Data Scientists are the detectives of big data, wielding the power of statistical analysis and machine learning to uncover insights hidden in vast datasets. They're the bridge between raw data and strategic decision-making.
Compared to other machine learning roles, Data Scientists often focus more heavily on the analytical side, leveraging their strong statistical foundations to drive research and uncover key patterns. They're the ones finding the needle of insight in the data haystack.
A day in the life of a Data Scientist might involve wrangling messy customer data, exploring variables to predict churn, building sophisticated prediction models, and translating complex findings into clear, actionable recommendations for stakeholders.
Salary: $131K - $211K/yr (Glassdoor)
Responsibilities:
Wrangle, preprocess, and analyze complex datasets to uncover insights
Train and validate machine learning models for prediction and inference
Communicate data-driven findings and recommendations to drive business strategy
Collaborate across teams to define data needs and guide analysis
Key Skills:
Strong foundation in statistics, probability, and data analysis techniques
Proficiency in languages like Python or R for data manipulation and modeling
Knowledge of supervised and unsupervised machine learning algorithms
Ability to clearly visualize and communicate complex data insights
In an increasingly data-driven world, Data Scientists play a pivotal role in helping organizations get the most out of their data assets. As companies seek to gain a competitive edge through data-informed strategies, the demand for skilled Data Scientists shows no signs of slowing down.
3. Software Engineer
Software Engineers are the builders behind intelligent applications, seamlessly integrating cutting-edge ML models into user-friendly software products. They bridge the worlds of software development and artificial intelligence to create tools that run on data.
Software Engineers differ from other ML roles by concentrating heavily on the entire software development process, from initial design to final deployment. They specialize in converting ML prototypes into production-ready applications that can make a real impact in the world.
On a typical day, a Software Engineer might be found preprocessing datasets, experimenting with model architectures, optimizing hyperparameters, and integrating trained models into software systems. It's all about striking the perfect balance between performance and usability.
Salary: $179K - $272K/yr (Glassdoor)
Responsibilities:
Design and develop ML-powered software applications from start to finish
Collaborate with cross-functional teams to understand product requirements
Maintain and improve codebases for efficiency and reliability
Integrate and optimize ML models for performance and scalability
Key Skills:
Strong programming skills in languages like Python, Java, or C++
Familiarity with ML libraries such as TensorFlow or PyTorch
Solid understanding of software engineering principles and best practices
Excellent problem-solving skills for debugging and optimization
As businesses increasingly seek to put machine learning into the hands of users, skilled Machine Learning Software Engineers are in high demand. By building the bridge between powerful ML models and intuitive applications, these professionals are driving the widespread adoption of artificial intelligence across industries.
4. AI Research Scientist
AI Research Scientists are the trailblazers of the machine learning world, constantly pushing the boundaries of what's possible with artificial intelligence. They're the ones behind the cutting-edge algorithms and approaches that are revolutionizing industries and shaping our future.
While other machine learning roles focus on applying existing techniques, AI Research Scientists are all about innovation. They go deep into the theoretical underpinnings of ML, dreaming up novel solutions to the most complex challenges. Most positions require an advanced degree and a proven track record of groundbreaking research.
AI Research Scientists spend their days immersed in the latest deep reinforcement learning research, crafting experiments to test promising new architectures, and working with colleagues to transform their discoveries into publishable papers. The role requires a balance of innovation, technical precision, and an unwavering commitment to pushing the boundaries of the field.
Salary: $137K - $228K/yr (Glassdoor)
Responsibilities:
Conduct groundbreaking research to push the frontiers of AI capabilities
Develop novel ML algorithms, architectures, and techniques to solve complex problems
Collaborate with top researchers to produce and publish innovative work
Communicate complex research findings to both technical and non-technical audiences
Key Skills:
Ph.D. or equivalent experience in Computer Science, Statistics, or related field
Deep theoretical understanding of machine learning, statistics, and mathematics
Exceptional research abilities and creative problem-solving skills
Strong programming skills for implementing and testing novel algorithms
As artificial intelligence continues to transform our world, the groundbreaking work of AI Research Scientists becomes increasingly vital. By constantly expanding the boundaries of what machine learning can achieve, these pioneers are not only advancing the field but also discovering new possibilities for how AI can benefit society.
5. Natural Language Processing Engineer
Natural Language Processing (NLP) Engineers are the language whisperers of the AI world, teaching machines to understand and communicate with humans. They're the geniuses behind the sophisticated language models that power everything from chatbots and virtual assistants to sentiment analysis and content moderation tools.
What sets NLP Engineers apart is their deep expertise in linguistic concepts and techniques. They're not just building generic ML models, but tailoring algorithms to navigate the complex nuances of human language. A strong foundation in both programming and linguistics is a must for success in this role.
On a typical day, an NLP Engineer might be found annotating massive text datasets, fine-tuning state-of-the-art transformer models for specific use cases, or integrating cutting-edge language understanding capabilities into user-facing products. They help close the gap between the way humans communicate and what machines can comprehend.
Salary: $96K - $149K/yr (Glassdoor)
Responsibilities:
Design and develop advanced NLP models and pipelines for various language tasks
Implement sophisticated NLP techniques like parsing, tagging, and information extraction
Evaluate and optimize model performance using metrics like BLEU, ROUGE, and F1 score
Deploy and integrate NLP models into products like chatbots, voice interfaces, and text analyzers
Key Skills:
Deep expertise in natural language processing techniques, models, and algorithms
Strong programming skills, particularly in Python and popular NLP libraries like NLTK and spaCy
Experience with deep learning architectures and frameworks commonly used in NLP
Solid understanding of linguistic concepts such as syntax, semantics, and pragmatics
As businesses increasingly look to leverage the power of language data, the demand for skilled NLP Engineers is rapidly growing. By enabling machines to understand and engage with human language, these professionals are driving the development of AI systems that can communicate with us on our own terms.
6. Business Intelligence Developer
Machine Learning Business Intelligence (BI) Developers are the bridge between raw data and actionable insights, leveraging the power of ML to guide strategic decision-making. They're the wizards who transform complex data into clear, compelling visualizations that drive business value.
While other machine learning roles focus on building models, ML BI Developers specialize in putting those models to work. They're experts at using ML-powered insights to create interactive dashboards and reports that illuminate key trends and opportunities. SQL mastery and data visualization chops are the superpowers in this role.
On a typical day, an ML BI Developer might be found wrangling vast datasets, designing eye-catching visualizations to track critical metrics, or presenting game-changing insights to C-suite executives. They're great at transforming data into strategic ammunition that can give businesses a competitive edge.
Salary: $105K - $154K/yr (Glassdoor)
Responsibilities:
Design and develop innovative BI solutions and data pipelines powered by ML insights
Analyze complex data using advanced SQL techniques to uncover trends and opportunities
Create stunning interactive dashboards and reports to communicate key findings
Collaborate with cross-functional stakeholders to understand and meet evolving business needs
Key Skills:
Deep expertise in SQL, database management, and data warehousing techniques
Mastery of BI and data visualization tools like Tableau, Power BI, or Looker
Exceptional analytical and problem-solving skills to derive insights from complex data
Strong communication abilities to convey technical insights to non-technical audiences
In an increasingly data-driven business world, ML BI Developers are on the frontlines, using the power of artificial intelligence to inform strategic decisions. By uncovering the insights hidden in raw data, these professionals help organizations navigate complex challenges and seize emerging opportunities.
7. AI Product Manager
AI Product Managers are the visionaries who shape the future of machine learning products, guiding them from initial concept to successful launch and beyond. They're the strategists who coordinate the efforts of engineering, design, and business teams to bring game-changing AI innovations to market.
While other machine learning roles focus on the technical details, AI Product Managers take a higher-level view. They're less concerned with the specifics of model architectures and more focused on how ML can be used to drive user engagement and business growth. Strong leadership skills, domain expertise, and a knack for strategic thinking are the keys to success in this role.
An AI Product Manager's typical day involves specifying requirements for innovative ML features, optimizing resource allocation by prioritizing the development backlog, analyzing user data to pinpoint improvement opportunities, and presenting a persuasive product roadmap to senior management. The role focuses on guiding the team towards the most effective and lucrative machine learning applications.
Salary: $199K - $326K/yr (Glassdoor)
Responsibilities:
Define the overarching vision, strategy, and roadmap for cutting-edge AI/ML products
Coordinate the end-to-end development of ML-powered features from ideation to launch
Analyze product performance metrics and user feedback data to identify areas for optimization
Communicate product plans, progress, and results to cross-functional stakeholders and leadership
Key Skills:
Proven track record of successful product management for AI/ML-driven products
Solid understanding of machine learning concepts, techniques, and applications
Exceptional leadership, communication, and organizational abilities to drive complex initiatives
Strategic mindset and strong analytical skills to make data-informed product decisions
As businesses increasingly look to maximize the potential of artificial intelligence, the role of the AI Product Manager has become more critical than ever. By charting the strategic course towards new ML products, these visionary leaders are shaping the future of how we live, work, and interact with technology.
8. AI Engineer
AI Engineers are the architects who weave artificial intelligence into the fabric of our lives, bringing the power of machine learning to bear on real-world challenges. They're the masters of integration, working tirelessly to embed cutting-edge AI capabilities into the products and applications we use every day.
What sets AI Engineers apart is their end-to-end understanding of the AI solution lifecycle. They're not just building isolated models in a lab, but collaborating closely with cross-functional teams to align technical solutions with business objectives. From data wrangling to model deployment and beyond, they oversee the full journey from raw data to real-world impact.
On a typical day, an AI Engineer might be found preprocessing massive datasets to fuel machine learning models, optimizing algorithms for peak performance in production environments, or designing elegant APIs to integrate AI capabilities into existing software architectures. They tend to focus their time on connecting artificial intelligence with the tangible needs of the business world.
Salary: $161K - $267K/yr (Glassdoor)
Responsibilities:
Design and develop end-to-end AI solutions that align with business requirements
Integrate advanced AI capabilities like computer vision, NLP, and predictive analytics into products
Optimize machine learning models and pipelines for scalability, reliability, and performance
Collaborate with cross-functional stakeholders to understand needs and deliver impactful solutions
Key Skills:
Strong software engineering skills and experience with AI/ML development frameworks
Deep knowledge of machine learning algorithms, techniques, and best practices
Familiarity with data engineering, including data collection, cleaning, and preprocessing at scale
Excellent problem-solving skills and ability to translate business needs into technical solutions
As artificial intelligence continues to reshape industries across the globe, the demand for skilled AI Engineers is skyrocketing. By building the technological bridges that connect the world of AI research with the realm of practical application, these innovators are unlocking the vast potential of intelligent machines to transform the way we live and work.
How to Prepare for High-Paying Machine Learning Roles
Ready for a lucrative career in machine learning? To set yourself up for success, focus on three key areas: developing core technical skills, applying your knowledge through hands-on projects, and staying up-to-date with the latest advancements in the field.
1. Build a Strong Foundation
To excel in machine learning, you need a solid grasp of the fundamentals. Start by mastering programming languages like Python or R, and familiarize yourself with essential libraries like scikit-learn, TensorFlow, and PyTorch. Develop a deep understanding of probability, statistics, and linear algebra - the mathematical foundations of ML.
Looking for a comprehensive learning path? Check out our Machine Learning Engineer track. It covers topics you need to know, such as the following (a short code sketch after this list shows what they look like in practice):
Data preprocessing and feature engineering
Supervised and unsupervised learning algorithms
Deep learning and neural networks
Model evaluation and optimization techniques
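To make these topics concrete, here is a minimal, self-contained scikit-learn sketch (our illustration, not part of the Dataquest track) that strings together preprocessing, a supervised model, and evaluation on a built-in toy dataset:

```python
# Minimal end-to-end example: preprocessing, a supervised model, and evaluation.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, f1_score

# Load a small built-in dataset as a stand-in for your own data.
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

# Preprocessing (feature scaling) and a supervised learner in one pipeline.
model = Pipeline([
    ("scale", StandardScaler()),
    ("clf", LogisticRegression(max_iter=1000)),
])
model.fit(X_train, y_train)

# Model evaluation on held-out data.
pred = model.predict(X_test)
print(f"accuracy={accuracy_score(y_test, pred):.3f}  f1={f1_score(y_test, pred):.3f}")
```

Deep learning work swaps in frameworks like PyTorch or TensorFlow, but the same fit-and-evaluate loop applies.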
2. Work on Real-World Projects
Theory is important, but the real magic happens when you apply your skills to solve actual problems. Engage in hands-on machine learning projects that mimic real-world scenarios. Tackle challenges like:
Building a recommendation system for an e-commerce site
Developing a predictive maintenance model for industrial equipment
Projects not only reinforce your understanding of ML concepts but also help you develop valuable problem-solving skills. Participate in Kaggle competitions, contribute to open-source projects, and engage with the vibrant ML community to learn from experts and peers alike.
3. Stay on the Cutting Edge
Machine learning is a constantly evolving field, with new breakthroughs happening all the time. To stay competitive, you need to keep your finger on the pulse of the latest advancements and best practices.
Make a habit of reading influential publications like JMLR, following industry leaders on social media, and attending conferences and workshops. Engage in continuous learning through online courses, research papers, and side projects.
The key to success in high-paying machine learning roles is to build a strong foundation, apply your skills through real-world projects, and commit to ongoing learning. By focusing on these three areas, you'll position yourself for a thriving career at the forefront of artificial intelligence and data science.
How to Determine if a Machine Learning Role is Right for You
Here's how to assess if an ML role aligns with your skills, interests, and aspirations.
Machine Learning Roles at a Glance
Machine Learning Engineer: Builds and deploys ML models to solve real-world problems
Data Scientist: Analyzes complex data to uncover insights and inform business decisions
Software Engineer: Develops and maintains software systems and applications
AI Research Scientist: Conducts cutting-edge research to advance the field of AI
Natural Language Processing Engineer: Develops models and algorithms to process and analyze human language
Business Intelligence Developer: Creates tools and systems to analyze business data and support decision-making
AI Product Manager: Defines the strategy and roadmap for AI-powered products and features
AI Engineer: Designs and implements AI systems and solutions
Key Questions to Consider
To determine if an ML role is a good fit, ask yourself:
Are you fascinated by the potential of artificial intelligence to transform industries?
Do you have a strong foundation in math, statistics, and programming?
Are you a creative problem-solver who enjoys tackling complex challenges?
Can you effectively communicate technical concepts to non-technical stakeholders?
Are you committed to continuous learning in a rapidly evolving field?
Succeeding in machine learning roles requires a unique blend of technical skills, problem-solving abilities, and business acumen. According to reports by Deloitte and McKinsey, the most successful AI initiatives are those that align closely with strategic business goals and involve cross-functional collaboration across the organization.
Ultimately, you need to consider how your skills, passions, and career aspirations align with the responsibilities and opportunities of an ML role. Reflect on your motivation to work with data and algorithms to drive innovation and solve real-world challenges.
Next Steps
Machine learning is revolutionizing industries worldwide. As businesses increasingly leverage AI, the demand for skilled ML professionals is rapidly growing. If you're passionate about using data to solve complex problems, now is the ideal time to start a career in machine learning. Dataquest offers comprehensive Machine Learning and Generative AI Fundamentals paths to help you become job-ready.
To excel in your machine learning journey, continuously update your knowledge, engage with the ML community, and apply your skills through real-world projects. Whether you're a student or a professional, developing machine learning expertise can elevate your career and place you at the forefront of the AI revolution.
| 2025-02-25T00:00:00 |
2025/02/25
|
https://www.dataquest.io/blog/machine-learning-jobs-in-demand/
|
[
{
"date": "2023/03/01",
"position": 5,
"query": "machine learning job market"
},
{
"date": "2023/04/01",
"position": 5,
"query": "machine learning job market"
},
{
"date": "2023/07/01",
"position": 5,
"query": "machine learning job market"
},
{
"date": "2023/10/01",
"position": 5,
"query": "machine learning job market"
},
{
"date": "2023/12/01",
"position": 5,
"query": "machine learning job market"
},
{
"date": "2024/03/01",
"position": 5,
"query": "machine learning job market"
},
{
"date": "2024/05/20",
"position": 4,
"query": "machine learning job market"
},
{
"date": "2024/07/01",
"position": 5,
"query": "machine learning job market"
},
{
"date": "2024/11/01",
"position": 4,
"query": "machine learning job market"
},
{
"date": "2025/01/01",
"position": 5,
"query": "machine learning job market"
},
{
"date": "2025/02/01",
"position": 3,
"query": "machine learning job market"
},
{
"date": "2025/03/01",
"position": 3,
"query": "machine learning job market"
},
{
"date": "2025/04/01",
"position": 4,
"query": "machine learning job market"
},
{
"date": "2025/05/01",
"position": 3,
"query": "machine learning job market"
},
{
"date": "2025/06/01",
"position": 3,
"query": "machine learning job market"
}
] |
2025 Machine Learning Industry & Career Guide
|
2025 Machine Learning Industry & Career Guide
|
https://onlinedegrees.sandiego.edu
|
[] |
Indeed lists machine learning engineer as #8 among The Best Jobs in the U.S., citing a 53% growth rate since 2020.
|
What is Machine Learning?
First, let’s explore the definition of machine learning, which is categorized as a subsection (or type) of artificial intelligence. Machine learning is “all about extracting valuable information from data” — which could be anything from text and video to images and sound. As the name implies, the ability to learn is what separates machine learning from other AI subsets.
“Machine learning is a branch of AI that gives computer systems the ability to automatically learn and improve from experience, rather than being explicitly programmed. In machine learning, computers use massive sets of data and apply algorithms to train on and make predictions,” according to TechRepublic.
When it comes to real-life applications, tech site ZDNet reports, “From driving cars to translating speech, machine learning is driving an explosion in the capabilities of artificial intelligence – helping software make sense of the messy and unpredictable real world.”
According to MIT Technology Review, “Machine learning is the process that powers many of the services we use today — recommendation systems like those on Netflix, YouTube and Spotify; search engines like Google and Baidu; social-media feeds like Facebook and Twitter; voice assistants like Siri and Alexa. The list goes on.”
What Does a Machine Learning Professional Do?
There are several types of machine learning professionals, and some job responsibilities may overlap with other roles, depending on the size of an organization. In general, working with machine learning deals with complex challenges and often includes the following job duties:
Continuously monitoring and updating ML models to adapt to new data
Researching and developing ML algorithms
Implementing ML algorithms in various applications
Running experiments and tests on AI systems to evaluate functionality and efficiency
Designing and developing scalable ML systems
Performing statistical analyses to refine models and improve accuracy
Collaborating with data scientists and software engineers to integrate ML models into production environments
Analyzing large datasets to extract meaningful patterns and insights
“People interested in machine learning are often problem solvers,” according to U.S. News & World Report. “They are motivated by the challenge of finding patterns that others can’t see. They develop tools that can sift through huge datasets and find the commonalities. They want to turn the unknown into the known and help make better decisions and produce better results.”
One of the top roles among AI professionals is the machine learning engineer, whose responsibilities include building and managing platforms for ML projects. As Discover Data Science explains, machine learning engineers may often double as data scientists at smaller organizations.
“The responsibilities of a machine learning engineer will be relative to the project they’re working on,” according to Springboard.com. “However, if you explore the job postings, you’ll notice that for the most part, machine learning engineers will be responsible for building algorithms that are based on statistical modeling procedures and maintaining scalable machine learning solutions in production.”
Read on for insights into key machine learning skills and the machine learning engineer career path.
Skills for Machine Learning Professionals
According to TechRepublic, machine learning engineers must be skilled in the following:
Computer science fundamentals
Programming
Math and statistics
Data science
Deep learning
Problem solving
Software engineering and system design
Artificial intelligence
Soft skills are also extremely valuable in machine learning positions. These include:
Teamwork
Good communication
Organization
Strong work ethic
How to Start a Career in Machine Learning
Undergraduate degree
The first step toward establishing a career in machine learning is to secure an undergraduate degree in computer science, mathematics, data science, computer programming or related field.
Experience
As described by Discover Data Science, some machine learning positions may require previous experience in an engineer or artificial intelligence-specific role. For example, it’s important to know that a machine learning engineer is not an entry-level position, but one that often requires experience in data science and software engineering — plus an advanced degree. For a career in machine learning, you may want to start in a software engineer, software programmer, software developer, data scientist or computer engineer role.
Advanced degree
For most machine learning careers, an undergraduate degree will not suffice. A master’s in computer science, software engineering, artificial intelligence or machine learning can expand your knowledge and skills, and advance your career. Core machine learning courses may include computer science, high-level mathematics, applied machine learning, statistics, research, data analysis, artificial intelligence and more.
An advanced degree in artificial intelligence can prepare students for a wider range of AI- and ML-related opportunities.
8 Career Paths in Machine Learning
Figures listed are accurate as of February 2025. Salary data fluctuates frequently.
Machine Learning Job & Salary Outlook
The job outlook for machine learning — and artificial intelligence in general — is extremely promising. A February 2025 LinkedIn search for “machine learning engineer” returned over 2,800 results, including positions at companies such as Amazon, Coinbase, TikTok, Pinterest, Ford Motor Company, Netflix, Spotify and Adobe.
Machine learning and AI specialists are making a huge impact on the global workforce. Indeed lists machine learning engineer as #8 among The Best Jobs in the U.S., citing a 53% growth rate since 2020. With computer and information technology jobs projected to grow much faster than the average for all occupations from 2022 to 2032, the demand for AI and machine learning specialists is expected to rise by 40% — or 1 million jobs — over the next five years, according to the World Economic Forum’s Future of Jobs Report 2023. This significant growth underscores a bright machine learning jobs future.
This high demand for machine learning talent translates into competitive salaries. According to Glassdoor, the average annual salary for a machine learning engineer in the United States is $168,730, with salaries ranging from $135,000 to $215,000.
Companies Hiring Machine Learning Professionals
On LinkedIn, the long list of companies seeking machine learning professionals features some of the biggest names in business, as well as companies serving a wide variety of industries. Here is a sample:
Google, Workday, Amazon, Capital One, Brown University, Microsoft, Liberty Mutual, Spotify, Meta, TikTok, Boston Children's Hospital, Reebok, Wayfair, Mayo Clinic, Major League Soccer, National Football League, Massachusetts Institute of Technology, CarMax, FDA, Target, Sephora
Machine Learning Career Landscape
The machine learning opportunities available today are vast and varied, providing ample room for growth and advancement. Video surveillance, smartphone traffic alerts, facial recognition software, personalized product recommendations on website browsers — these are all examples of real-life machine learning applications. This important field appeals to many due to its potential for technological breakthroughs and lucrative career opportunities.
The machine learning career landscape offers a variety of paths for professionals at different stages of their careers. Whether you aspire to become a machine learning engineer or explore other ML-related career opportunities, the future in this field is promising and full of potential.
Frequently Asked Questions
Is machine learning a good career?
Yes, it offers strong job growth, high salaries and opportunities to work with cutting-edge technology. A machine learning career is both rewarding and impactful.

What are the career paths for machine learning?
Career paths for machine learning professionals typically involve:
Entry-level roles such as data analyst or junior data scientist.
Advancing to positions like machine learning engineer or data scientist.
Specializing further into roles such as AI research scientist or NLP scientist.
Potentially moving into senior or leadership positions, such as senior machine learning engineer or head of AI.

How do I start a career in machine learning?
To start a career in machine learning, you should:
Get a relevant bachelor's degree (e.g., computer science).
Learn programming languages (Python, R).
Study machine learning concepts.
Gain practical experience through projects or internships.
Consider advanced education, such as a master's degree in artificial intelligence, and professional certifications.

Is machine learning high paying?
Yes, with an average salary of $168,730 in the U.S., machine-learning jobs are considered high-paying. Depending on your experience, location and other factors, you can expect a salary range of $135,000 to $215,000.
Take Your Machine Learning Career to the Next Level with an Advanced Degree
High-paying career opportunities in AI, machine learning and related disciplines continue to expand across a variety of industries. An advanced degree in artificial intelligence can position you for success and provide a competitive edge in the growing artificial intelligence job market, which includes machine learning.
The University of San Diego — a highly regarded industry thought leader and education provider — offers an innovative, online AI master’s degree program, the Master of Science in Applied Artificial Intelligence, which is designed to prepare graduates for success in this important, fast-growing field. This program includes a significant emphasis on real-world applications, ethics, privacy, moral responsibility and social good in designing AI-enabled systems.
| 2021-07-29T00:00:00 |
2021/07/29
|
https://onlinedegrees.sandiego.edu/machine-learning-engineer-career/
|
[
{
"date": "2023/03/01",
"position": 7,
"query": "machine learning job market"
},
{
"date": "2023/04/01",
"position": 7,
"query": "machine learning job market"
},
{
"date": "2023/07/01",
"position": 8,
"query": "machine learning job market"
},
{
"date": "2023/10/01",
"position": 7,
"query": "machine learning job market"
},
{
"date": "2023/12/01",
"position": 7,
"query": "machine learning job market"
},
{
"date": "2024/03/01",
"position": 7,
"query": "machine learning job market"
},
{
"date": "2024/07/01",
"position": 7,
"query": "machine learning job market"
},
{
"date": "2024/10/01",
"position": 86,
"query": "machine learning workforce"
},
{
"date": "2024/11/01",
"position": 7,
"query": "machine learning job market"
},
{
"date": "2025/01/01",
"position": 7,
"query": "machine learning job market"
},
{
"date": "2025/02/01",
"position": 5,
"query": "machine learning job market"
},
{
"date": "2025/03/01",
"position": 7,
"query": "machine learning job market"
},
{
"date": "2025/04/01",
"position": 7,
"query": "machine learning job market"
},
{
"date": "2025/05/01",
"position": 7,
"query": "machine learning job market"
},
{
"date": "2025/06/01",
"position": 7,
"query": "machine learning job market"
}
] |
The 2025 Playbook: Outlook of the Machine Learning Engineer Job ...
|
The 2025 Playbook: Outlook of the Machine Learning Engineer Job Market Trends
|
https://magnimindacademy.com
|
[] |
There are fewer machine learning engineers at present, but there is a high probability they would be in demand as 2025 approaches. In light of the fact that ...
|
Machine learning engineers are still relatively scarce, and demand for them is likely to keep rising as 2025 approaches. With companies in virtually every sector adopting AI solutions, this role could become one of the most attractive in tech. For now, let us consider where the market stands today and what young professionals should expect.
AI and Other Emerging Technologies Creating New Job Opportunities
Given the current trajectory of machine learning, together with advances in data processing, automation, and artificial intelligence, demand for ML professionals is likely to rise significantly by 2025. Most new listings for these roles are forecast to come from retail, banking, logistics, and healthcare.
In concrete terms, the work centres on building and refining predictive models and incorporating new AI applications.
Key Industries Hiring Machine Learning Engineers
Even while AI is changing almost every industry, some are setting the pace:
Healthcare: Personalised medicine and AI-powered diagnostics are becoming more popular.
Finance: Complex machine learning models are needed for algorithmic trading and fraud detection.
Retail: There is a strong need for inventory optimisation and customer behaviour analysis.
Logistics: AI is transforming route optimisation and supply chain management.
Trends Affecting Careers in AI: Work from Home and Hybrid
Another trend currently defining the AI job market is remote and hybrid work. Machine learning engineers now routinely work from home or other locations, which makes communication and collaboration skills as important as technical knowledge.
This has also made it possible for companies to recruit the best candidates on the market regardless of their location.
Quick Tip: Programming languages such as Python and Rust, along with an understanding of multiple domains, are among the most valuable skills to build if you wish to pursue this career.
Machine Learning Engineer Jobs 2025: Converging Trends
AI is in an enormous transition across domains, with emerging trends that demand a new set of tools from ML engineers. To keep pace, you need to be aware of these trends and adapt your skills as needed.
The Rise of Generative AI and its influence on work
ChatGPT, DALL-E, and other generative AI technologies have pushed the limits of what AI can achieve.
As these technologies see wider use, there is a growing need for engineers capable of:
Training and fine-tuning large language models for new tasks.
Improving the cost and energy efficiency of AI systems.
Establishing content-creation guidelines that ensure the ethical use of AI.
The Landscape of AI Talent: What the Future Will Look Like
Machine learning experience alone is no longer the benchmark.
Employers now want candidates who can combine AI with other areas, such as:
Domain Knowledge: Familiarity with the challenges of specific industries such as finance or healthcare.
Data Engineering: The ability to design data pipelines that can handle large datasets.
Software Development: The ability to deploy machine learning models into production and keep them performing at the expected level.
Importance of Accreditations and Distance Learning Courses
Online learning programs offered by platforms such as Coursera, edX, and Udemy are inexpensive and flexible ways to keep up with:
Current developments in machine learning.
Practical AI-related skills, such as Rust programming for numerical computing and matrix calculations.
More specialised areas such as computer vision or natural language processing.
Pro Tip: Keep your CV and portfolio up to date with your certifications and project work so that you stay competitive in the modern AI workforce.
Future of Machine Learning Careers: What Are the Expectations?
Machine learning careers are going to be diverse, and AI is expected to open up new prospects, but this expansion also comes with a set of challenges that will need to be addressed.
The Evolution of Machine Learning Roles
Machine learning engineering roles are evolving as job scopes become more clearly defined.
Come 2025, expect to see:
AI Strategists: Engineers who devise and carry out long-term AI deployment plans within organizations.
Data Science Integrators: Specialists who connect large, disorganized streams of data with the models and systems built to implement AI.
AI Deployment Specialists: Experts who embed artificial intelligence into a business's existing systems.
Emerging Employment Trends to Watch and Prepare For
While demand is a global phenomenon, it concentrates in particular specializations, and the following areas are especially well positioned:
1. NLP Engineers
NLP powers many applications, such as chatbots, voice assistants, and translation.
Key skills include:
Text classification and emotion or sentiment tracking.
Fine-tuning pre-trained models such as BERT or GPT.
2. Computer Vision Specialists
An understanding of computer vision is increasingly important as demand for self-driving cars and facial recognition technology rises.
The focus mainly includes:
Image classification, object detection, and segmentation.
Using advanced tools like TensorFlow and PyTorch.
3. AI Ethics and Bias Analysts
Every AI system embodies a particular set of assumptions and values, and keeping those values ethical matters more and more as AI's impact spreads to new areas.
In this role, professionals work to:
Identify and reduce bias in AI algorithms.
Help define and implement ethical and regulatory boundaries.
Quick Insight: Skills in languages such as Rust, targeted at data science and numerical computing, are gaining traction and give ML professionals an edge in the field.
In-Demand AI Skills 2025: What Employers Want
The 2025 machine learning engineer should balance technical expertise with soft skills. Beyond core competence, employers increasingly value the ability to work with new technology and with diverse teams.
Core Technical Skills
Here’s what you’ll need to stand out:
1. Programming Languages
Python remains a staple, but Rust is gaining traction for its efficiency in numerical computing and data workflows. Key skills include:
Writing efficient algorithms in Rust.
Leveraging Rust libraries for data science, such as ndarray and Polars.
Optimizing matrix operations in Rust for high-performance tasks (see the sketch after this list).
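To make the matrix-operations point concrete, here is a minimal sketch using the ndarray crate (assumed as a Cargo dependency; the matrices and values are made up). It multiplies two small matrices and applies an element-wise transform, the kind of kernel where Rust's performance pays off.

```rust
// Minimal sketch: matrix work with the ndarray crate (assumed dependency,
// e.g. ndarray = "0.15"); the data is purely illustrative.
use ndarray::{array, Array2};

fn main() {
    // A 2x3 matrix and a 3x2 matrix built with the array! macro.
    let a: Array2<f64> = array![[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]];
    let b: Array2<f64> = array![[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]];

    // Matrix product (a 2x2 result); without a BLAS backend ndarray uses
    // its pure-Rust fallback.
    let product = a.dot(&b);

    // Element-wise scaling with mapv, a common post-processing step.
    let scaled = product.mapv(|x| x * 0.5);

    println!("product =\n{product}");
    println!("scaled =\n{scaled}");
}
```

For larger matrices, ndarray can optionally delegate the same dot call to a BLAS backend, which is one reason it appears so often in Rust numerical code.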
2. Machine Learning Frameworks
Familiarity with tools like TensorFlow and PyTorch is essential, but proficiency in emerging frameworks optimized for performance will set you apart.
3. Cloud and Distributed Computing
Knowledge of AWS, Google Cloud, and Azure for deploying models at scale is becoming a necessary skill set.
4. Advanced Mathematical Knowledge
Master concepts like:
Linear algebra for matrix computations.
Probability and statistics for model evaluation (a small worked example follows this list).
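As a small worked example of the statistics side, the std-only sketch below computes accuracy and binary log-loss over a handful of made-up predictions; no external crates are assumed.

```rust
// Std-only sketch: accuracy and binary cross-entropy (log-loss) for a tiny,
// made-up set of predictions. Purely illustrative numbers, not model output.
fn main() {
    let labels = [1u8, 0, 1, 1, 0];
    let probs = [0.92_f64, 0.20, 0.65, 0.40, 0.10]; // predicted P(label == 1)
    let n = labels.len() as f64;

    // Accuracy: fraction of predictions on the correct side of 0.5.
    let accuracy = labels
        .iter()
        .zip(&probs)
        .filter(|&(&y, &p)| (p >= 0.5) == (y == 1))
        .count() as f64
        / n;

    // Log-loss: -(1/n) * sum over samples of y*ln(p) + (1-y)*ln(1-p).
    let ce_sum: f64 = labels
        .iter()
        .zip(&probs)
        .map(|(&y, &p)| {
            let y = y as f64;
            y * p.ln() + (1.0 - y) * (1.0 - p).ln()
        })
        .sum();
    let log_loss = -ce_sum / n;

    println!("accuracy = {accuracy:.2}, log loss = {log_loss:.3}");
}
```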
Soft Skills in High Demand
Machine learning engineers must also excel in areas beyond coding:
1. Problem-Solving and Creativity
Employers value engineers who can think outside the box to design innovative AI solutions for complex problems.
2. Team Collaboration in AI Teams
Clear communication and teamwork in remote and hybrid work settings are increasingly important as cross-functional teams become the norm.
Pro Tip: Include projects on your CV that demonstrate both your soft skills and your technical skills.
Emerging Machine Learning Technologies to Watch
New AI developments are set to change how machine learning is used in the coming years, and they are driving the AI job market trends of 2025. Here are some things to look out for.
Edge AI and On-Device Processing
Edge AI runs model inference (and in some cases training) directly on the device, reducing latency and improving privacy. Key areas include:
Neural networks optimized for mobile and IoT devices.
Applications in wearable technology, smart features built into everyday gadgets, and even driverless vehicles.
Advances in Reinforcement Learning
Reinforcement learning continues to unlock possibilities in robotics, gaming, and real-time decision-making. Expect developments in:
More efficient reward systems.
Hybrid models combining reinforcement learning with other techniques like supervised learning.
Rust for Numerical Computing and Data Workflow Acceleration
Rust is emerging as a game-changer in machine learning due to its high performance and safety features.
1. Rust Performance Benchmarks
Rust outshines traditional languages like Python in computational tasks, thanks to its:
Speed in handling matrix operations.
Efficiency in memory management, crucial for large datasets (see the streaming sketch after this list).
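As one illustration of the memory-management point, the std-only sketch below streams a large CSV-like file line by line rather than loading it into memory at once; the file name and column layout are placeholders.

```rust
// Std-only sketch: stream a large CSV-like file line by line so peak memory
// stays roughly constant regardless of file size. "features.csv" is a
// placeholder path; the first column is assumed to hold a numeric value.
use std::fs::File;
use std::io::{self, BufRead, BufReader};

fn main() -> io::Result<()> {
    let reader = BufReader::new(File::open("features.csv")?);

    let mut rows = 0usize;
    let mut sum = 0.0f64;

    // Each line is parsed and dropped before the next one is read.
    for line in reader.lines() {
        let line = line?;
        if let Some(first_field) = line.split(',').next() {
            if let Ok(value) = first_field.trim().parse::<f64>() {
                sum += value;
                rows += 1;
            }
        }
    }

    println!("rows = {rows}, mean of first column = {:.4}", sum / rows.max(1) as f64);
    Ok(())
}
```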
2. Real-World Applications in Machine Learning
Rust’s versatility is evident in its adoption for:
Building scalable data workflows.
Developing AI tools that prioritize performance without sacrificing safety.
Accelerating Data Workflows with Rust
Rust libraries for data science, such as DataFusion and ndarray, are simplifying complex workflows.
Use cases include:
Real-time data analytics.
Seamless integration with big data platforms (a query sketch follows this list).
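As a hedged sketch of querying files in place, the example below registers a CSV file with DataFusion and runs plain SQL over it. It assumes the datafusion and tokio crates as dependencies, and the table, file, and column names are invented for illustration.

```rust
// Hedged sketch: SQL over a CSV file with DataFusion (assumed dependencies:
// datafusion, and tokio with its runtime/macros features). "events.csv",
// the table name, and the columns are placeholders.
use datafusion::error::Result;
use datafusion::prelude::{CsvReadOptions, SessionContext};

#[tokio::main]
async fn main() -> Result<()> {
    let ctx = SessionContext::new();

    // Expose the CSV file as a queryable table named "events".
    ctx.register_csv("events", "events.csv", CsvReadOptions::new())
        .await?;

    // Run plain SQL against the file and print the resulting batches.
    let df = ctx
        .sql("SELECT user_id, COUNT(*) AS n FROM events GROUP BY user_id ORDER BY n DESC LIMIT 10")
        .await?;
    df.show().await?;

    Ok(())
}
```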
Quick Insight: Engineers skilled in Rust for numerical computing and data science will be at the forefront of innovation in 2025.
Tools and Frameworks Shaping AI Development
What will the machine learning world look like in 2025? A great portion of the change will come from advanced tools and frameworks that boost productivity, improve workflows, and enable greater scale.
Staying in touch with these technologies is essential to remaining relevant in the profession.
Rust’s Growing Role in AI Development
Rust is making waves in AI for its unparalleled speed, safety, and performance.
1. Comparison with Python and C++ in Performance
Rust offers significant advantages over Python and C++ in key areas:
Speed: Rust’s compilation process produces highly optimized executables.
Memory Safety: Unlike C++, Rust minimizes memory errors through its ownership model.
Concurrency: Rust handles multi-threading efficiently, which is crucial for parallel computing tasks (see the scoped-thread sketch after this list).
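To illustrate the concurrency point, here is a std-only sketch that splits a large vector across scoped threads and sums the chunks in parallel; the ownership and borrowing rules guarantee the shared data outlives every worker. The data and chunk count are arbitrary.

```rust
// Std-only sketch: scoped threads (stable since Rust 1.63) summing chunks of
// a vector in parallel. The borrow checker ensures `data` outlives the scope.
use std::thread;

fn main() {
    let data: Vec<f64> = (0..1_000_000).map(|i| i as f64 * 0.001).collect();
    let chunks: Vec<&[f64]> = data.chunks(data.len() / 4 + 1).collect();

    let total: f64 = thread::scope(|s| {
        // Spawn one worker per chunk; each returns a partial sum.
        let handles: Vec<_> = chunks
            .iter()
            .map(|chunk| s.spawn(move || chunk.iter().sum::<f64>()))
            .collect();
        // Join the workers and add up their partial sums.
        handles.into_iter().map(|h| h.join().unwrap()).sum()
    });

    println!("parallel sum = {total}");
}
```

In practice many teams reach for the rayon crate for this kind of data parallelism, but the scoped-thread version above shows the safety guarantees with the standard library alone.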
2. Libraries Optimized for Speed and Efficiency
Rust’s ecosystem is expanding rapidly with libraries tailored for AI and data science:
ndarray: For numerical operations and array manipulations.
Polars: A fast DataFrame library for big data tasks (sketched after this list).
DataFusion: Enables SQL queries over large datasets for real-time analytics.
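A hedged sketch of the Polars DataFrame API in Rust follows. It assumes the polars crate with its lazy feature enabled; the column names and values are made up, and the exact API surface shifts slightly between Polars releases, so treat it as illustrative.

```rust
// Hedged sketch: a small Polars DataFrame plus a lazy query (assumed
// dependency: polars with the "lazy" feature). Columns and values are made up.
use polars::prelude::*;

fn main() -> PolarsResult<()> {
    let df = df![
        "model"   => ["a", "b", "c", "d"],
        "f1"      => [0.81, 0.88, 0.74, 0.91],
        "latency" => [120, 95, 180, 60],
    ]?;

    // Lazy query: keep fast, accurate models and project two columns.
    let best = df
        .lazy()
        .filter(col("latency").lt(lit(150)).and(col("f1").gt(lit(0.8))))
        .select([col("model"), col("f1")])
        .collect()?;

    println!("{best}");
    Ok(())
}
```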
Other Must-Have Machine Learning Frameworks
While Rust is rising, traditional frameworks remain indispensable:
TensorFlow and PyTorch: For deep learning and neural network development.
Hugging Face Transformers: Essential for NLP and large language model tasks.
ONNX: For model interoperability across platforms.
When to Choose Rust Over Traditional Tools
Rust is ideal when performance is a priority, such as:
Deploying machine learning models in resource-constrained environments.
Processing large-scale datasets in real time.
Pro Tip: Combine Rust with Python for a hybrid workflow—using Python’s extensive libraries for prototyping and Rust for production-grade performance.
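One common way to realize that hybrid workflow is PyO3, which compiles Rust into an importable Python module. The sketch below is illustrative only: the module name fastkernels, the dot function, and the maturin build step are assumptions, and the module signature follows the older pre-0.21 PyO3 style (newer releases use the Bound<'_, PyModule> form).

```rust
// Hedged sketch: exposing one hot-path Rust function to Python with PyO3
// (assumed dependency: pyo3 with the "extension-module" feature, built via
// maturin). Module and function names are invented for illustration.
use pyo3::prelude::*;

/// Dot product of two equally sized vectors, callable from Python.
#[pyfunction]
fn dot(a: Vec<f64>, b: Vec<f64>) -> PyResult<f64> {
    if a.len() != b.len() {
        return Err(pyo3::exceptions::PyValueError::new_err("length mismatch"));
    }
    Ok(a.iter().zip(&b).map(|(x, y)| x * y).sum())
}

/// Python sees this as `import fastkernels; fastkernels.dot(xs, ys)`.
#[pymodule]
fn fastkernels(_py: Python<'_>, m: &PyModule) -> PyResult<()> {
    m.add_function(wrap_pyfunction!(dot, m)?)?;
    Ok(())
}
```

In this split, Python stays the place for quick experimentation, while the numerical hot path lives in Rust and is called like any other Python function after a maturin develop build.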
Career Tips for Machine Learning Engineers in 2025
As machine learning evolves, the field is becoming more competitive, but there are strategies to keep you from falling behind. Here is how to future-proof your work as a machine learning engineer.
How to Stay Ahead in a Competitive Market
1. Build a Portfolio with Emerging Technologies
Employers are looking for hands-on experience. Create a strong portfolio by working on projects that leverage:
Emerging machine learning technologies like Edge AI.
Programming languages such as Rust for numerical computing.
Tools for optimizing matrix operations and data workflows.
2. Attend AI Conferences and Hackathons
These events are excellent for:
Networking with industry leaders.
Gaining insights into the latest AI job market trends for 2025.
Showcasing your skills through competitive projects.
Choosing the Right Specialization for Career Growth
Identify areas where demand is high and align them with your interests:
NLP: With the continued growth of generative AI, NLP remains a lucrative specialization.
Computer Vision: Essential for industries like healthcare, autonomous vehicles, and retail.
AI Ethics: Growing concerns over bias and fairness make this an emerging field.
Networking and Professional Development in AI
Build a strong professional network to stay informed about opportunities and trends:
Join LinkedIn groups focused on AI and machine learning.
Participate in online forums like GitHub, Stack Overflow, or Rust-focused communities.
Engage in continuous learning through platforms offering certifications and new skills.
Pro Tip: Tailor your learning to include skills like Rust for data science and participate in projects that highlight your expertise in accelerating data workflows.
Getting Ready for the Future as a Machine Learning Engineer in 2025
Even with the rise in hiring for machine learning engineers, landing a position is not easy and requires more than skills alone.
To succeed, you will have to stay one step ahead of the industry's requirements. Here is how you can remain relevant in 2025 and beyond.
1. Master Emerging Technologies
With so many people concentrated in the same areas, it pays to specialize in new and upcoming technologies.
Rust for Performance Improvement: Rust has become a favorite language where speed and efficiency matter. Machine learning engineers who know it are a strong asset for tasks involving numerical computing and data workflow acceleration.
Generative AI Tools: GPT and DALL-E are only a few of the generative models changing how work is done. Understanding how these models are used makes you a valuable resource to potential employers.
Edge AI: AI is gradually shifting toward on-device and edge computing, so expertise in building AI models that run on devices will be highly valued.
2. Understand Which Frameworks Are in Demand
Although newer tools such as Rust are gaining ground, classical ML frameworks remain essential resources for AI development.
TensorFlow and PyTorch: Despite the evolution of deep learning and the emergence of new methods, deep learning models are still largely built and trained with these two frameworks, the most widely accepted in academia and industry.
Hugging Face Transformers: With NLP tasks now inseparable from large language models, this library is instrumental in 2025, especially given the growing number of text-generating models.
ONNX: Competence with ONNX (Open Neural Network Exchange) lets you move models easily between AI platforms.
3. Stay Ahead with Data Science Tools
Machine learning engineers are often required to work alongside data scientists. To make yourself more marketable, gain proficiency in data science tools and libraries.
Big Data Platforms: Learning how to work with big data platforms like Apache Spark and Hadoop will give you an edge in industries dealing with massive datasets.
Rust Libraries for Data Science: Explore libraries such as ndarray for numerical operations and Polars for high-performance data manipulation. These Rust tools are becoming essential for data-driven AI solutions.
SQL and Data Engineering Skills: A good grasp of SQL, along with the ability to design reliable data-fetching processes, is a valuable addition to the ML engineer's toolbox.
Conclusion
There are plenty of opportunities for machine learning engineers in 2025. To stay competitive, specialize in current technologies such as Rust and keep up with the latest in-demand AI skills. Stay flexible, keep learning, keep developing, and build professional contacts that will support your career.
To succeed, learn to balance technical depth with the ability to collaborate and solve problems, so you are well equipped for whatever obstacles the future brings.