Dataset fields:
title_s: string, 2 to 79 characters
title_dl: string, 0 to 200 characters
source_url: string, 13 to 64 characters
authors: list, 0 to 10 items
snippet_s: string, 0 to 291 characters
text: string, 21 to 100k characters
date: timestamp[ns], 1926-02-14 00:00:00 to 2030-07-14 00:00:00
publish_date_dl: string, 0 to 10 characters
url: string, 15 to 590 characters
matches: list, 1 to 278 items
Brisk Teaching: Free AI Tools for Teachers and Educators
Free AI Tools for Teachers and Educators – Brisk Teaching
https://www.briskteaching.com
[]
Brisk is the best AI for teachers and schools, offering free AI tools for education that fit seamlessly into your routine. With an AI lesson plan generator ...
"I have seen Brisk Teaching transform the teaching experience for educators and the learning experience for students with the click of a button." Danielle Bosanec
"I didn't have to learn Brisk, it's so intuitive. It figured out what I wanted, and it didn't take any extra time." Nicole L.
"This is truly brilliant, the best product I have ever seen in my teaching career of 25 years." Middle School Teacher
"I just have to say, Brisk is the gift that keeps on giving to my classroom. I have used it almost every day... As a teacher who left teaching because of burnout and came back because I missed the classroom so much, your tool has made my job SO MUCH MORE MANAGEABLE, and has allowed me to give such quality feedback to my students in SECONDS! As an ELA teacher, this is life saving. Truly." Kelly Ann S.
"What petition do I need to sign or group do I need to join in order to get premium BRISK FOREVER? This targeted feedback is ridiculously accurate and has already saved me hours. Anyone not using this is, let's be honest, crazy." Middle School Teacher
"As a Language Arts teacher, one of the hardest parts of my job is providing enough meaningful feedback for my students. Brisk has allowed me to provide way more specific and actionable feedback for my students. It has also helped me to see more elements of quality that I had previously overlooked in my students' work." Christa M.
2022-12-01T00:00:00
https://www.briskteaching.com/
[ { "date": "2022/12/01", "position": 32, "query": "AI education" }, { "date": "2022/12/01", "position": 69, "query": "artificial intelligence education" }, { "date": "2023/02/28", "position": 9, "query": "AI education" }, { "date": "2023/03/01", "position": 37, "query": "AI education" }, { "date": "2023/06/01", "position": 33, "query": "AI education" }, { "date": "2023/09/01", "position": 32, "query": "AI education" }, { "date": "2023/11/01", "position": 33, "query": "AI education" }, { "date": "2024/01/01", "position": 33, "query": "AI education" }, { "date": "2024/03/01", "position": 34, "query": "AI education" }, { "date": "2024/05/01", "position": 36, "query": "AI education" }, { "date": "2024/07/01", "position": 32, "query": "AI education" }, { "date": "2024/10/01", "position": 31, "query": "AI education" }, { "date": "2024/11/01", "position": 33, "query": "AI education" }, { "date": "2025/02/01", "position": 84, "query": "artificial intelligence education" }, { "date": "2025/04/01", "position": 21, "query": "AI education" }, { "date": "2025/05/01", "position": 18, "query": "AI education" }, { "date": "2025/06/01", "position": 74, "query": "artificial intelligence education" } ]
AI in Education Podcast
AI in Education Podcast
https://aipodcast.education
[]
It's a weekly chat about Artificial Intelligence in Education - what it is, how it works, and the different ways it is being used.
Welcome to the AI in Education podcast, with Dan Bowen from Microsoft Australia and Ray Fleming from InnovateGPT. It's a weekly chat about Artificial Intelligence in Education - what it is, how it works, and the different ways it is being used. It's not too serious or too technical, and is intended to be a good, conversational source of background information. Of course, as well as getting it here on the website, you can also just subscribe through your normal podcast service: "This podcast is produced by a Microsoft Australia & New Zealand employee, alongside an employee from InnovateGPT. The views and opinions expressed on this podcast are our own."
2022-12-01T00:00:00
https://aipodcast.education/
[ { "date": "2022/12/01", "position": 33, "query": "AI education" }, { "date": "2023/03/01", "position": 32, "query": "AI education" }, { "date": "2023/06/01", "position": 34, "query": "AI education" }, { "date": "2023/09/01", "position": 38, "query": "AI education" }, { "date": "2023/11/01", "position": 35, "query": "AI education" }, { "date": "2024/11/01", "position": 35, "query": "AI education" } ]
Generative Artificial Intelligence - Center for Teaching Innovation
Generative Artificial Intelligence
https://teaching.cornell.edu
[]
Generative artificial intelligence is a subset of AI that utilizes machine learning models to create new, original content, such as images, text, or music.
In Spring 2024, with a goal of inspiring other instructors through the sharing of new ideas, methods, and strategies at Cornell, five faculty were recognized for their creative classroom experiences and teaching implementations using – or creatively precluding use of – generative AI. Learn about the projects here: Teaching Innovation Case Studies: Creative Responses to Generative AI.

Since the release of new generative artificial intelligence (AI) tools, including ChatGPT, we have all been navigating our way through both the landscape of AI in education and its implications for teaching. As we adapt to these quickly evolving tools and observe how students are using them, many of us are still formulating our own values around what this means for our classes. Our CTI resources aim to provide support on what these tools are and how they work. We'll address common concerns and considerations in the context of AI, such as academic integrity, accessibility and ethical uses of the technology. We'll also explore practical applications and pedagogical strategies for teaching and assignment design as you determine what approaches and policies regarding AI are the right fit for your classes.

What is Generative Artificial Intelligence (AI)?

Generative artificial intelligence is a subset of AI that utilizes machine learning models to create new, original content, such as images, text, or music, based on patterns and structures learned from existing data. A prominent model type used by generative AI is the large language model (LLM). An LLM, like ChatGPT, is a type of generative AI system that can produce natural language texts based on a given input, such as a prompt, a keyword, or a query. LLMs typically consist of millions or billions of parameters that are "trained" on massive amounts of text data, such as books, articles, websites, and social media posts, and can perform various tasks, such as answering questions, summarizing texts, writing essays, creating captions, and generating stories. LLMs can also learn from their own outputs and are likely to improve over time.

It's important to note that while LLMs can answer questions and provide explanations, they are not human and thus do not have knowledge or understanding of the material they generate. Rather, LLMs generate new content based on patterns in existing content, and build text by predicting the most likely next words. Because of how LLMs work, it is possible for these tools to generate content, explanations, or answers that are untrue. LLMs may state false facts as true because they do not truly understand the fact and fiction of what they produce. These generated fictions presented as fact are known as "hallucinations."
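The "predicting the most likely next words" idea is easier to see with a toy sketch. The snippet below is purely illustrative: the miniature corpus and the generate() helper are invented for this example, and real LLMs use neural networks with billions of parameters rather than word counts, but the basic loop of choosing a likely continuation and repeating is the same, which is also why fluent-sounding output can be confidently wrong.

```python
# Toy next-word predictor: count which word follows which in a tiny corpus,
# then greedily extend a prompt with the most likely next word each time.
# (Illustrative only; a word-count model, not how ChatGPT actually works.)
from collections import Counter, defaultdict

corpus = (
    "students use generative ai to brainstorm ideas . "
    "students use generative ai to improve their writing . "
    "teachers use generative ai to draft quiz questions ."
).split()

next_word_counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    next_word_counts[prev][nxt] += 1  # e.g. counts["use"]["generative"] += 1

def generate(prompt_word: str, length: int = 7) -> str:
    """Greedily append the most probable next word, `length` times."""
    words = [prompt_word]
    for _ in range(length):
        candidates = next_word_counts.get(words[-1])
        if not candidates:
            break
        words.append(candidates.most_common(1)[0][0])
    return " ".join(words)

print(generate("students"))
# -> e.g. "students use generative ai to brainstorm ideas ." : plausible-looking
# text assembled purely from statistical patterns, with no understanding involved.
```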
How Will Generative AI Affect Higher Education?

Nobody knows the true impact that generative AI will have on higher education. These technologies are rapidly evolving in complexity and type of use. What we do know is that generative AI is opening up a world of possibilities, while also generating significant concerns about academic integrity, ethics, access and bias. Before we dig too deep into whether and how to incorporate generative AI into your courses, here are a few general steps you can take as you consider what generative AI means for your classroom:

Reflect: How do you feel about generative AI? Concerned? Excited? A little of both? What additional information do you need to feel able to make informed decisions about whether or not to incorporate it into your courses?

Try it out: Experiment with generative AI platforms relevant to your discipline, like ChatGPT, Gemini (formerly known as Bard), or DALL-E 2. Choose a tool, then ask it to complete an assignment you'd give your students. What are the results? Ask it to revise the assignment, and see how it responds. Can you identify possible areas of concern for academic integrity, or opportunities for student learning?

Predict and inquire: How might students use this technology in your course? Can you ask students how they are currently using generative AI tools? What clarity will students need to distinguish between appropriate and inappropriate uses of these tools? Consider how you might adjust assignments to either incorporate generative AI into your course, or to identify areas where students may lean on the technology, and turn those hot spots into opportunities to encourage deeper and more critical thinking. How might you use these tools to assist your teaching? For example, could generative AI help generate practice problems for students?

Learn more: This technology is evolving, and none of us are experts yet. Be open to continuing to learn more and to having ongoing conversations with colleagues, your department, people in your discipline, and even your students about the impact generative AI is having.

Set your parameters: Decide whether and when you want students to use the technology in your courses, and clearly communicate your parameters and expectations with them. Keep in mind that students will likely have instructors with different ideas about how to use, or why or why not to use, generative AI tools. Be transparent and direct about your expectations.

The Upside: Possibilities for Generative AI to Benefit Learning Environments

We all want to discourage students from using generative AI to complete assignments at the expense of learning critical skills that will impact their success in their majors and careers. However, we'd also like to take some time to focus on the possibilities that generative AI presents.
Here are a few ways it can be useful to students and faculty alike.

Generative AI can potentially be used by both faculty and students to:
Provide instant access to vast amounts of information quickly
Aid diverse learners with different learning abilities, linguistic backgrounds or accessibility needs
Accelerate exploration and creativity, spark curiosity, suggest new ideas and ways of thinking

Students might explore using generative AI to:
Be more efficient with course work and tasks
Help with studying
Generate ideas for brainstorming
Get further explanation of a topic a teacher is covering for class
Improve their writing
Get instant feedback
Practice language skills in a safe environment

Faculty might explore using generative AI to:
Generate content and course materials including lesson plans, quiz questions, sample problems, or writing scenarios
Assist in research tasks including analyzing large datasets, identifying patterns, and generating insights and research directions
Write learning objectives, course descriptions, syllabi statements, or course policies

As you and your students prepare to investigate the use of generative AI tools, we recommend discussing course policies and expectations around their use, and clearly communicating with your students when and in what ways use of generative AI tools is permitted – or not. We also recommend that you consider the accessibility of generative AI tools as you explore their potential uses, especially those that students may be required to interact with. Finally, it's important to take into account the ethical considerations of using such tools. These topics are fundamental if you are considering using AI tools in your assignment design.

Generative AI Literacy

While ChatGPT and other LLMs can assist learners in various tasks and activities, they cannot replace human creativity, judgment, ethics, or responsibility, all of which are essential for learning. LLMs may help a learner write a paper or a report, but they cannot teach the learner how to conduct original research, synthesize information from multiple sources, formulate arguments, express opinions, or cite sources properly. Thus, the need for AI literacy is essential for students and faculty alike. We can think of ethical generative AI literacies as the ability to understand, evaluate, and critically engage with generative AI technologies. Generative AI literacy includes skills such as recognizing when and how generative AI is used in various domains; assessing the reliability and validity of AI-generated outputs; identifying the ethical and social implications of AI applications; and creating and communicating with generative AI systems in ways that are appropriate to your course. Just as we adapt to the changing media environment, developing AI literacy will be an ongoing process, but one that is vital to helping you and your students become more informed and responsible users and creators of AI technologies.

Stay Engaged and Informed

The range of AI applications and their abilities continue to develop rapidly, bringing both opportunities and challenges for educators wanting to stay current and informed. As the higher ed landscape changes with the advent of this new technology, CTI aims to be a dependable partner and resource for faculty working to incorporate generative AI into their courses. Our goal is to support faculty in enhancing their teaching and learning experiences with the latest AI technologies and tools.
As such, we look forward to providing various opportunities for professional development and peer learning. As you further explore, you may be interested in CTI's generative AI events. If you want to explore generative AI beyond our available resources and events, please reach out to schedule a consultation.
2022-12-01T00:00:00
https://teaching.cornell.edu/generative-artificial-intelligence
[ { "date": "2022/12/01", "position": 36, "query": "AI education" }, { "date": "2022/12/01", "position": 37, "query": "artificial intelligence education" }, { "date": "2023/11/01", "position": 39, "query": "AI education" }, { "date": "2023/11/01", "position": 37, "query": "artificial intelligence education" }, { "date": "2023/12/01", "position": 44, "query": "artificial intelligence education" }, { "date": "2024/09/01", "position": 47, "query": "artificial intelligence education" }, { "date": "2024/11/01", "position": 38, "query": "AI education" }, { "date": "2024/12/01", "position": 35, "query": "artificial intelligence education" }, { "date": "2025/06/01", "position": 40, "query": "artificial intelligence education" } ]
Learn essential AI skills - Google AI
Understanding AI: AI tools, training, and skills
https://ai.google
[]
Google offers various AI-powered programs, training, and tools to help advance your skills. Develop AI skills and view available resources.
AI SKILLS We're building AI skills programs, trainings, and tools to address the specific needs of workers everywhere. Discover Google's courses and resources designed to help you succeed in an AI-driven world.
2022-12-01T00:00:00
https://ai.google/learn-ai-skills/
[ { "date": "2022/12/01", "position": 37, "query": "AI education" }, { "date": "2022/12/01", "position": 61, "query": "artificial intelligence education" }, { "date": "2023/03/01", "position": 38, "query": "AI education" }, { "date": "2023/06/01", "position": 36, "query": "AI education" }, { "date": "2023/09/01", "position": 35, "query": "AI education" }, { "date": "2023/11/01", "position": 36, "query": "AI education" }, { "date": "2023/11/01", "position": 59, "query": "artificial intelligence education" }, { "date": "2023/12/01", "position": 56, "query": "artificial intelligence education" }, { "date": "2024/01/01", "position": 35, "query": "AI education" }, { "date": "2024/03/01", "position": 37, "query": "AI education" }, { "date": "2024/05/01", "position": 37, "query": "AI education" }, { "date": "2024/07/01", "position": 34, "query": "AI education" }, { "date": "2024/09/01", "position": 63, "query": "artificial intelligence education" }, { "date": "2024/10/01", "position": 36, "query": "AI education" }, { "date": "2024/11/01", "position": 34, "query": "AI education" }, { "date": "2024/12/01", "position": 60, "query": "artificial intelligence education" }, { "date": "2025/02/01", "position": 71, "query": "artificial intelligence education" }, { "date": "2025/04/01", "position": 46, "query": "AI education" }, { "date": "2025/05/01", "position": 45, "query": "AI education" }, { "date": "2025/06/01", "position": 64, "query": "artificial intelligence education" } ]
IBM AI Education - IBM Mediacenter
IBM AI Education
https://mediacenter.ibm.com
[]
IBM AI Education is an immersive, online professional learning suite of FREE live and on-demand institutes, webinars taken to the next level, crafted by and ...
IBM AI Education is an immersive, online professional learning suite of FREE live and on-demand institutes, webinars taken to the next level, crafted by and for educators. The nine institutes will guide educators through AI's foundational concepts and K-12 classroom connections, with topics including introduction to AI, natural language processing, ethics, robotics, and more. Educators can choose to view just one institute or experience multiple sessions, and by attending all nine, they have the opportunity to earn the IBM AI Education badge. All attendees will walk away with exposure to AI and ideas on how to incorporate AI fundamentals into their curriculum, through example activities during each session as well as an interactive toolkit of strategies that attendees can access after each institute. Artificial Intelligence (AI) is all around us, in self-driving cars, digital assistants, ride-share apps, banking applications, and so much more. Experts say AI will change 100% of jobs over the next 10 years, but there is a fear that the next generation isn't prepared for the shift to AI. It's imperative for teachers to learn how to infuse their content and curriculum with the knowledge, skills, and values driving innovation in AI today so that their students are prepared to be successful in the modern workforce, regardless of their career paths.
2022-12-01T00:00:00
https://mediacenter.ibm.com/media/IBM+AI+Education/1_frksa038
[ { "date": "2022/12/01", "position": 40, "query": "AI education" }, { "date": "2023/03/01", "position": 39, "query": "AI education" }, { "date": "2023/06/01", "position": 41, "query": "AI education" }, { "date": "2023/09/01", "position": 42, "query": "AI education" }, { "date": "2023/11/01", "position": 42, "query": "AI education" }, { "date": "2024/01/01", "position": 43, "query": "AI education" }, { "date": "2024/03/01", "position": 41, "query": "AI education" }, { "date": "2024/05/01", "position": 45, "query": "AI education" }, { "date": "2024/07/01", "position": 45, "query": "AI education" }, { "date": "2024/10/01", "position": 41, "query": "AI education" }, { "date": "2024/11/01", "position": 42, "query": "AI education" } ]
Meet Khanmigo: Khan Academy's AI-powered teaching assistant ...
Meet Khanmigo: Khan Academy's AI-powered teaching assistant & tutor
https://www.khanmigo.ai
[]
Khanmigo, built by nonprofit Khan Academy, is a top-rated AI for education. Save time on prep, tackle homework challenges, and get personalized tutoring.
No, you will not be able to grant students access to Khanmigo. At this time, classroom access for Khanmigo is only available through school or district implementations. We invite district and school administrators to contact our Districts Partnership team to learn about Khanmigo for Districts. Khanmigo for teachers offers a lot of exciting features, such as standards-aligned lesson planning tied to Khan Academy’s world-class content library; an on-demand summary of recent student work so you can quickly assess progress and identify areas where additional support is needed; and Khanmigo-crafted learning objectives, rubrics, and even exit tickets. As a nonprofit educational organization, we see it as our responsibility to explore what AI could mean for the future of education. We believe that AI has the potential to transform learning in a positive way, but we are also keenly aware of the risks. Therefore, we are only offering access to learners who are under 18 years old when their parents give them access or when their school district is partnered with Khan Academy. This collaboration across school leaders, teachers, families, and Khan Academy staff makes it possible to provide the level of supervision and support children deserve. We believe these guidelines will help us responsibly adapt AI for an educational setting.
2022-12-01T00:00:00
https://www.khanmigo.ai/
[ { "date": "2022/12/01", "position": 44, "query": "AI education" }, { "date": "2022/12/01", "position": 75, "query": "artificial intelligence education" }, { "date": "2023/03/01", "position": 44, "query": "AI education" }, { "date": "2023/06/01", "position": 39, "query": "AI education" }, { "date": "2023/09/01", "position": 40, "query": "AI education" }, { "date": "2023/11/01", "position": 40, "query": "AI education" }, { "date": "2023/11/01", "position": 79, "query": "artificial intelligence education" }, { "date": "2023/12/01", "position": 81, "query": "artificial intelligence education" }, { "date": "2024/01/01", "position": 45, "query": "AI education" }, { "date": "2024/03/01", "position": 40, "query": "AI education" }, { "date": "2024/05/01", "position": 41, "query": "AI education" }, { "date": "2024/07/01", "position": 37, "query": "AI education" }, { "date": "2024/09/01", "position": 84, "query": "artificial intelligence education" }, { "date": "2024/10/01", "position": 39, "query": "AI education" }, { "date": "2024/11/01", "position": 41, "query": "AI education" }, { "date": "2024/12/01", "position": 83, "query": "artificial intelligence education" }, { "date": "2025/04/01", "position": 53, "query": "AI education" }, { "date": "2025/05/01", "position": 49, "query": "AI education" }, { "date": "2025/06/01", "position": 87, "query": "artificial intelligence education" } ]
Learning With AI, Learning About AI
Learning With AI, Learning About AI
https://www.cde.ca.gov
[]
Learning With AI, Learning About AI. Information regarding the role of artificial intelligence (AI) in California K12 education. CDE AI Resource Kit. Get the ...
CDE AI Resource Kit: Get the CDE AI Resource Kit (PDF)

Table of Contents: Learning With AI, Learning About AI; Introduction; Resources; AI Webinar Recordings; Where is AI in the World of Today and Tomorrow?; Where Might AI Fit Within California Schools?; Fundamental Skills for Educators and Students; Utilizing AI in Schools; Developing AI in Schools; References

Introduction

Please note: This page is meant to provide helpful guidance to our partners in education and is, in no way, required to be followed. This page is intended to be informative rather than prescriptive. The information is merely exemplary, and compliance with any information or guidance on this page is not mandatory. (See Education Code § 33308.5.)

Emerging technologies often lead to new and exciting learning opportunities for students, particularly in increasing personalization and accessibility options. While Artificial Intelligence (AI) can be a valuable learning tool for educators and students, it must be evaluated according to usage terms, and clear guidelines for data collection should prioritize student safety. The California Department of Education (CDE) considers human relationships crucial in education, particularly when incorporating generative AI tools such as ChatGPT into schools. This is particularly vital in light of school closures that occurred in the recent past due to the COVID-19 pandemic and other natural disasters, which left many educators and students physically isolated. AI or any other technology cannot replace the value of a student's relationship with a caring educator who can connect on a human level.

In accordance with State Superintendent of Public Instruction Tony Thurmond's professional learning initiative, CDE is committed to supporting AI-focused professional learning for administrators and educators, to educate them about AI's benefits and limitations. The Professional Learning Initiative can keep educators up-to-date on emerging technology tools for the classroom. CDE's Computer Science Coordinator has participated in numerous AI panels across the state and has led CDE's STEAM (Science, Technology, Engineering, Arts, Mathematics) workgroup in introducing AI to CDE. The department hosted an AI panel discussion in May of 2023 entitled "Artificial Intelligence: Demystifying AI for CA Students," and the Computer Science (CS) Coordinator regularly collaborates with members of the Computer Science Teachers Association and the CSforCA AI workgroup to remain current on educational trends in the field of AI and other emerging technologies with respect to their impacts on education. CDE is preparing additional resources to support educators, including the implementation of an AI webinar series, "Artificial Intelligence: Learning with AI, Learning about AI," which features educators from K-12 and postsecondary spaces, as well as industry professionals, complete with actionable resources for schools with a learner-centered focus. CDE is participating, along with multiple organizations including Code.org, Educational Testing Service, International Society for Technology in Education, Khan Academy, and the World Economic Forum, in the TeachAI project (teachai.org), to engage in conversations with a goal of empowering educators to teach with and about AI.

CDE encourages educators to approach AI with a lens of equity. As educators and students learn about potential inaccuracies, bias, and social impacts of AI, they build capacity to promote ethical use of technology.
The department supports educators who adapt their teaching methods to incorporate emerging technologies effectively in line with student needs.

Resources

Information and resources related to AI in education are available on the California Educators Together Statewide AI group (Login Required).

AI Webinar Recordings

Select the links below to view recordings from the CDE AI Webinar Series "Learning with AI, Learning about AI." Each webinar features speakers, actionable content, and a learner-centered approach.
AI and its Impacts on Mental Health (Video; 57:43)
AI: Fundamental Skills for Educators and Students (Video; 58:20)
AI: Safe Use (Video; 59:32)
AI Prompts and AI Errors (Video; 58:57)
AI: Bias and Social Impacts (Video; 1:05:16)
Responding to Algorithmic Bias (Video; 1:06:43)
Demystifying AI: Looking Under the Hood (Video; 56:47)
Maximize Workflow and Planning with AI (Video; 1:08:10)
Accessibility and Personalization with AI (Video; 57:37)
AI for College and Career (Video; 53:20)
Safe Practices for AI in Schools (Video; 56:40)
Considering Data & Terms of Use in AI (Video; 54:09)
Equitable Approaches to Ethical AI Guidelines (Video; 53:38)
Opening the Dialogue for Ethical AI Policies (Video; 1:04:47)
Collaborative AI Guidelines: Engaging School Communities (Video; 1:04:36)
From A to GenZ: Students Discuss the Future of AI - Part 1 (Video; 49:00)
From A to GenZ: Students Discuss the Future of AI - Part 2 (Video; 50:57)
AI for Inclusive Learning (Video; 1:00:42)
Highlighting & Addressing Inequities with AI (Video; 48:40)
Equity Minded Instruction in an Age of AI (Video; 1:00:44)
Making AI Visible: Building Awareness Amongst Teachers and Communities (Video; 58:32)
AI Literacy: What is it and Why does it matter? (Video; 57:22)
AI in the Classroom (Video; 1:02:43)
Create and Think with AI (Video; 42:12)
Boost Creativity and Media Literacy with AI (Video; 1:05:54)
Leverage AI to Support Multilingual Learners - Episode 1 (Video; 52:39)
Leverage AI to Support Multilingual Learners - Episode 2 (Video; 1:01:52)
AI for Multilingual Learners - Why and How (Video; 50:27)

Where is AI in the World of Today and Tomorrow?

AI is an integral part of our daily lives, from virtual assistants to recommendation algorithms that determine the content on social media feeds. At its core, artificial intelligence (AI) refers to computer systems designed to perform tasks that typically require human intelligence, such as understanding natural language, recognizing patterns, making decisions, and learning from data. AI systems start by ingesting vast amounts of data. This data can be in various forms, such as text, images, or numbers. For instance, in a language translation application, the input could be a sentence in one language. Once the data is inputted, AI systems employ algorithms to process and analyze it. Algorithms are akin to sets of instructions that guide the AI in making sense of the data. These instructions can involve complex mathematical operations, data transformations, and statistical modeling. After processing the data, the AI system generates an output, which could be a translation of the input sentence into another language in the case of the language translation example. AI systems often have a learning component, improving their performance over time by learning from new data, a process known as machine learning. In this age of AI, it is essential that both educators and students demystify this technology and grasp how it produces output.
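To make that ingest-data, apply-an-algorithm, produce-an-output, learn-from-new-data cycle concrete, here is a minimal sketch in Python. The numbers and the single-parameter "model" are invented for illustration; real AI systems run the same loop with vastly more data and parameters.

```python
# Minimal illustration of the cycle described above: ingest data, apply an
# algorithm, produce an output, and improve by learning from the data.
# (Made-up numbers; a one-parameter model fitted with simple gradient descent.)
data = [(1.0, 20.0), (2.0, 40.0), (3.0, 60.0), (4.0, 80.0)]  # (hours studied, quiz score)

w = 0.0              # the model's single parameter; it starts out knowing nothing
learning_rate = 0.01

def predict(hours: float) -> float:
    """Output step: the model's prediction for a given input."""
    return w * hours

# Learning step: nudge w to shrink the gap between predictions and observations.
for epoch in range(200):
    for hours, actual in data:
        error = predict(hours) - actual
        w -= learning_rate * error * hours

print(f"learned parameter: {w:.1f}")                     # converges to about 20.0
print(f"prediction for 2.5 hours: {predict(2.5):.1f}")   # about 50.0
```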
A conceptual knowledge of the benefits and potential risks of computing technologies is increasingly relevant for our students and educators alike. A growing knowledge about AI improves safety and efficiency when utilizing AI systems and supports understanding of potential inaccuracies and biases that may exist in their outputs. While we cannot predict the future, it appears certain that technology will continue to influence our personal and professional lives. A report by the World Economic Forum in 2023 indicates that 1.1 billion jobs are likely to be radically transformed within the next decade. Skills the workplace of the future will call for include artificial intelligence and machine learning, cloud computing, product management, and social media. According to the 2023 AI Index report by the Stanford Institute for Human-Centered Artificial Intelligence, demand for AI-related professionals is increasing in nearly every sector of the economy. As such, AI literacy is foundational for a well-rounded education to prepare students for today and tomorrow. California's fundamental AI skills for educators and students address both learning with and about AI, to promote efficient and ethical use of these emerging technologies in and outside of the classroom.

Where Might AI Fit Within California Schools?

As is the case with all technologies utilized in educational settings, student safety concerns underscore the importance of evaluating terms of use, data collection guidelines, and ethical use policies. Educators are encouraged to engage in professional learning regarding the capabilities and limitations of AI so that they may guide students in navigating a world in which AI permeates personal and professional lives. Professional learning can guide educators in supporting safe environments in which students learn with AI. Learning with AI can enhance academic learning in any content area, when implemented with safety and ethical use in mind. Educators and students learning with AI may have conversations regarding appropriate use of AI in and outside of the classroom, crafting ethical use guidelines together to ensure students build skills in leveraging the benefits of AI while maintaining academic integrity and maximizing learner responsibility. Educational leaders are encouraged to provide access to computer science education for all K-12 students so that students learn about AI equitably. As educators and students demystify AI systems, as they see past the perceived "magic" of these technologies and deconstruct them to build a conceptual understanding of their inner workings, they are better able to engage as responsible, ethical citizens of emerging technologies. In accordance with the concepts in the California computer science standards, a comprehensive computer science foundation for students includes conceptual understanding and skills of the five concept areas of the standards, which include computing systems, networks and the internet, data and analysis, algorithms and programming, and impacts of computing. AI touches upon each of these concept areas. Computer science is referred to as a core subject in the state board-adopted California computer science standards, designed for all K-12 students.
In alignment with these K-12 standards, as students gain the ability to navigate computing systems, utilize networks and the internet responsibly, analyze and interpret data with an antibias lens, solve problems using algorithms and programming, and carefully consider the ethics of computing technologies, they become informed digital citizens who can leverage AI and other technologies to promote social justice and equity. The importance of computer science is recognized at a national level, with the Every Student Succeeds Act (2015) including computer science as part of a "well-rounded education." Computer science concepts permeate nearly every industry in our midst, with the California computer science standards emphasizing its interdisciplinary nature and relevance in our digitally connected society. The appendix of the California computer science standards provides example models for providing access to computer science education in K-12 systems, which are to be based on local capacity and context rather than a one-size-fits-all approach.

Fundamental Skills for Educators and Students

Safe Use: Engage in Safe Use by Reviewing AI Data Collection, User Privacy

Data collection, in the context of AI, refers to the process of gathering and storing information from various sources, such as user interactions, sensor data, or online behavior. This data serves AI systems, enabling them to learn, adapt, and make informed decisions. Data collection for AI systems may encompass a wide range of information, including personal details, preferences, browsing history, and more. Educators and students are encouraged to build awareness of data collection activity of AI systems, in order to safeguard personal information and adhere to data privacy regulations. User privacy refers to safeguarding individuals' personal information and ensuring that it is handled in a responsible and ethical manner. It entails protecting data from unauthorized access, breaches, and misuse. In the educational context, user privacy is of paramount importance to protect students' sensitive information and maintain trust within the learning environment. For example, while educators can leverage AI to support them in developing lesson plans, individual education plans, and analyzing assessment data, it is vital that they omit identifiable student information when inputting data to AI systems. It is important that educators and students review when and if data they input into an AI system is collected, and if that data will be stored to further its learning or be deleted. Educational leaders are encouraged to build trust by fostering transparency. Trust is the cornerstone of safe AI use in education. When educators and students know that their data is handled with care and transparency, they are more likely to embrace AI technologies for learning and teaching. Compliance with privacy laws is not just a legal requirement but may be considered a moral obligation for educational institutions. Districts and educators must consider COPPA (Children's Online Privacy Protection Act) and FERPA (Family Educational Rights and Privacy Act) when considering AI use in the educational setting. The integration of AI into California's education system presents immense opportunities for innovation, increased accessibility, and personalized learning. However, it also underscores the critical need for educators and students to engage in safe AI use by rigorously reviewing data collection and user privacy practices. By adhering to ethical principles and staying vigilant in safeguarding personal information, California's educational community can harness the power of AI while preserving individual privacy and security.
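As a rough sketch of the advice above to omit identifiable student information before inputting data into AI systems, the example below shows what a simple redaction step might look like. The function name, patterns, and sample note are hypothetical, and a real district workflow would still need review against FERPA, COPPA, and local policy; a script like this is an illustration, not a compliance guarantee.

```python
# Illustrative only: strip obvious identifiers from text before pasting it
# into an AI tool. The roster, regex patterns, and sample note are hypothetical.
import re

def redact(text: str, student_names: list[str]) -> str:
    """Replace known student names, email addresses, and long ID numbers."""
    for name in student_names:
        text = re.sub(re.escape(name), "[STUDENT]", text, flags=re.IGNORECASE)
    text = re.sub(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b", "[EMAIL]", text)  # email addresses
    text = re.sub(r"\b\d{6,}\b", "[ID]", text)                      # long ID numbers
    return text

note = "Jordan Smith (ID 4821907, jsmith@example.org) needs feedback on her essay."
print(redact(note, student_names=["Jordan Smith"]))
# -> "[STUDENT] (ID [ID], [EMAIL]) needs feedback on her essay."
```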
Pros and Cons: Leverage AI Capabilities, Critique Inaccuracies

A commitment to providing equitable teaching and learning environments for all students aligns perfectly with the potential of AI to help bridge the digital divide. By supporting educators and students in leveraging AI effectively and addressing inaccuracies, we can enhance the quality of education and ensure that every student, regardless of their circumstances, has access to a meaningful education that prepares them for the world of today and tomorrow. The "Closing the Digital Divide Initiative," led by State Superintendent Thurmond and his Closing the Digital Divide Task Force, aims to eliminate disparities in access to computing devices and connectivity among California students. This initiative recognizes the profound impact that the digital divide has on educational equity. Educators and students who have the skills to leverage AI safely and efficiently will benefit from increased productivity. Conversely, communities that lack access to skill-building experiences with AI at home may not have the opportunity to gain proficiency with this emerging technology if the skills are not also addressed in the school setting. Educators and students are encouraged to leverage AI capabilities. AI can facilitate personalized learning experiences, enabling teachers to create custom-tailored lessons and provide personalized support to students. AI can analyze student data to tailor lessons, adapt to individual learning styles, and provide targeted interventions, thereby addressing educational disparities. AI tools can increase access for marginalized populations. Examples include language translation support for multilingual students, voice typing for students with disabilities, and organizational tools for executive processing support. AI can expedite administrative tasks and automate workflow for both educators and students. This may allow educators to focus more on instruction and student support, freeing up time to build connections with students. AI can support students with after-hours tutoring and resource support for academic learning. This support may be of value to students who do not have access to academic support outside of the classroom. It is important that educators and students learn to critique AI inaccuracies, to support media literacy. AI systems may perpetuate biases or generate inaccurate results. AI systems may hallucinate information or provide incorrect citations. Educators can integrate lessons on AI ethics, bias detection, and fact-checking into the curriculum. As students are encouraged to question AI-generated information and engage in constructive discussions about AI's role in education, they build their skills in utilizing AI responsibly. Learning AI skills fosters digital literacy, a fundamental requirement for participating in an increasingly digital world. Educators have a unique opportunity to leverage AI capabilities and address inaccuracies within the "Closing the Digital Divide Initiative." By integrating AI skills into our educational approach, we not only bridge the digital use divide but also prepare our students for a future where AI is an integral part of their lives.
By fostering critical thinking, ethical considerations, and technical proficiency, we empower our students to navigate the digital landscape with confidence. By embracing AI as an ally in this endeavor, we embark on a path toward a more inclusive and digitally proficient educational landscape.

Social Impacts: Consider Presence of Bias and Social Impacts of AI

The fundamental skill of recognizing the impacts of algorithmic bias and considering the social consequences of artificial intelligence (AI) fits within State Superintendent of Public Instruction Tony Thurmond's "Antibias Education Initiative." Algorithmic bias refers to the presence of unfair and discriminatory outcomes in machine learning algorithms and AI systems due to the data used to train them or the design choices made during their development. As educators and educational leaders, our commitment to confronting hate, bigotry, racism, and bias aligns seamlessly with the need to raise awareness about algorithmic bias and its far-reaching societal effects. By integrating these concepts, we can empower educators and students to become informed advocates in the fight against bias and discrimination. Algorithmic bias in AI systems can perpetuate and amplify societal biases, leading to discriminatory outcomes, particularly affecting marginalized communities. Educators and educational leaders play a crucial role in raising awareness about algorithmic bias. By communicating its impacts, we equip students to recognize and address bias in AI systems, fostering critical thinking and empathy. Educators are encouraged to integrate discussions about algorithmic bias into the existing curriculum, emphasizing real-world examples of bias in AI applications. Students can be guided to explore the ethical implications and societal consequences of algorithmic bias. AI technologies have a profound impact on society, including their influence on employment, access to resources, and decision-making processes. Educators can incorporate lessons on the societal implications of AI into the curriculum and encourage students to analyze how AI affects various aspects of society, from healthcare to criminal justice, and explore strategies to mitigate bias and discrimination. By understanding the social impacts of AI and algorithmic bias, students can become more empathetic and better equipped to advocate for fair and equitable AI systems. Equipped with knowledge about AI bias, students can make informed ethical decisions in a world increasingly shaped by AI technologies. Incorporating the fundamental skill of communicating algorithmic bias impacts and considering the social consequences of AI in alignment with the "Antibias Education Initiative" presents a unique opportunity to empower educators and students to confront hate, bigotry, racism, and bias. AI education can be utilized as a powerful tool to help address bias and discrimination, creating a future where our students are not only informed but also engaged advocates for a more inclusive and tolerant society.

Demystify: Build Understanding of How AI Systems Work

The continued growth of artificial intelligence (AI) and its impact on various facets of society necessitates equipping our students with the knowledge and skills to engage with AI systems thoughtfully and responsibly. AI technologies are becoming pervasive across industries, making AI understanding a core competency. As AI continues to shape various industries, students who understand AI fundamentals will be better prepared for future careers.
AI literacy is becoming a valuable skill across many fields. Building a foundational understanding of how AI systems work and produce output not only prepares students for future careers in computer science and technology-related fields, but it also ensures that all students enter the workforce as knowledgeable, resourceful consumers of AI. A strong conceptual understanding of AI empowers students to adapt and contribute to an evolving workforce. Today's students are digital natives, but they must also be digitally literate to thrive in the AI era. Knowing how AI processes data and generates outputs enables students to think critically about the results AI systems provide. They can question and evaluate the information they receive and make informed decisions. This is of particular significance as students utilize AI in the classroom, to maintain academic integrity and promote ethical use of AI. Educators and students are urged to leverage AI to foster and not inhibit critical thinking. An understanding of the inner workings of AI supports learners in identifying inaccuracies, questioning AI outputs, and recognizing potential bias. Ethical considerations are paramount in AI development and usage. As educators and students build an understanding of how data collection feeds AI outputs, they become ethical technology users and potential creators who prioritize fairness, accountability, and transparency. Building AI understanding for all students, regardless of zip code, race, gender, or socioeconomic status, can help to diversify the STEM pipeline by breaking down barriers and biases in tech fields. AI can be a powerful tool for solving complex problems. Students with knowledge of AI can harness this technology to address real-world challenges and innovate. Ensuring that all students have access to AI education promotes inclusivity and reduces the risk of a digital divide. Building skills in understanding how AI systems work is essential for equipping educators and students with the skills and knowledge needed to navigate the digital landscape responsibly, think critically, and thrive in a technology-driven future. It empowers them to become not just consumers but active participants and ethical stewards of AI technology.

Utilizing AI in Schools

What might it look like to utilize AI in California schools? Artificial intelligence technologies, tools, and applications can be utilized in the educational environment to enhance and improve various aspects of teaching and learning. This integration can take several forms and serves multiple purposes. Implementation may be customized to meet the local context and needs of each school community.

Why? Planning and Workflow Support

AI-powered tools can enhance productivity and empower educators to provide more personalized and effective learning experiences. By embracing AI, educators can optimize their workflows, save time, and elevate the quality of education they deliver. As the educational landscape evolves, educators are tasked with managing increasingly complex teaching environments. AI can meet this challenge by providing tools that streamline administrative tasks, enhance instructional planning, and support educators in adapting to diverse student needs. AI can automate routine administrative tasks such as data analysis, generating practice sets, unpacking standards, scheduling, content generation, and more. This can allow educators to allocate more time to teaching and supporting students.
AI is not a replacement for educators but a powerful ally in the pursuit of educational excellence. By utilizing AI for planning, automation, and workflow support, educators can optimize their time and resources, ultimately improving the quality of education they provide. The integration of AI-driven tools empowers educators to address individual student needs, promote engagement, and make data-informed decisions.

Accessibility & Personalization

In alignment with the neuroscientifically backed principles of Universal Design for Learning, inclusive learning environments empower students to become expert learners, equipped with tools that increase accessibility and personalize the learning experience to meet their unique needs. Additionally, connecting content to students' lives furthers the personalization process and supports culturally responsive pedagogy. AI tools can support educators and students in the endeavor to tailor learning experiences to the students in their midst. AI can generate personalized learning materials, such as worksheets, quizzes, and reading assignments, aligned with students' learning goals, strengths, and interests. AI analytics can provide educators with insights into student progress and areas that require additional support, allowing educators to provide targeted interventions during small group instruction. AI-powered tools can enable text-to-speech and speech-to-text capabilities, ensuring that all students, including those with visual or hearing impairments, can access educational content. AI-driven closed captioning and transcription services can enhance accessibility by making multimedia content, such as videos and podcasts, accessible to individuals with hearing impairments. AI-driven language translation tools can break language barriers, making educational content accessible to multilingual learners. AI-powered assistive technologies, such as screen readers and voice recognition software, can empower students with disabilities to engage fully in the learning process.

College/Career Readiness

As students utilize AI responsibly in K-12 schools, they gain skills that support thinking processes, organization of workflow, ethical technology use, and abilities to communicate effectively in a digitally connected world. AI-driven tools encourage students to think critically and approach complex problems analytically, a skill highly valued in both higher education and the workforce. Students exposed to AI tools are adept at automating routine tasks and managing their time efficiently, which is invaluable for college success. As we teach students about AI ethics, we help to ensure they make responsible decisions in both personal and professional lives. Proficiency in AI technologies gives students a competitive edge in the job market and positions them as valuable assets in a technology-driven workforce. AI integration promotes digital literacy, enabling students to effectively navigate the digital landscape, a skill required in both academic and professional settings.

Address the Digital Use Divide

As technology becomes increasingly integral to education, educators and educational leaders must harness AI's power to ensure equitable access, enhance digital literacy, and prepare all students to thrive in an AI-driven world. The digital use divide, characterized by disparities in access to and effective use of technology, is a pressing concern in modern education.
AI, when thoughtfully integrated, offers a powerful means to bridge this divide and promote equitable opportunities. Infusing AI into curricula enables students to develop digital literacy skills, empowering them to navigate technology effectively and responsibly. AI-powered language translation and accessibility tools can break down language and physical barriers, supporting more inclusive learning environments.

How? Evaluate Terms of Use, Enact Data Collection Guidelines

As school districts consider the integration of AI systems into their educational environments, it is essential to thoroughly evaluate the terms of use to ensure the responsible and effective deployment of AI technology. Local education agencies may wish to consider the following steps as they evaluate the safety of AI systems:
Data Privacy Compliance:
Legal Framework: Ensure that the AI system complies with federal and state data privacy laws, including the Family Educational Rights and Privacy Act (FERPA) and the Children's Online Privacy Protection Act (COPPA).
Data Handling: Review how the AI system collects, stores, and manages student data. Ensure it adheres to privacy protocols and encryption standards.
Data Ownership: Clarify who owns the data generated or processed by the AI system and establish rights and responsibilities regarding data access and sharing.
Security Measures:
Data Security: Assess the AI system's security measures, including encryption, authentication, and authorization protocols, to safeguard against data breaches and cyber threats.
Access Control: Verify that the AI system has robust access controls to restrict data access to authorized personnel only.
Transparency:
Algorithm Transparency: Seek clarity on how the AI system's algorithms make decisions and whether the vendor can provide insights into the system's decision-making processes.
Data Retention:
Data Retention: Determine how long the AI system retains student data and whether it aligns with your district's data retention policies.
Accessibility and Inclusivity:
Universal Design: Confirm that the AI system adheres to universal design principles, making it accessible to all students, including those with disabilities.
Educational Partner Involvement:
Community Engagement: Involve parents, students, educators, and administrators in the evaluation process to gather their input and address their concerns.
Continuous Monitoring:
Ongoing Assessment: Establish procedures for ongoing monitoring and evaluation of the AI system's compliance with the terms of use and data privacy standards.

Ethical Use Policies

In today's technology-driven world, students are exposed to digital tools and platforms from a young age. It is important to provide them with ethical use guidelines that promote responsible, respectful, and safe online behavior. Involving students in the creation of these guidelines not only ensures their relevance but also empowers them to be active participants in shaping their digital learning environment. This is true of all technologies and is of particular importance as emerging technologies such as AI become more prevalent in society. The following steps may be helpful in developing ethical use policies for AI use with students. It may be beneficial to involve students in the process of creating ethical use guidelines from the beginning. Their input is invaluable in shaping rules that resonate with their experiences.
Provide students with educational resources and discussions on digital ethics, covering topics like online privacy, cyberbullying, plagiarism, and responsible sharing. Facilitate brainstorming sessions or focus groups with students to collect their thoughts, concerns, and ideas regarding online behavior. Work with students to draft the ethical use guidelines collaboratively. Encourage them to express their views and concerns and guide them in turning those ideas into actionable rules. Integrate real-world scenarios and case studies into discussions to help students apply ethical principles to practical situations. Encourage students to review and provide feedback on the drafted guidelines. Peer review fosters a sense of ownership and accountability. Launch the guidelines formally, communicate them to all educational partners, and provide training or workshops to help students understand and embrace them. Establish a process for continuous review and updates to keep the guidelines relevant and responsive to evolving digital challenges. Creating ethical use guidelines for students, with a focus on valuing their voices, is a powerful way to empower them to navigate the digital world responsibly and ethically. These guidelines can provide a framework for fostering a positive digital learning environment, where students can thrive academically and personally while upholding principles of respect, responsibility, and safety. By involving students in this process, we not only teach them valuable life skills but also instill a sense of ownership and shared responsibility for their digital community. Support Mental Health: Emphasize Human Connection While AI offers numerous benefits, it is vital to ensure that the technology enhances rather than replaces the human touch in education, promoting student well-being and emotional development. The introduction of AI into K-12 education has the potential to revolutionize learning experiences. However, it is essential to strike a balance between technological advancement and the preservation of essential human elements, including human connection and mental health support. Encourage educators to maintain strong, personal relationships with students despite the integration of AI. AI should complement, not replace, these connections. AI can be used to facilitate peer collaboration, allowing students to work together, discuss, and learn from one another, fostering social connections. As educational communities integrate AI, they are encouraged to ensure that it intersects with the spirit of State Superintendent of Public Instruction Tony Thurmond’s Mental Health Support Initiative. It is important to educate students about the role of AI data collection in shaping their online experiences. For example, biased data collection can perpetuate stereotypes and discrimination. When data collected reflects biases, it can lead to unfair treatment or exclusion of certain groups. For students, this can result in feelings of marginalization and can negatively affect their mental health. Further mental health challenges may arise outside of the school setting as students indicate preferences on social media platforms and streaming services. The AI algorithms behind these platforms may lead individuals to narrow views. This can hinder constructive dialogue and understanding between students, leading to feelings of isolation and frustration, both of which can negatively impact mental health. 
Building healthy relationships and connections with peers is fundamental to mental health. If AI algorithms steer students toward divisive content, it can impede their ability to engage in open, empathetic conversations and form diverse social connections. Therefore, conversations regarding the impacts of AI on mental health are paramount for students and educators to navigate the digital world.

The integration of AI into K-12 education offers immense potential for enhancing learning experiences. However, it is imperative to prioritize human connection and consider mental health implications as we adopt these technologies. By emphasizing strong teacher-student relationships, promoting peer collaboration, and incorporating social-emotional learning, educators can create a nurturing environment that fosters both academic and emotional growth.

Developing AI in Schools

What does it mean for K-12 students to develop AI in schools? It means students actively engage in the creation, programming, and understanding of artificial intelligence within their educational environment. This concept goes beyond merely using AI tools or applications; it involves students becoming creators and innovators in the field of AI. Students are encouraged not only to learn with AI but also to learn about AI through hands-on exploration. This opportunity can be provided to students through engagement with computer science concept areas, each of which can align with AI.

The national computer science framework, which provided a foundation to guide the development of California's computer science standards, indicates that in early childhood education, "computer science supports play-based pedagogy, extends what educators are already doing in their classrooms, and guides young learners to notice, name, and recognize how computing shapes their world." This concept extends to developing AI in schools as well. Students of all ages may explore data collection via unplugged activities, experience concepts of algorithm design by experimenting with AI prompts, and discuss the impacts of AI on society. These student activities align with the California computer science concepts of data and analysis, algorithms and programming, and impacts of computing. Resources for educators to embed AI lessons into their classrooms are available on the California Educators Together statewide AI group (Login Required). Flexible implementation ideas for providing access to computer science education in K-12 systems are included in the California computer science standards appendix and may be customized according to local context and community needs.

Developing AI in K-12 schools is about empowering students to become active participants in the technology of today and tomorrow. It equips them with the knowledge, skills, and ethical understanding needed to contribute to the responsible and innovative development of artificial intelligence, both during their educational journey and in their future endeavors.

Why? Expand Access for Traditionally Underrepresented Groups

Developing artificial intelligence (AI) in K-12 schools, which may occur through computer science pathways, can act as a powerful means to address systemic bias and expand access for traditionally marginalized groups in computer science, technology, and STEM (science, technology, engineering, mathematics) fields. By integrating AI education with a focus on diversity and inclusion, we can pave the way for a more equitable future in these disciplines.
A lack of diversity and representation in computer science, technology, and STEM fields has long been a concern. In April 2021, the Pew Research Center indicated that LatinX and Black workers are underrepresented in the STEM workforce, while White and Asian workers are overrepresented. Women were reported to hold a quarter or less of all computer and engineering jobs. To address this issue, it is crucial to start early in the educational pipeline, which makes K-12 schools an ideal setting for nurturing diversity, inclusion, and equal access.

Computer science serves as a gateway to STEM at the post-secondary level, particularly for underrepresented populations. The College Board reports that in 2019, the Advanced Placement (AP) Computer Science Principles course was a first AP course for 68% of Black students, 59% of LatinX students, and 60% of first-generation college students. Furthermore, students who take AP Computer Science Principles are more likely to major in computer science or another STEM field than similar students who did not take the course, and the likelihood is further increased for LatinX students and women. AI education in K-12 schools introduces students from all backgrounds to the world of computer science and AI, providing early exposure and demystifying these fields.

The introduction of California's computer science standards highlights the importance of inclusionary computer science opportunities: "The standards are designed for each and every student, including underserved populations: girls, low-income students, homeless students, rural students, African American and LatinX students, students who are English learners, students with disabilities, and foster youth. Students' access to and achievement in computer science must not be predictable on the basis of race, ethnicity, gender, socioeconomic status, language, religion, sexual orientation, cultural affiliation, or special needs."

As educators model diversity in computer science opportunities and emphasize the importance of recognizing and mitigating bias in AI systems, they promote fairness and teach students about ethical AI development.

Empower Learners to Design Systems to Solve Challenges

By fostering student voice, motivation, and problem-solving skills within AI education, we can equip students to become active contributors to the technologies of today and tomorrow. As we incorporate AI education in K-12 schools in a way that provides opportunities for students not only to understand AI but to actively engage with it, we demystify AI, promote critical thinking, and instill motivation to design AI systems that tackle meaningful problems.

As K-12 systems empower students to see AI as a tool they can wield to solve problems, they challenge the perception that AI is the realm of experts. Welcoming student voice in AI education fosters a sense of ownership over learning experiences and allows students to infuse creativity into AI projects, exploring innovative solutions to real-world issues. As students voice their concerns and interests, education in developing AI can help them propose technological solutions to relevant challenges in their lives. Engaging with AI in meaningful ways taps into students' intrinsic motivation, as they see the direct impact of their work on solving authentic problems.
By embracing student voice, promoting problem-solving, and providing opportunities for hands-on learning, we can equip the next generation with the skills and confidence needed to shape the future of AI for the better. This approach not only enriches their educational experience but also prepares them to tackle complex challenges in a rapidly evolving technological landscape.

How? Integrate AI Skills & California Computer Science Standards into Curriculum

Incorporating AI skills and computer science standards into K-12 education can equip students with the knowledge and abilities necessary to navigate and contribute to an increasingly AI-powered world. This integration fosters computational thinking, problem-solving, and innovation, paving the way for students to become AI developers and creators. Infusing AI skills into the curriculum ensures that students are prepared to engage with and contribute to emerging technologies that are reshaping industries and society. California's computer science standards provide a comprehensive foundation that includes essential concepts such as algorithms, data analysis, and programming, which are key components of AI development. The computer science standards emphasize interdisciplinary learning, connecting AI with various subjects and applications, from healthcare to engineering to agriculture to art and more.

Infusing AI skills and computer science standards into K-12 education aligns curriculum with the needs of the modern world, fosters critical thinking and creativity, and prepares students to become active contributors to the AI revolution. As we provide educators with the professional learning and resources needed to integrate AI, we provide all students with the opportunity to gain the skills and knowledge required to navigate and shape the AI-driven future. This approach not only enriches educational experiences but also empowers students to innovate and create in an increasingly AI-powered world.

Integrate 5 Big Ideas of AI

The field of artificial intelligence (AI) is rapidly evolving, and its impact on society is profound. To ensure that K-12 students are well-prepared to navigate this AI-powered world, the 5 Big Ideas of AI can be integrated into the curriculum to meet students' developmental levels. These ideas serve as a foundational framework that aligns with educational goals and enhances students' cognitive, computational, and ethical skills.

The 5 Big Ideas of AI:

Perception: Computers perceive the world using sensors. Perception is the process of extracting meaning from sensory signals. Making computers "see" and "hear" well enough for practical use is one of the most significant achievements of AI to date.

Representation & Reasoning: Agents maintain representations of the world and use them for reasoning. Representation is one of the fundamental problems of intelligence, both natural and artificial. Computers construct representations using data structures, and these representations support reasoning algorithms that derive new information from what is already known. While AI agents can reason about very complex problems, they do not think the way a human does.

Learning: Computers can learn from data. Machine learning is a kind of statistical inference that finds patterns in data. Many areas of AI have progressed significantly in recent years thanks to learning algorithms that create new representations. For the approach to succeed, tremendous amounts of data are required. This "training data" must usually be supplied by people, but is sometimes acquired by the machine itself.
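To make the "Learning" idea concrete for a classroom demonstration, the short Python sketch below shows a program that is never given a grading rule; instead it infers a simple pattern (a best-fit line) from example data and uses that pattern to make a prediction. The scenario and numbers (hours studied versus quiz score) are invented for illustration and are not part of the CDE guidance or the AI for K12 materials.

# Minimal illustration of the "Learning" big idea: find a pattern in data,
# then use it to predict. The data points are hypothetical (hours studied, quiz score).
data = [(1, 55), (2, 62), (3, 70), (4, 76), (5, 85)]

# Ordinary least squares for a line y = slope * x + intercept,
# computed from the examples rather than hard-coded by a person.
n = len(data)
mean_x = sum(x for x, _ in data) / n
mean_y = sum(y for _, y in data) / n
slope = sum((x - mean_x) * (y - mean_y) for x, y in data) / sum((x - mean_x) ** 2 for x, _ in data)
intercept = mean_y - slope * mean_x

# The learned pattern can now make a prediction for an unseen input.
hours = 6
predicted = slope * hours + intercept
print(f"Learned rule: score ~= {slope:.1f} * hours + {intercept:.1f}")
print(f"Predicted score after {hours} hours of study: {predicted:.0f}")

A natural discussion point for students is that the prediction is only as good as the examples it was learned from, which connects this idea directly to the Societal Impact idea below: biased or unrepresentative training data produces biased predictions.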
Natural Interaction: Intelligent agents require many kinds of knowledge to interact naturally with humans. Agents must be able to converse in human languages, recognize facial expressions and emotions, and draw upon knowledge of culture and social conventions to infer intentions from observed behavior. All of these are difficult problems. Today's AI systems can use language to a limited extent, but lack the general reasoning and conversational capabilities of even a child.

Societal Impact: AI can impact society in both positive and negative ways. AI technologies are changing the ways we work, travel, communicate, and care for each other. But we must be mindful of the harms that can potentially occur. For example, biases in the data used to train an AI system could lead to some people being less well served than others. Thus, it is important to discuss the impacts that AI is having on our society and develop criteria for the ethical design and deployment of AI-based systems.

Integrating the 5 Big Ideas of AI into K-12 education is a forward-looking strategy that supports educational goals by fostering critical thinking, creativity, problem-solving, ethical awareness, and career readiness. This approach not only equips students with AI knowledge but also prepares them to be informed and responsible citizens in an increasingly AI-driven world. The 5 Big Ideas of AI provide a solid foundation for AI education that aligns with the broader educational mission of preparing students for success in college/career for today and into the future.

References

AI for K12 Initiative. A joint project of the Association for the Advancement of Artificial Intelligence (AAAI) and the Computer Science Teachers Association (CSTA), funded by National Science Foundation Award DRL-1846073.
California Department of Education. (2018). Computer Science Standards for California Public School Kindergarten through Grade Twelve (PDF).
CAST. (2018). Universal Design for Learning Guidelines version 2.2.
Every Student Succeeds Act of 2015, Pub. L. No. 114-95, 20 U.S.C.A. 6301 (2016).
K–12 Computer Science Framework. (2016).
Wyatt, Jeff, Jing Feng, and Maureen Ewing. (2020). "AP Computer Science Principles and the STEM and Computer Science Pipelines." College Board.
Maslej, Nestor, Loredana Fattorini, Erik Brynjolfsson, John Etchemendy, Katrina Ligett, Terah Lyons, James Manyika, Helen Ngo, Juan Carlos Niebles, Vanessa Parli, Yoav Shoham, Russell Wald, Jack Clark, and Raymond Perrault. (2023). "The AI Index 2023 Annual Report." AI Index Steering Committee, Institute for Human-Centered AI, Stanford University, Stanford, CA.
Pew Research Center. (2021, April). "STEM Jobs See Uneven Progress in Increasing Gender, Racial and Ethnic Diversity."
World Economic Forum. (2023). Future of Jobs 2023 Insight Report. Geneva.
2022-12-01T00:00:00
https://www.cde.ca.gov/ci/pl/aiincalifornia.asp
[ { "date": "2022/12/01", "position": 45, "query": "AI education" }, { "date": "2022/12/01", "position": 59, "query": "artificial intelligence education" }, { "date": "2023/03/01", "position": 52, "query": "AI education" }, { "date": "2023/06/01", "position": 17, "query": "AI education" }, { "date": "2023/09/01", "position": 51, "query": "AI education" }, { "date": "2023/09/13", "position": 95, "query": "AI labor market trends" }, { "date": "2023/09/13", "position": 3, "query": "artificial intelligence education" }, { "date": "2023/11/01", "position": 47, "query": "AI education" }, { "date": "2023/11/01", "position": 62, "query": "artificial intelligence education" }, { "date": "2023/12/01", "position": 63, "query": "artificial intelligence education" }, { "date": "2024/01/01", "position": 11, "query": "AI education" }, { "date": "2024/03/01", "position": 44, "query": "AI education" }, { "date": "2024/05/01", "position": 14, "query": "AI education" }, { "date": "2024/07/01", "position": 42, "query": "AI education" }, { "date": "2024/09/01", "position": 38, "query": "artificial intelligence education" }, { "date": "2024/10/01", "position": 44, "query": "AI education" }, { "date": "2024/11/01", "position": 48, "query": "AI education" }, { "date": "2024/12/01", "position": 65, "query": "artificial intelligence education" }, { "date": "2025/05/22", "position": 64, "query": "artificial intelligence education" }, { "date": "2025/05/22", "position": 16, "query": "AI education" }, { "date": "2025/05/22", "position": 9, "query": "AI education" }, { "date": "2025/05/22", "position": 74, "query": "machine learning workforce" } ]
AI Course for Educators - AI for Education
AI Course for Educators — AI for Education
https://www.aiforeducation.io
[]
This FREE 2-hour, hands-on course is designed to help educators get started using ChatGPT to save time, engage students, and implement AI responsibly.
We designed this course to give you all the tools necessary to get started with AI in your classroom.
2022-12-01T00:00:00
https://www.aiforeducation.io/ai-course
[ { "date": "2022/12/01", "position": 47, "query": "AI education" }, { "date": "2023/03/01", "position": 49, "query": "AI education" }, { "date": "2023/06/01", "position": 48, "query": "AI education" }, { "date": "2023/09/01", "position": 47, "query": "AI education" }, { "date": "2023/11/01", "position": 51, "query": "AI education" }, { "date": "2024/01/01", "position": 48, "query": "AI education" }, { "date": "2024/03/01", "position": 49, "query": "AI education" }, { "date": "2024/05/01", "position": 46, "query": "AI education" }, { "date": "2024/07/01", "position": 49, "query": "AI education" }, { "date": "2024/10/01", "position": 50, "query": "AI education" }, { "date": "2024/11/01", "position": 50, "query": "AI education" }, { "date": "2025/04/01", "position": 28, "query": "AI education" }, { "date": "2025/05/01", "position": 26, "query": "AI education" } ]
DeepLearning.AI: Start or Advance Your Career in AI
DeepLearning.AI: Start or Advance Your Career in AI
https://www.deeplearning.ai
[]
DeepLearning.AI | Andrew Ng | Join over 7 million people learning how to use and build AI through our online courses. Earn certifications, level up your ...
Jul 02, 2025: Amazon's $100 Billion Bet, Meta's Sensor-Packed Glasses, Anthropic's Reason-Free Reasoning, Google's Extreme Weather Prediction. The Batch AI News and Insights: I'd like to share a tip for getting more practice building with AI — that is, either using AI building blocks to build applications or using AI coding assistance to create powerful applications quickly.
2022-12-01T00:00:00
https://www.deeplearning.ai/
[ { "date": "2022/12/01", "position": 48, "query": "AI education" }, { "date": "2022/12/01", "position": 68, "query": "artificial intelligence education" }, { "date": "2023/03/01", "position": 45, "query": "AI education" }, { "date": "2023/06/01", "position": 50, "query": "AI education" }, { "date": "2023/09/01", "position": 49, "query": "AI education" }, { "date": "2023/11/01", "position": 52, "query": "AI education" }, { "date": "2023/11/01", "position": 67, "query": "artificial intelligence education" }, { "date": "2023/12/01", "position": 66, "query": "artificial intelligence education" }, { "date": "2024/01/01", "position": 50, "query": "AI education" }, { "date": "2024/03/01", "position": 51, "query": "AI education" }, { "date": "2024/05/01", "position": 52, "query": "AI education" }, { "date": "2024/07/01", "position": 50, "query": "AI education" }, { "date": "2024/09/01", "position": 76, "query": "artificial intelligence education" }, { "date": "2024/10/01", "position": 52, "query": "AI education" }, { "date": "2024/11/01", "position": 49, "query": "AI education" }, { "date": "2024/12/01", "position": 68, "query": "artificial intelligence education" }, { "date": "2025/02/01", "position": 87, "query": "artificial intelligence education" }, { "date": "2025/04/01", "position": 59, "query": "AI education" }, { "date": "2025/05/01", "position": 53, "query": "AI education" }, { "date": "2025/06/25", "position": 23, "query": "AI education" }, { "date": "2025/06/25", "position": 50, "query": "AI employers" }, { "date": "2025/06/01", "position": 75, "query": "artificial intelligence education" } ]
Teach AI - aiEDU
Teach AI — aiEDU
https://www.aiedu.org
[]
Teach AI. Whether you're taking a deep dive or just dipping your toes in, we ... Get our Educator Newsletter! EDUCATION. Professional Learning · Project ...
Are you ready to bring AI to your students but not sure where to begin? Whether you have a few minutes or a few months, we have something for you.
2022-12-01T00:00:00
https://www.aiedu.org/teach-ai
[ { "date": "2022/12/01", "position": 50, "query": "AI education" }, { "date": "2023/03/01", "position": 54, "query": "AI education" }, { "date": "2023/06/01", "position": 54, "query": "AI education" }, { "date": "2023/09/01", "position": 54, "query": "AI education" }, { "date": "2023/11/01", "position": 54, "query": "AI education" }, { "date": "2024/01/01", "position": 51, "query": "AI education" }, { "date": "2024/03/01", "position": 57, "query": "AI education" }, { "date": "2024/05/01", "position": 50, "query": "AI education" }, { "date": "2024/07/01", "position": 52, "query": "AI education" }, { "date": "2024/10/01", "position": 55, "query": "AI education" }, { "date": "2024/11/01", "position": 55, "query": "AI education" }, { "date": "2025/04/01", "position": 62, "query": "AI education" }, { "date": "2025/05/01", "position": 57, "query": "AI education" } ]
International Journal of Artificial Intelligence in Education
International Journal of Artificial Intelligence in Education
https://link.springer.com
[ "Chenyan Zhao", "Mariana Silva", "Seth Poulsen", "Hiroaki Funayama", "Yuichiroh Matsubayashi", "Kentaro Inui", "Lovis Heindrich", "Saksham Consul", "Falk Lieder", "Osman Tasdelen" ]
The International Journal of Artificial Intelligence in Education (IJAIED) publishes papers concerned with the application of AI to education.
The International Journal of Artificial Intelligence in Education (IJAIED) publishes papers concerned with the application of AI to education. It aims to help the development of principles for the design of computer-based learning systems. Its premise is that such principles involve the modeling and representation of relevant aspects of knowledge, before implementation or during execution, and hence require the application of AI techniques and concepts. IJAIED has a very broad notion of the scope of AI and of a 'computer-based learning system'. Coverage extends to agent-based learning environments, architectures for AIED systems, bayesian and statistical methods, cognitive tools for learning, computer-assisted language learning, distributed learning environments, educational robotics, human factors and interface design, intelligent agents on the internet, natural language interfaces for instructional systems, real-world applications of AIED systems, tools for administration and curriculum integration, and more. The International Journal of Artificial Intelligence in Education is published in conjunction with the International Artificial Intelligence in Education Society (IAIED). Please note: the median time indicated below is computed over all the submitted manuscripts including the ones that are not put into the review pipeline at the onset of the review process. The typical time to first decision for manuscripts is approximately 41 days.
2022-12-01T00:00:00
https://link.springer.com/journal/40593
[ { "date": "2022/12/01", "position": 51, "query": "AI education" }, { "date": "2022/12/01", "position": 26, "query": "artificial intelligence education" }, { "date": "2023/03/01", "position": 46, "query": "AI education" }, { "date": "2023/06/01", "position": 49, "query": "AI education" }, { "date": "2023/09/01", "position": 50, "query": "AI education" }, { "date": "2023/11/01", "position": 50, "query": "AI education" }, { "date": "2023/11/01", "position": 25, "query": "artificial intelligence education" }, { "date": "2023/12/01", "position": 27, "query": "artificial intelligence education" }, { "date": "2024/01/01", "position": 49, "query": "AI education" }, { "date": "2024/03/01", "position": 50, "query": "AI education" }, { "date": "2024/05/01", "position": 51, "query": "AI education" }, { "date": "2024/07/01", "position": 51, "query": "AI education" }, { "date": "2024/08/01", "position": 9, "query": "artificial intelligence education" }, { "date": "2024/09/01", "position": 25, "query": "artificial intelligence education" }, { "date": "2024/10/01", "position": 51, "query": "AI education" }, { "date": "2024/11/01", "position": 47, "query": "AI education" }, { "date": "2024/12/01", "position": 23, "query": "artificial intelligence education" }, { "date": "2025/02/01", "position": 8, "query": "artificial intelligence education" }, { "date": "2025/04/01", "position": 58, "query": "AI education" }, { "date": "2025/05/01", "position": 58, "query": "AI education" }, { "date": "2025/06/01", "position": 22, "query": "artificial intelligence education" } ]
OpenAI Academy
OpenAI Academy
https://academy.openai.com
[]
Unlock the new opportunities of the AI era by equipping yourself with the knowledge and skills to harness artificial intelligence effectively.
People around the world are embracing generative AI to solve complex problems and enhance creativity, productivity, and learning. OpenAI Academy will help them harness AI’s transformative power through workshops, discussions, and digital content—ranging from foundational AI literacy to advanced integration for engineers. With a mix of online and in-person events, the Academy fosters a vibrant, collaborative community where participants of all backgrounds can gain actionable insights and drive innovation in their fields.
2022-12-01T00:00:00
https://academy.openai.com/
[ { "date": "2022/12/01", "position": 52, "query": "AI education" }, { "date": "2022/12/01", "position": 78, "query": "artificial intelligence education" }, { "date": "2023/03/01", "position": 63, "query": "AI education" }, { "date": "2023/06/01", "position": 47, "query": "AI education" }, { "date": "2023/09/01", "position": 46, "query": "AI education" }, { "date": "2023/11/01", "position": 49, "query": "AI education" }, { "date": "2024/01/01", "position": 47, "query": "AI education" }, { "date": "2024/03/01", "position": 47, "query": "AI education" }, { "date": "2024/05/01", "position": 49, "query": "AI education" }, { "date": "2024/07/01", "position": 48, "query": "AI education" }, { "date": "2024/10/01", "position": 48, "query": "AI education" }, { "date": "2024/11/01", "position": 46, "query": "AI education" }, { "date": "2025/02/20", "position": 28, "query": "AI education" }, { "date": "2025/06/01", "position": 88, "query": "artificial intelligence education" } ]
K-12 AI Education Program - AI | University of Florida
K-12 AI Education Program
https://ai.ufl.edu
[]
Explore K-12 initiatives that bring AI education to younger students, helping them develop skills and understanding needed in a technology-driven world.
AI Shapes Students and Society

Florida is among the first states to adopt a K-12 artificial intelligence education program to prepare its youth for the growing global demand for an AI-enabled workforce. As educators, you are key to preparing students to thrive in a world where AI is an integral part of their lives and careers. Today's students will face AI ubiquitously throughout their careers, so it is important they understand what it is and how to use it properly. UF faculty from the Herbert Wertheim College of Engineering and the College of Education have been at the helm of designing the framework for the Florida public schools' AI coursework. This framework is modeled after the National Science Foundation's national guidelines for teaching AI in elementary and secondary schools. AI coursework increases students' awareness of how AI is used in their everyday lives. It also increases their knowledge of how AI works in the world.
2022-12-01T00:00:00
https://ai.ufl.edu/teaching-with-ai/k-12-ai-education-program/
[ { "date": "2022/12/01", "position": 53, "query": "AI education" }, { "date": "2023/03/01", "position": 42, "query": "AI education" }, { "date": "2023/06/01", "position": 51, "query": "AI education" }, { "date": "2023/09/01", "position": 52, "query": "AI education" }, { "date": "2023/11/01", "position": 60, "query": "AI education" }, { "date": "2024/01/01", "position": 58, "query": "AI education" }, { "date": "2024/03/01", "position": 52, "query": "AI education" }, { "date": "2024/05/01", "position": 59, "query": "AI education" }, { "date": "2024/07/01", "position": 39, "query": "AI education" }, { "date": "2024/10/01", "position": 45, "query": "AI education" }, { "date": "2024/11/01", "position": 56, "query": "AI education" }, { "date": "2025/04/01", "position": 50, "query": "AI education" }, { "date": "2025/05/01", "position": 62, "query": "AI education" } ]
Learn AI with courses and programs - edX
Best AI courses and programs
https://www.edx.org
[]
Some AI education options to consider include: Courses: AI courses are a great way to learn the key concepts of artificial intelligence. They typically last ...
Artificial intelligence (AI) simulates human intelligence using computer systems. It can solve many problems almost as efficiently or more efficiently than a person, using huge amounts of information and data as its training medium. Around the world, there's a growing demand for people who understand AI and can implement it in commercial and support settings. If you're interested in AI upskilling, there are many training programs, ranging from simple courses to university degrees, available on edX.
2022-12-01T00:00:00
https://www.edx.org/learn/artificial-intelligence
[ { "date": "2022/12/01", "position": 55, "query": "AI education" }, { "date": "2022/12/01", "position": 58, "query": "artificial intelligence education" }, { "date": "2023/03/01", "position": 48, "query": "AI education" }, { "date": "2023/11/01", "position": 56, "query": "artificial intelligence education" }, { "date": "2023/12/01", "position": 53, "query": "artificial intelligence education" }, { "date": "2024/09/01", "position": 61, "query": "artificial intelligence education" }, { "date": "2024/12/01", "position": 55, "query": "artificial intelligence education" }, { "date": "2025/02/01", "position": 72, "query": "artificial intelligence education" }, { "date": "2025/02/01", "position": 90, "query": "artificial intelligence healthcare" }, { "date": "2025/03/01", "position": 91, "query": "artificial intelligence healthcare" }, { "date": "2025/04/01", "position": 61, "query": "AI education" }, { "date": "2025/04/01", "position": 80, "query": "artificial intelligence healthcare" }, { "date": "2025/05/01", "position": 59, "query": "AI education" }, { "date": "2025/06/01", "position": 59, "query": "artificial intelligence education" } ]
The Future of AI in K-12 Education - UC San Diego Today
The Future of AI in K-12 Education
https://today.ucsd.edu
[]
AI could potentially change education drastically,” says UC San Diego education scholar Amy Eguchi – who is both excited and concerned about ...
AI tools, on the other hand, are far more accessible. From questions about whether students are using AI to complete assignments to the rise of AI chatbot tutors, Eguchi says artificial intelligence is already "shaking up" education. We sat down with her to learn more.

What are some of the benefits and challenges of the growing use of AI in schools?

AI has three different sides: one is to use AI, one is to teach with AI, and one is to teach about AI. But somehow people are just talking about using AI. We need to talk about all three, and then we can talk about how to use it in classrooms. Teachers need to fully understand how AI actually works so that they can make informed decisions on when and how to use it. I recently met with a kindergarten teacher who was very worried and asked me, 'Do I really have to use AI with my kids? I don't even know what it is.' She was under such pressure, and that's not healthy. That's an incredibly difficult and unfair situation for teachers to be placed in. That's why AI literacy — and supporting teachers with the integration of AI literacy in their classrooms — is a main priority for me. It's very important to slow down and make sure teachers feel comfortable and confident before integrating AI into schools. It's also important to think about how to use AI in age-appropriate ways and to address privacy issues. So there are a lot of missing pieces at this point, but I am optimistic. AI has the potential to make our lives easier — potentially helping us become more productive and creative — if we know how to use it more like a collaborative partner.

How do these innovations contribute to the changing landscape of education? Are you fearful, hopeful, or something else?
2022-12-01T00:00:00
https://today.ucsd.edu/story/the-future-of-ai-in-k-12-education
[ { "date": "2022/12/01", "position": 56, "query": "AI education" }, { "date": "2022/12/01", "position": 70, "query": "artificial intelligence education" }, { "date": "2023/02/01", "position": 82, "query": "future of work AI" }, { "date": "2023/03/01", "position": 56, "query": "AI education" }, { "date": "2023/06/01", "position": 58, "query": "AI education" }, { "date": "2023/09/01", "position": 64, "query": "AI education" }, { "date": "2023/11/01", "position": 61, "query": "AI education" }, { "date": "2023/12/01", "position": 68, "query": "artificial intelligence education" }, { "date": "2024/01/01", "position": 63, "query": "AI education" }, { "date": "2024/03/01", "position": 62, "query": "AI education" }, { "date": "2024/05/01", "position": 62, "query": "AI education" }, { "date": "2024/07/01", "position": 63, "query": "AI education" }, { "date": "2024/10/01", "position": 62, "query": "AI education" }, { "date": "2024/11/01", "position": 62, "query": "AI education" }, { "date": "2024/12/01", "position": 67, "query": "artificial intelligence education" }, { "date": "2025/04/01", "position": 74, "query": "AI education" }, { "date": "2025/05/01", "position": 63, "query": "AI education" } ]
7 AI Tools That Help Teachers Work More Efficiently - Edutopia
7 AI Tools That Help Teachers Work More Efficiently
https://www.edutopia.org
[]
Creating and supplementing content: Through AI-powered platforms, teachers can curate a range of educational resources. With generative AI in particular, ...
Over the past five years, I've explored ways to integrate AI into my teaching practice—even before ChatGPT and other generative AI became some of the most talked-about topics in education. Every educator needs to learn about AI and how we can leverage this technology to benefit our students and enhance our own work. To best provide for our students, we need to understand how this technology will impact them and us. What better way than to explore new tools via AI in our teaching practice?

Our roles as educators have continued to change over the years. With technology comes a bit of hesitancy, especially with something as powerful as AI. In our schools, we have to provide opportunities for students to learn about changing technology because of the impact it may have on their future. Not only can AI tools enhance creativity and productivity, but also they can provide educators with valuable insights into student learning and assist with some of the time-consuming tasks that educators have.

Even with all of the promises of AI, it is important that we take time to talk about artificial intelligence in our classrooms. Not only do we teach the content, but we serve as mentors, facilitators of learning, and co-learners with our students, especially as we embrace these emerging powerful technologies. It's important that we help our students learn about the benefits of them and also show how to use these tools properly, responsibly, and ethically.

How AI Can Improve A Teacher's Job

Personalized learning: Educators can provide tailored learning experiences based on AI-driven analytics that provide valuable insights into student performance and learning trends. Using this data, AI can instantly adapt student learning materials. Teachers can then use this information to provide personalized learning experiences, adapting to each student's strengths, weaknesses, and learning pace.

Productivity and efficiency: Greater efficiency comes with AI as well. Educators are responsible for a variety of clerical tasks, such as communicating with students and their families, grading assessments, and providing feedback. Educators may find they spend more time on these clerical tasks rather than on teaching and working directly with students. The right AI tools can help to automate or streamline these tasks, which allows teachers to have additional time with their students.

Creating and supplementing content: Through AI-powered platforms, teachers can curate a range of educational resources. With generative AI in particular, teachers are able to create lessons, activities, assessments, prompts for discussion, and presentations simply by providing a short prompt with keywords.

Here are seven AI-powered tools that will help teachers with personalized learning and enable them to become more efficient, saving time that can then be spent with students. I have used each of these for my own personal writing and creating of presentations, and the amount of time they save by generating the slides alone helps me to focus more closely on the content. I also appreciate that the tools offer translation options and a variety of templates and other resources that are commonly used by educators.

1. AudioPen: For years, I have been using voice-to-text to write blogs, books, emails, and lesson plans. This is an AI-powered web app that you can use on your computer or phone. The app takes your words and enhances them as it generates the text, which you can edit as needed.
2. Canva Magic Write: Canva now offers an AI-powered writing assistant called Magic Write, which can inspire creativity in writing. It provides ideas, helps with brainstorming, and supports lesson planning, making it a useful tool for educators for creating a presentation or other graphic for classroom use. Magic Write can assist with many writing tasks that educators may have by analyzing the word prompts and then helping with brainstorming, creating an outline, writing lesson plans, or generating a visually engaging presentation in far less time.

3. Curipod: This website enables teachers to create interactive lessons in minutes using AI. Students can explore various topics, and the AI functionality helps generate customized lessons tailored to their learning needs. Teachers simply type in a topic, and a ready-to-run lesson is generated with text, images, and activities such as polls, open-ended responses, word clouds, and more. There are even activities to build in that focus on SEL check-ins.

4. Eduaide.Ai: This is an AI-assisted lesson-development tool that provides educators with more than 100 resource types to choose from to create high-quality instructional materials. It offers the ability to translate the generated content into more than 15 languages instantly. Educators can generate a syllabus, create discussion prompts, use the "teaching assistant" for help with creating individualized education program plans, write emails, or even compile a list of accommodations for students. Eduaide.Ai has a content generator, teaching assistant, feedback bot, free-form chat, and assessment builder.

5. OpenAI: The recently released Teaching with AI guide for teachers was created to help educators use ChatGPT in their classroom. The guide comes with several suggested prompts and includes explanations that clarify exactly how ChatGPT works and what its limitations are, and it provides reminders of the importance of verifying information and checking for bias. With ChatGPT 4, which is a paid version, there is greater accuracy and reliability of information than with the original version.

6. Quizizz: With Quizizz, teachers can design quizzes that will create a personalized learning path based on each student's responses. Teachers can also create lessons with Quizizz, which now has an AI enhancement that can adjust question difficulty, check grammar, and redesign questions to reflect real-world scenarios, with more features on the way.

7. Slidesgo: This tool provides access to free templates via Google Slides and now has the AI Presentation Maker. With this new functionality, presentations can be created within minutes. Simply choose a topic; select a tone such as casual, creative, or professional; make changes; and download your presentation. A time-saver for sure!
2022-12-01T00:00:00
https://www.edutopia.org/article/7-ai-tools-that-help-teachers-work-more-efficiently/
[ { "date": "2022/12/01", "position": 64, "query": "AI education" }, { "date": "2023/03/01", "position": 59, "query": "AI education" }, { "date": "2023/06/01", "position": 60, "query": "AI education" }, { "date": "2023/09/01", "position": 61, "query": "AI education" }, { "date": "2023/11/01", "position": 62, "query": "AI education" }, { "date": "2024/01/01", "position": 56, "query": "AI education" }, { "date": "2024/03/01", "position": 59, "query": "AI education" }, { "date": "2024/05/01", "position": 57, "query": "AI education" }, { "date": "2024/07/01", "position": 60, "query": "AI education" }, { "date": "2024/10/01", "position": 59, "query": "AI education" }, { "date": "2024/11/01", "position": 61, "query": "AI education" }, { "date": "2025/01/25", "position": 23, "query": "AI education" }, { "date": "2025/04/01", "position": 35, "query": "AI education" }, { "date": "2025/05/01", "position": 32, "query": "AI education" } ]
Q&A: Education researcher discusses the future of AI in K-12 ...
Q&A: Education researcher discusses the future of AI in K-12 education
https://phys.org
[ "Melanie Poppel", "Inga Kiderra", "University Of California - San Diego" ]
"AI could potentially change education drastically," says UC San Diego education scholar Amy Eguchi, who is both excited and concerned about ...
Image caption: Eguchi with a robot called Pepper, who's wearing a VR headset and is involved in a human-robot performance project with colleagues from the UC San Diego School of Arts and Humanities and Jacobs School of Engineering. Credit: Amy Eguchi

"AI could potentially change education drastically," says UC San Diego education scholar Amy Eguchi, who is both excited and concerned about the prospect. Technological innovation is nothing new to Eguchi. A teaching professor in the Department of Education Studies at UC San Diego's School of Social Sciences, she has extensive experience in robotics and computing education. She's been involved with RoboCupJunior—an international robotics competition for kids and teens—for more than 20 years. She also advises the Los Angeles Unified School District's AI Task Force and leads a project to advance STEM learning in refugee communities. Thanks to this and other efforts, Eguchi has developed a deep understanding of how to apply emerging technologies in ways that serve students and schools.

While some may dream—or worry—about a future with robot teachers, Eguchi says not so fast. Robots are increasingly used around the world for tasks like food delivery and cooking, and studies suggest they may be helpful in language practice and with students on the autism spectrum. But they're still expensive, difficult to maintain, and far from widespread adoption in classrooms. AI tools, on the other hand, are far more accessible. From questions about whether students are using AI to complete assignments to the rise of AI chatbot tutors, Eguchi says artificial intelligence is already "shaking up" education. We sat down with her to learn more.

What are some of the benefits and challenges of the growing use of AI in schools?

AI has three different sides: one is to use AI, one is to teach with AI, and one is to teach about AI. But somehow people are just talking about using AI. We need to talk about all three, and then we can talk about how to use it in classrooms. Teachers need to fully understand how AI actually works so that they can make informed decisions on when and how to use it. I recently met with a kindergarten teacher who was very worried and asked me, "Do I really have to use AI with my kids? I don't even know what it is." She was under such pressure, and that's not healthy. That's an incredibly difficult and unfair situation for teachers to be placed in. That's why AI literacy—and supporting teachers with the integration of AI literacy in their classrooms—is a main priority for me. It's very important to slow down and make sure teachers feel comfortable and confident before integrating AI into schools. It's also important to think about how to use AI in age-appropriate ways and to address privacy issues. So there are a lot of missing pieces at this point, but I am optimistic. AI has the potential to make our lives easier—potentially helping us become more productive and creative—if we know how to use it more like a collaborative partner.

How do these innovations contribute to the changing landscape of education? Are you fearful, hopeful, or something else?

I'm hopeful that this could be a long-awaited tipping point where teaching and learning will be drastically transformed, since education is one of the only systems that has not changed much since its inception.
AI is here to stay, so we have to decide to make it positive in the ways we want. If we don't do anything, it could have negative effects. This feels like a big change, but people were also worried when computers and smartphones were first introduced in schools. We have adapted to those and will adapt to life with AI too.

One of the fundamental goals of education is to prepare our students to contribute meaningfully to society. With things changing as rapidly as they are, we need to equip students with the competencies they'll need to navigate and thrive in a constantly evolving world. We should emphasize learning how to learn and how to adapt. With AI technology, teachers will need to change learning priorities. Mastering proficiency will not be as important as it has been. Students may need to place greater emphasis on learning how to analyze and evaluate content for accuracy, as current AI tools are not reliable for fact-checking. They will also need strong skills in editing and refining what they create. Greater use of AI in the classroom could encourage a greater focus on deeper learning instead of a focus on memorization.

You are a teacher of teachers, and you offer a number of courses about teaching computation, including one on teaching with educational robots. What do you hope your students implement in their own future classrooms?

In my course, students practice using educational robotics kits themselves before learning how to teach with them. It is very important that they get a grasp on these tools by problem solving and learning from their mistakes. AI, robotics, and computer science are unique from other subjects because many people do not have prior learning experience with them. For subjects like math and writing, students have a foundational understanding from their own education, so teaching about them feels more familiar. But since these tools are still new, we need to begin by engaging in our own learning experiences first.

Any parting thoughts on the future of AI in education?

AI has a lot of potential to do good in education, but we have to be very intentional about its implementation. Also, keep in mind the connection between AI and robotics. From this point, moving forward, robots will have AI embedded in them. That is allowing robots to process information much faster and in more sophisticated ways. They'll be able to do a lot more tasks soon. It's not at the level we see in the movies, but that could be coming down the pike. Again, we need to be intentional about the implementation—and also make sure that all students and all teachers have access to high-quality learning opportunities.
2022-12-01T00:00:00
https://phys.org/news/2025-07-qa-discusses-future-ai.html
[ { "date": "2022/12/01", "position": 67, "query": "AI education" }, { "date": "2023/03/01", "position": 66, "query": "AI education" }, { "date": "2023/06/01", "position": 70, "query": "AI education" }, { "date": "2023/09/01", "position": 71, "query": "AI education" }, { "date": "2023/11/01", "position": 70, "query": "AI education" }, { "date": "2024/01/01", "position": 69, "query": "AI education" }, { "date": "2024/03/01", "position": 73, "query": "AI education" }, { "date": "2024/05/01", "position": 72, "query": "AI education" }, { "date": "2024/07/01", "position": 73, "query": "AI education" }, { "date": "2024/10/01", "position": 73, "query": "AI education" }, { "date": "2024/11/01", "position": 72, "query": "AI education" } ]
A free online introduction to artificial intelligence for non-experts
A free online introduction to artificial intelligence for non-experts
https://www.elementsofai.com
[]
Learn more about MinnaLearn's and the University of Helsinki's AI course - no programming or complicated math required.
Our goal is to demystify AI The Elements of AI is a series of free online courses created by MinnaLearn and the University of Helsinki. We want to encourage as broad a group of people as possible to learn what AI is, what can (and can’t) be done with AI, and how to start creating AI methods. The courses combine theory with practical exercises and can be completed at your own pace.
2022-12-01T00:00:00
https://www.elementsofai.com/
[ { "date": "2022/12/01", "position": 68, "query": "AI education" }, { "date": "2022/12/01", "position": 82, "query": "artificial intelligence education" }, { "date": "2023/03/01", "position": 69, "query": "AI education" }, { "date": "2023/06/01", "position": 67, "query": "AI education" }, { "date": "2023/09/01", "position": 67, "query": "AI education" }, { "date": "2023/11/01", "position": 68, "query": "AI education" }, { "date": "2023/11/01", "position": 78, "query": "artificial intelligence education" }, { "date": "2023/12/01", "position": 74, "query": "artificial intelligence education" }, { "date": "2024/01/01", "position": 68, "query": "AI education" }, { "date": "2024/03/01", "position": 68, "query": "AI education" }, { "date": "2024/05/01", "position": 69, "query": "AI education" }, { "date": "2024/07/01", "position": 69, "query": "AI education" }, { "date": "2024/09/01", "position": 83, "query": "artificial intelligence education" }, { "date": "2024/10/01", "position": 69, "query": "AI education" }, { "date": "2024/11/01", "position": 68, "query": "AI education" }, { "date": "2024/12/01", "position": 82, "query": "artificial intelligence education" }, { "date": "2025/04/01", "position": 64, "query": "AI education" }, { "date": "2025/05/01", "position": 77, "query": "AI education" }, { "date": "2025/06/01", "position": 90, "query": "artificial intelligence education" } ]
AI, Irreality and the Liberal Educational Project - Inside Higher Ed
AI, Irreality and the Liberal Educational Project (opinion)
https://www.insidehighered.com
[ "Jacob Riyeff" ]
AI, Irreality and the Liberal Educational Project. Jacob Riyeff asks how higher education can achieve its aim of scrutinizing reality when ...
I work at Marquette University. As a Roman Catholic, Jesuit university, we’re called to be an academic community that, as Pope John Paul II wrote, “scrutinize[s] reality with the methods proper to each academic discipline.” That’s a tall order, and I remain in the academy, for all its problems, because I find that job description to be the best one on offer, particularly as we have the honor of practicing this scrutinizing along with ever-renewing groups of students. This bedrock assumption of what a university is continues to give me hope for the liberal educational project despite the ongoing neoliberalization of higher education and some administrators’ and educators’ willingness to either look the other way regarding or uncritically celebrate the generative software (commonly referred to as “generative artificial intelligence”) explosion over the last two years. In the time since my last essay in Inside Higher Ed, and as Marquette’s director of academic integrity, I’ve had plenty of time to think about this and to observe praxis. In contrast to the earlier essay, which was more philosophical, let’s get more practical here about how access to generative software is impacting higher education and our students and what we might do differently. At the academic integrity office, we recently had a case in which a student “found an academic article” by prompting ChatGPT to find one for them. The chat bot obeyed, as mechanisms do, and generated a couple pages of text with a title. This was not from any actual example of academic writing but instead was a statistically probable string of text having no basis in the real world of knowledge and experience. The student made a short summary of that text and submitted it. They were, in the end, not found in violation of Marquette’s honor code, since what they submitted was not plagiarized. It was a complex situation to analyze and interpret, done by thoughtful people who care about the integrity of our academic community: The system works. In some ways, though, such activity is more concerning than plagiarism, for, at least when students plagiarize, they tend to know the ways they are contravening social and professional codes of conduct—the formalizations of our principles of working together honestly. In this case, the student didn’t see the difference between a peer-reviewed essay published by an academic journal and a string of probabilistically generated text in a chat bot’s dialogue box. To not see the difference between these two things—or to not care about that difference—is more disconcerting and concerning to me than straightforward breaches of an honor code, however harmful and sad such breaches are. I already hear folks saying: “That’s why we need AI literacy!” We do need to educate our students (and our colleagues) on what generative software is and is not. But that’s not enough. Because one also needs to want to understand and, as is central to the Ignatian Pedagogical Paradigm that we draw upon at Marquette, one must understand in context. Another case this spring term involved a student whom I had spent several months last fall teaching in a writing course that took “critical AI” as its subject matter. Yet this spring term the student still used a chat bot to “find a quote in a YouTube video” for an assignment and then commented briefly on that quote. The problem was that the quote used in the assignment does not appear in the selected video. 
It was a simulacrum of a quote; it was a string of probabilistically generated text, which is all generative software can produce. It did not accurately reflect reality, and the student did not cite the chat bot they’d copied and pasted from, so they were found in violation of the honor code. Another student last term in the Critical AI class prompted Microsoft Copilot to give them quotations from an essay, which it mechanically and probabilistically did. They proceeded to base their three-page argument on these quotations, none of which said anything like what the author in question actually said (not even the same topic); their argument was based in irreality. We cannot scrutinize reality together if we cannot see reality. And many of our students (and colleagues) are, at least at times, not seeing reality right now. They’re seeing probabilistic text as “good enough” as, or conflated with, reality. Let me point more precisely to the problem I’m trying to put my finger on. The student who had a chat bot “find” a quote from a video sent an email to me, which I take to be completely in earnest and much of which I appreciated. They ended the email by letting me know that they still think that “AI” is a really powerful and helpful tool, especially as it “continues to improve.” The cognitive dissonance between the situation and the student’s assertion took me aback. Again: the problem with the “We just need AI literacy” argument. People tend not to learn what they do not want to learn. If our students (and people generally) do not particularly want to do work, and they have been conditioned by the use of computing and their society’s habits to see computing as an intrinsic good, “AI” must be a powerful and helpful tool. It must be able to do all the things that all the rich and powerful people say it does. It must not need discipline or critical acumen to employ, because it will “supercharge” your productivity or give you “10x efficiency” (whatever that actually means). And if that’s the case, all these educators telling you not to offload your cognition must be behind the curve, or reactionaries. At the moment, we can teach at least some people all about “AI literacy” and it will not matter, because such knowledge refuses to jibe with the mythology concerning digital technology so pervasive in our society right now. If we still believe in the value of humanistic, liberal education, we cannot be quiet about these larger social systems and problems that shape our pupils, our selves and our institutions. We cannot be quiet about these limits of vision and questioning. Because not only do universities exist for the scrutinizing of reality with the various methods of the disciplines as noted at the outset of this essay, but liberal education also assumes a view of the human person that does not see education as instrumental but as formative. The long tradition of liberal education, for all its complicity in social stratification down the centuries, assumes that our highest calling is not to make money, to live in comfort, to be entertained. (All three are all right in their place, though we must be aware of how our moneymaking, comfort and entertainment derive from the exploitation of the most vulnerable humans and the other creatures with whom we share the earth, and how they impact our own spiritual health.) 
We are called to growth and wisdom, to caring for the common good of the societies in which we live—which at this juncture certainly involves caring for our common home, the Earth, and the other creatures living with us on it. As Antiqua et nova, the note released from the Vatican’s Dicastery for Culture and Education earlier this year (cited commendingly by secular ed-tech critics like Audrey Watters) reiterates, education plays its role in this by contributing “to the person’s holistic formation in its various aspects (intellectual, cultural, spiritual, etc.) … in keeping with the nature and dignity of the human person.” These objectives of education are not being served by students using generative software to satisfy their instructors’ prompts. And no amount of “literacy” is going to ameliorate the situation on its own. People have to want to change, or to see through the neoliberal, machine-obsessed myth, for literacy to matter. I do believe that the students I’ve referred to are generally striving for the good as they know how. On a practical level, I am confident they’ll go on to lead modestly successful lives as our society defines that term with regard to material well-being. I assume their motivation is not to cause harm or dupe their instructors; they’re taking part in “hustle” culture, “doing school” and possibly overwhelmed by all their commitments. Even if all this is indeed the case, liberal education calls us to more, and it’s the role of instructors and administrators to invite our students into that larger vision again and again. If we refuse to give up on humanistic, liberal education, then what do we do? The answer is becoming clearer by the day, with plenty of folks all over the internet weighing in, though it is one many of us do not really want to hear. Because at least one major part of the answer is that we need to make an education genuinely oriented toward our students. A human-scale education, not an industrial-scale education (let’s recall over and over that computers are industrial technology). The grand irony of the generative software moment for education in neoliberal, late-capitalist society is that it is revealing so many of the limits we’ve been putting on education in the first place. If we can’t “AI literacy” our educational problems away, we have to change our pedagogy. We have to change the ways we interact with our students inside the classroom and out: to cultivate personal relationships with them whenever possible, to model the intellectual life as something that is indeed lived out with the whole person in a many-partied dialogue stretching over millennia, decidedly not as the mere ability to move information around. This is not a time for dismay or defeat but an incitement to do the experimenting, questioning, joyful intellectual work many of us have likely wanted to do all along but have not had a reason to go off script for. This probably means getting creative. Part of getting creative in our day probably means de-computing (as Dan McQuillan at the University of London labels it). To de-compute is to ask ourselves—given our ambient maximalist computing habits of the last couple decades—what is of value in this situation? What is important here? And then: Does a computer add value to this that it is not detracting from in some other way? 
Computers may help educators collect assignments neatly and read them clearly, but if that convenience is outweighed by constantly having to wonder if a student has simply copied and pasted or patch-written text with generative software, is the value of the convenience worth the problems? Likewise, getting creative in our day probably means looking at the forms of our assessments. If the highly structured student essay makes it easier for instructors to assess because of its regularity and predictability, yet that very regularity and predictability make it a form that chat bots can produce fairly readily, well: 1) the value for assessing may not be worth the problems of teeing up chat bot–ifiable assignments and 2) maybe that wasn’t the best form for inviting genuinely insightful and exciting intellectual engagement with our disciplines’ materials in the first place. I’ve experimented with research journals rather than papers, with oral exams as structured conversations, with essays that focus intently on one detail of a text and do not need introductions and conclusions and that privilege the student’s own voice, and other in-person, handmade, leaving-the-classroom kinds of assessments over the last academic year. Not everything succeeded the way I wanted, but it was a lively, interactive year. A convivial year. A year in which mostly I did not have to worry about whether students were automating their educations. We have a chance as educators to rethink everything in light of what we want for our societies and for our students; let’s not miss it because it’s hard to redesign assignments and courses. (And it is hard.) Let’s experiment, for our own sakes and for our students’ sakes. Let’s experiment for the sakes of our institutions that, though they are often scoffed at in our popular discourse, I hope we believe in as vibrant communities in which we have the immense privilege of scrutinizing reality together.
2025-07-08T00:00:00
2025/07/08
https://www.insidehighered.com/opinion/views/2025/07/08/ai-irreality-and-liberal-educational-project-opinion
[ { "date": "2022/12/01", "position": 70, "query": "AI education" }, { "date": "2022/12/01", "position": 85, "query": "artificial intelligence education" }, { "date": "2023/03/01", "position": 67, "query": "AI education" }, { "date": "2023/06/01", "position": 71, "query": "AI education" }, { "date": "2023/09/01", "position": 70, "query": "AI education" }, { "date": "2023/11/01", "position": 72, "query": "AI education" }, { "date": "2024/01/01", "position": 72, "query": "AI education" }, { "date": "2024/03/01", "position": 72, "query": "AI education" }, { "date": "2024/05/01", "position": 73, "query": "AI education" }, { "date": "2024/07/01", "position": 71, "query": "AI education" }, { "date": "2024/10/01", "position": 72, "query": "AI education" }, { "date": "2024/11/01", "position": 71, "query": "AI education" } ]
Machine Learning & AI Courses | Google Cloud Training
Machine Learning & AI Courses
https://cloud.google.com
[]
Take machine learning & AI classes with Google experts. Grow your ML skills with interactive labs. Deploy the latest AI technology. Start learning!
Machine learning and artificial intelligence Did you know that the adoption of machine learning results in 2x more data-driven decisions, 5x faster decision-making, and 3x faster execution? 1 Learn how to implement the latest machine learning and artificial intelligence technology with courses on Vertex AI, BigQuery, TensorFlow, and more. Boost your AI skills to take your career to the next level or to prepare for a role in machine learning or software development.
2022-12-01T00:00:00
https://cloud.google.com/learn/training/machinelearning-ai
[ { "date": "2022/12/01", "position": 71, "query": "AI education" }, { "date": "2022/12/01", "position": 83, "query": "artificial intelligence education" }, { "date": "2023/03/01", "position": 68, "query": "AI education" }, { "date": "2023/06/01", "position": 63, "query": "AI education" }, { "date": "2023/09/01", "position": 65, "query": "AI education" }, { "date": "2023/11/01", "position": 67, "query": "AI education" }, { "date": "2023/11/01", "position": 76, "query": "artificial intelligence education" }, { "date": "2023/12/01", "position": 75, "query": "artificial intelligence education" }, { "date": "2024/01/01", "position": 66, "query": "AI education" }, { "date": "2024/03/01", "position": 66, "query": "AI education" }, { "date": "2024/05/01", "position": 67, "query": "AI education" }, { "date": "2024/07/01", "position": 68, "query": "AI education" }, { "date": "2024/09/01", "position": 80, "query": "artificial intelligence education" }, { "date": "2024/10/01", "position": 66, "query": "AI education" }, { "date": "2024/11/01", "position": 67, "query": "AI education" }, { "date": "2024/12/01", "position": 78, "query": "artificial intelligence education" }, { "date": "2025/02/01", "position": 97, "query": "artificial intelligence education" }, { "date": "2025/04/01", "position": 82, "query": "AI education" }, { "date": "2025/05/01", "position": 79, "query": "AI education" }, { "date": "2025/06/01", "position": 86, "query": "artificial intelligence education" } ]
Deep Learning Institute (DLI) Training and Certification - NVIDIA
NVIDIA Deep Learning Institute
https://www.nvidia.com
[]
Simplify AI development with NVIDIA AI Workbench on GPUs · API Catalog. Explore NVIDIA's AI ... NVIDIA Training Partner (Education Services) Services - AI.Lab.
NVIDIA set up a great virtual training environment, and we were taught directly by deep learning/CUDA experts, so our team could understand not only the concepts but also how to use the codes in the hands-on lab, which helped us understand the subject matter more deeply. The team enjoyed the class immensely.
2022-12-01T00:00:00
https://www.nvidia.com/en-us/training/
[ { "date": "2022/12/01", "position": 72, "query": "AI education" }, { "date": "2022/12/01", "position": 98, "query": "artificial intelligence education" }, { "date": "2023/03/01", "position": 71, "query": "AI education" }, { "date": "2023/06/01", "position": 74, "query": "AI education" }, { "date": "2023/09/01", "position": 73, "query": "AI education" }, { "date": "2023/11/01", "position": 77, "query": "AI education" }, { "date": "2023/11/01", "position": 87, "query": "artificial intelligence education" }, { "date": "2023/12/01", "position": 90, "query": "artificial intelligence education" }, { "date": "2024/01/01", "position": 75, "query": "AI education" }, { "date": "2024/03/01", "position": 76, "query": "AI education" }, { "date": "2024/05/01", "position": 77, "query": "AI education" }, { "date": "2024/07/01", "position": 74, "query": "AI education" }, { "date": "2024/10/01", "position": 75, "query": "AI education" }, { "date": "2024/11/01", "position": 74, "query": "AI education" }, { "date": "2024/12/01", "position": 93, "query": "artificial intelligence education" }, { "date": "2025/04/01", "position": 88, "query": "AI education" }, { "date": "2025/05/01", "position": 88, "query": "AI education" } ]
Synthesis Tutor
Synthesis Tutor
https://www.synthesis.com
[ "Chrisman Frank", "Bryce S Mom", "Wayne S Mom", "Ayden S Mom", "Claire S Mom", "Noah S Mom", "Anand S Dad", "Jane'S Mom", "-Noah S Mom", "Josh Dahn" ]
It's the educational AI that actually works, and I am excited to see my son ... An education worthy of SpaceX. In 2014, Elon Musk asked me to start an ...
In 2014, Elon Musk asked me to start an experimental school with him at SpaceX. The goal was to develop students who are enthralled by complexity and solving for the unknown. Synthesis offers the most innovative learning experiences from that school. Synthesis Tutor does what school was supposed to: Show your kids that they can learn anything, and that every "boring" topic is fascinating when taught well. Until recently, Synthesis was only available to a handful of SpaceX families. Today, our community is worldwide and growing quickly. Excited for you to be a part of it.
2022-12-01T00:00:00
https://www.synthesis.com/tutor
[ { "date": "2022/12/01", "position": 77, "query": "AI education" }, { "date": "2023/03/01", "position": 72, "query": "AI education" }, { "date": "2024/12/12", "position": 52, "query": "AI education" }, { "date": "2025/04/01", "position": 83, "query": "AI education" }, { "date": "2025/05/01", "position": 83, "query": "AI education" } ]
AI courses and tools from Google
AI courses and tools from Google
https://grow.google
[]
1 HOUR. Learn the basics of AI and how Google AI-powered tools can help ... Google for Education · Google Developers · Google.org · Learning with Google · About ...
Google AI Essentials is a good course for beginners. It is taught by AI experts at Google who are working to make the technology helpful for everyone. In under 10 hours, they’ll do more than teach you about AI — they’ll show you how to actually use it. You’ll get practical, hands-on experience augmenting your current and future work tasks with AI. Through videos, readings, and interactive exercises, you’ll learn how to use generative AI tools, create effective prompts, use AI responsibly, and select the right AI tools for your work needs. This self-paced course is accessible to everyone and is designed to fit into busy schedules. You don’t need any technical experience to begin, and the skills you’ll learn can be applied to a variety of roles across industries. After you complete the course, you’ll earn a certificate from Google to share with your network and potential employers.
2022-12-01T00:00:00
https://grow.google/ai/
[ { "date": "2022/12/01", "position": 78, "query": "AI education" }, { "date": "2023/02/01", "position": 98, "query": "AI skills gap" }, { "date": "2023/02/01", "position": 82, "query": "generative AI jobs" }, { "date": "2023/03/01", "position": 74, "query": "AI education" }, { "date": "2023/04/01", "position": 81, "query": "generative AI jobs" }, { "date": "2023/05/01", "position": 98, "query": "AI skills gap" }, { "date": "2023/05/01", "position": 83, "query": "generative AI jobs" }, { "date": "2023/06/01", "position": 99, "query": "AI skills gap" }, { "date": "2023/06/01", "position": 80, "query": "generative AI jobs" }, { "date": "2023/07/01", "position": 84, "query": "generative AI jobs" }, { "date": "2023/08/01", "position": 98, "query": "AI graphic design" }, { "date": "2023/08/01", "position": 99, "query": "AI skills gap" }, { "date": "2023/09/01", "position": 80, "query": "generative AI jobs" }, { "date": "2023/10/01", "position": 99, "query": "AI skills gap" }, { "date": "2023/10/01", "position": 87, "query": "generative AI jobs" }, { "date": "2023/11/01", "position": 97, "query": "artificial intelligence education" }, { "date": "2023/11/01", "position": 87, "query": "generative AI jobs" }, { "date": "2023/12/01", "position": 97, "query": "artificial intelligence education" }, { "date": "2023/12/01", "position": 82, "query": "generative AI jobs" }, { "date": "2024/01/01", "position": 95, "query": "AI graphic design" }, { "date": "2024/01/01", "position": 85, "query": "generative AI jobs" }, { "date": "2024/02/01", "position": 98, "query": "AI skills gap" }, { "date": "2024/02/01", "position": 85, "query": "generative AI jobs" }, { "date": "2024/03/01", "position": 84, "query": "generative AI jobs" }, { "date": "2024/04/27", "position": 50, "query": "AI graphic design" }, { "date": "2024/04/27", "position": 86, "query": "artificial intelligence business leaders" }, { "date": "2024/07/01", "position": 96, "query": "AI skills gap" }, { "date": "2024/07/01", "position": 84, "query": "generative AI jobs" }, { "date": "2024/08/01", "position": 84, "query": "generative AI jobs" }, { "date": "2024/09/01", "position": 98, "query": "AI skills gap" }, { "date": "2024/09/01", "position": 81, "query": "generative AI jobs" }, { "date": "2024/11/01", "position": 83, "query": "generative AI jobs" }, { "date": "2025/01/01", "position": 85, "query": "generative AI jobs" }, { "date": "2025/02/01", "position": 92, "query": "AI graphic design" }, { "date": "2025/02/01", "position": 91, "query": "AI skills gap" }, { "date": "2025/02/01", "position": 75, "query": "generative AI jobs" }, { "date": "2025/03/01", "position": 74, "query": "generative AI jobs" }, { "date": "2025/04/01", "position": 90, "query": "AI education" }, { "date": "2025/04/01", "position": 77, "query": "generative AI jobs" }, { "date": "2025/05/01", "position": 94, "query": "AI education" }, { "date": "2025/05/01", "position": 92, "query": "AI graphic design" }, { "date": "2025/06/01", "position": 92, "query": "AI graphic design" }, { "date": "2025/06/01", "position": 72, "query": "generative AI jobs" } ]
UW-Stevens Point: New undergraduate program blends technical ...
UW-Stevens Point: New undergraduate program blends technical and ethical education in AI
https://www.wispolitics.com
[ "Wisconsin Press Releases" ]
With the launch of the Bachelor of Science in Artificial Intelligence (AI) this fall, the program seeks to blend technical programming education ...
As artificial intelligence continues to redefine industries and everyday life, the University of Wisconsin-Stevens Point is looking to take the next step towards preparing students for an evolving workforce. With the launch of the Bachelor of Science in Artificial Intelligence (AI) this fall, the program seeks to blend technical programming education with ethical awareness and practical application of AI. The program will prepare graduates to thrive in a fast-evolving field, where AI is not only a technology but a transformative tool reshaping everything from healthcare to manufacturing. “It’s a new undergraduate program in computing, so there’s quite a bit of overlap with our existing computer information systems (CIS) program, but then we are offering completely new courses in AI,” explains Associate Professor Tomi Heimonen. “We’re covering everything from deep learning and neural networks to AI for security and natural language processing.” Courses will introduce students to foundational and emerging areas within AI, such as machine learning, cloud environments and AI-driven cybersecurity. A strong focus on hands-on learning will ensure that theory is grounded in real-world application. A final capstone course will challenge students to build functional AI systems for local organizations. Leaders in the program are already teaming up with partners from Central Wisconsin for these projects. One launching this fall will involve a student-built chatbot designed to help a local agency’s customer support team access internal policy information quickly and accurately. “I think the hallmark of all our courses is that it’s not just theory,” said Heimonen. “There’s a pretty heavy application emphasis in all of them.” In addition to AI-specific courses, students will complete coursework in data analytics, programming and mathematics. While artificial intelligence is often misunderstood, the program aims to ground students in a practical, ethical and realistic understanding of what AI is and is not. “We’re not building terminators,” Heimonen said. “AI are systems that try to imitate human intelligence by taking in data, learning from it and then recommending actions or producing outcomes based on that data.” These systems are already influencing daily life, whether it’s Netflix recommending your next show, your bank flagging a suspicious transaction or ChatGPT helping draft an email. Looking forward, AI is expected to become even more integrated into everyday systems, such as smarter and more efficient supply chains, advanced healthcare diagnostics or manufacturing, which is prevalent in the Central Wisconsin region. The decision to launch the AI program came in response to both institutional direction and market demand. Backed by state workforce development funds, the department seized the opportunity to become a potential leader in AI education among regional universities. “There’s definitely a gap between the number of trained professionals and what the workforce needs,” Heimonen noted. He adds that UWSP saw a chance to be one of the few institutions in the state training students specifically to work with AI straight out of their undergraduate studies and delivering talent to the needs of Wisconsin employers. One of the program’s defining features is its attention to the ethical implications of AI development. Courses will emphasize responsible use, social responsibility and the necessity of human oversight. “There has to be some guardrails,” said Heimonen.
“If we’re going to trust AI to make decisions, we need to make sure those decisions are accurate, fair and conveyed in a way that can be explained to the user.” Students will be challenged to critically evaluate whether AI is the right solution for a given problem and to ask tough questions about bias, misuse and unintended consequences. When it comes to career preparation, the AI degree positions students well for roles such as software developers, computer and information systems managers and computer systems analysts. While “AI Developer” may not yet be a common job title, employers are increasingly seeking candidates who understand AI tools and methodologies. Graduates will leave with a robust knowledge of ethics, technical proficiency and the ability to communicate complex AI systems to both technical and non-technical audiences. “This is a step into the unknown, but to now actually have a real AI program is something we’re excited about,” said Heimonen. For more information, visit uwsp.edu/programs/degree/artificial-intelligence/.
2025-07-08T00:00:00
2025/07/08
https://www.wispolitics.com/2025/uw-stevens-point-new-undergraduate-program-blends-technical-and-ethical-education-in-ai/
[ { "date": "2022/12/01", "position": 81, "query": "AI education" }, { "date": "2023/03/01", "position": 75, "query": "AI education" }, { "date": "2023/06/01", "position": 94, "query": "AI education" }, { "date": "2023/09/01", "position": 89, "query": "AI education" }, { "date": "2023/11/01", "position": 97, "query": "AI education" }, { "date": "2024/03/01", "position": 97, "query": "AI education" }, { "date": "2024/05/01", "position": 98, "query": "AI education" }, { "date": "2024/07/01", "position": 97, "query": "AI education" }, { "date": "2024/10/01", "position": 98, "query": "AI education" }, { "date": "2024/11/01", "position": 96, "query": "AI education" } ]
Notion for Students
Notion for Students
https://www.notion.com
[ "Alpha Theta Delta", "Member", "University Of Michigan", "Ann Arbor", "Cyclone Robosub", "Uc Santa Cruz", "App Development Club", "Oregon State University", "University Of Waterloo", "Uc Berkeley" ]
Notion AI. Built ... You can continue using your free Plus plan for education as long as it continues to be associated with your educational email address.
The Education Plus Plan for individual students is a free Plus Plan for a one-member workspace. It’s available for students signed in with their education institution's email, per the World Higher Education Database (WHED). You may also be eligible for 50% off Notion AI if you’re on this plan. Offer terms and conditions apply. With the Plus plan, students can organize plans, tasks and notes across school and life with unlimited pages & blocks, file uploads, more features to customize and publish your site, and 30-day version history. Just sign in with your school email address, scroll down on the in-product Billing page, and select Get free Education plan. Learn more about our student offers here: https://www.notion.so/help/notion-for-education
2022-12-01T00:00:00
https://www.notion.com/product/notion-for-education
[ { "date": "2022/12/01", "position": 82, "query": "AI education" }, { "date": "2023/03/01", "position": 78, "query": "AI education" }, { "date": "2023/06/01", "position": 79, "query": "AI education" }, { "date": "2023/09/01", "position": 78, "query": "AI education" }, { "date": "2023/11/01", "position": 81, "query": "AI education" }, { "date": "2024/01/01", "position": 80, "query": "AI education" }, { "date": "2024/03/01", "position": 81, "query": "AI education" }, { "date": "2024/05/01", "position": 84, "query": "AI education" }, { "date": "2024/07/01", "position": 81, "query": "AI education" }, { "date": "2024/10/01", "position": 82, "query": "AI education" }, { "date": "2024/11/01", "position": 81, "query": "AI education" }, { "date": "2025/04/01", "position": 94, "query": "AI education" }, { "date": "2025/05/01", "position": 95, "query": "AI education" } ]
Machine Learning Crash Course - Google for Developers
Google for Developers
https://developers.google.com
[]
We're delighted to announce the launch of a refreshed version of MLCC that covers recent advances in AI, with an increased focus on interactive learning.
What's new in Machine Learning Crash Course? Since 2018, millions of people worldwide have relied on Machine Learning Crash Course to learn how machine learning works, and how machine learning can work for them. We're delighted to announce the launch of a refreshed version of MLCC that covers recent advances in AI, with an increased focus on interactive learning. Watch this video to learn more about the new-and-improved MLCC.
2022-12-01T00:00:00
https://developers.google.com/machine-learning/crash-course
[ { "date": "2022/12/01", "position": 83, "query": "AI education" }, { "date": "2022/12/01", "position": 84, "query": "artificial intelligence education" }, { "date": "2023/03/01", "position": 79, "query": "AI education" }, { "date": "2023/06/01", "position": 85, "query": "AI education" }, { "date": "2023/09/01", "position": 86, "query": "AI education" }, { "date": "2023/11/01", "position": 87, "query": "AI education" }, { "date": "2023/11/01", "position": 77, "query": "artificial intelligence education" }, { "date": "2023/12/01", "position": 78, "query": "artificial intelligence education" }, { "date": "2024/01/01", "position": 82, "query": "AI education" }, { "date": "2024/03/01", "position": 88, "query": "AI education" }, { "date": "2024/05/01", "position": 86, "query": "AI education" }, { "date": "2024/07/01", "position": 82, "query": "AI education" }, { "date": "2024/08/13", "position": 66, "query": "AI education" }, { "date": "2024/09/01", "position": 87, "query": "artificial intelligence education" }, { "date": "2024/10/01", "position": 84, "query": "AI education" }, { "date": "2024/11/01", "position": 88, "query": "AI education" }, { "date": "2024/12/01", "position": 79, "query": "artificial intelligence education" }, { "date": "2025/04/01", "position": 96, "query": "AI education" }, { "date": "2025/05/01", "position": 97, "query": "AI education" }, { "date": "2025/06/01", "position": 89, "query": "artificial intelligence education" } ]
The Next Era of Assessment: A Global Review of AI in Assessment ...
The Next Era of Assessment: A Global Review of AI in Assessment Design
https://www.digitaleducationcouncil.com
[]
Developed by the Digital Education Council and Pearson, this global report presents the first comprehensive mapping of AI-integrated assessment practices in ...
Developed by the Digital Education Council and Pearson, this global report presents the first comprehensive mapping of AI-integrated assessment practices in higher education. Drawing on 101 global case studies, it offers institutions a clear and actionable framework to rethink and redesign assessment in the age of AI, featuring: 14 practical methodologies of AI-integrated assessment design; a step-by-step guide to support assessment redesign at scale; a dual-priority approach that ensures students develop both foundational human skills and AI fluency; and AI-resistant as a baseline design principle to uphold assessment validity and integrity. The Next Era of Assessment: A Global Review of AI in Assessment Design is now available for download and supports higher education educators in designing rigorous, future-ready assessments grounded in real institutional practice.
2022-12-01T00:00:00
https://www.digitaleducationcouncil.com/post/the-next-era-of-assessment-a-global-review-of-ai-in-assessment-design
[ { "date": "2022/12/01", "position": 84, "query": "AI education" }, { "date": "2022/12/01", "position": 92, "query": "artificial intelligence education" }, { "date": "2023/03/01", "position": 80, "query": "AI education" }, { "date": "2023/06/01", "position": 80, "query": "AI education" }, { "date": "2023/09/01", "position": 80, "query": "AI education" }, { "date": "2023/11/01", "position": 83, "query": "AI education" }, { "date": "2023/11/01", "position": 83, "query": "artificial intelligence education" }, { "date": "2023/12/01", "position": 83, "query": "artificial intelligence education" }, { "date": "2024/01/01", "position": 84, "query": "AI education" }, { "date": "2024/03/01", "position": 83, "query": "AI education" }, { "date": "2024/05/01", "position": 85, "query": "AI education" }, { "date": "2024/07/01", "position": 83, "query": "AI education" }, { "date": "2024/09/01", "position": 91, "query": "artificial intelligence education" }, { "date": "2024/10/01", "position": 85, "query": "AI education" }, { "date": "2024/11/01", "position": 83, "query": "AI education" }, { "date": "2024/12/01", "position": 85, "query": "artificial intelligence education" } ]
Enhancing AI literacy in undergraduate pre-medical education ...
Enhancing AI literacy in undergraduate pre-medical education through student associations: an educational intervention - BMC Medical Education
https://bmcmededuc.biomedcentral.com
[ "Hopson", "Department Of Physics", "Astronomy", "Brigham Young University", "Provo", "Mildon", "Hassard", "Kubalek", "Laverty", "Urie" ]
However, a gap exists in formal AI education for pre-medical students. This study evaluates the effectiveness of the AI in Medicine Association ...
Study design This study employed a quasi-experimental, pretest-posttest control group design to evaluate the educational impact of a student-led AI curriculum. Participants were assigned to either an intervention group, which completed the curriculum through the AI in Medicine Association (AIM), or a control group, which did not receive the intervention. Outcomes were assessed through knowledge gains in AI and pathology-related content. Participants and recruitment Participants were undergraduate students at BYU pursuing pre-medical or other pre-health career paths. Cohort participants were recruited via flyers, campus bulletins, and an in-person booth at BYU’s student center. Control group participants were recruited through in-person outreach in large, pre-med-focused courses. Inclusion criteria required current enrollment in pre-medical coursework and interest in healthcare careers. Students with prior formal AI training beyond the scope of the AIM curriculum were excluded. Intervention The AIM curriculum consisted of four weekly educational sessions incorporating principles of active learning and multimodal instruction, as recommended in the literature [10,11,12,13]. The curriculum integrated foundational AI concepts, ethical frameworks, hands-on engagement with histological image analysis, and exposure to current AI research. Specific topics included: 1. Introduction to AI in healthcare. 2. Ethical implications and responsible use of AI. 3. Data preprocessing techniques. 4. Model development, training, and evaluation. Instructional materials were designed by AIM student officers and reviewed by a senior faculty member to ensure academic rigor and pedagogical clarity. A key project involved identifying cancerous tissue and histological features in whole-slide prostate images, fostering practical engagement with AI in pathology [20]. Assessment design The primary assessment tool consisted of a structured survey administered both pre- and post-intervention. It began with six demographic/background questions (e.g., prior AI exposure, graduation year, attitudes toward AI) followed by ten knowledge items divided into two domains: AI and Data Science (5 questions). Pathology and AIM Projects (5 questions). Item formats included multiple-choice and short-answer questions. A complete version of the assessment is available in Supplementary Materials. Data collection Both the intervention and control groups completed the same assessment before and after a four-week period. Assessments were conducted in supervised or distraction-free environments. Only cohort participants who completed all four AIM sessions were eligible for the posttest. Control participants received their posttest four weeks after the pretest, with no exposure to the intervention. To reduce response bias, participants were assured that their responses would remain anonymous, were not graded for performance, and could omit questions if desired. Identifying information was removed from scores once collected, and only used to contact students if needed, to link pre- and post-tests to the same student, and to collect demographic information that was not collected in the survey such as the sex of the participant. Participation was voluntary, and all students were informed they could withdraw at any time. Those who opted out of the study were still allowed to engage fully in AIM activities. Data were collected over two academic semesters and included 15 intervention participants and 16 controls. 
Grading and scoring Responses were graded by a faculty member using a structured rubric to ensure consistency and minimize bias. Partial credit was awarded for partially correct answers, with specific criteria delineated in the rubric (see Supplementary Materials). Statistical analysis Descriptive statistics (means, standard deviations, medians, frequencies, and percentages) were used to summarize demographic variables and baseline characteristics. Between-group comparisons at baseline were assessed using chi-square tests (categorical variables) and independent t-tests or Mann-Whitney U tests (continuous variables). To assess knowledge gains, within-group changes in pretest and posttest scores were analyzed using paired t-tests (or Wilcoxon signed-rank tests if data were non-normally distributed). Between-group comparisons of posttest scores were analyzed using independent t-tests or Mann-Whitney U tests. Effect sizes were calculated using Cohen’s d. Additionally, an analysis of covariance was conducted to compare posttest scores between groups while controlling for pretest scores and potential confounders (e.g., prior AI experience, pre-med status). Exploratory analyses included Pearson or Spearman correlations between prior AI experience and knowledge gains. A multiple regression model was used to identify predictors of improvement, with group assignment, prior AI experience, and baseline attitudes toward AI entered as independent variables. Data visualization To visualize score distributions and changes, violin plots were generated for pre- and posttest scores by group. Bar plots with error bars depicted mean score changes and standard deviations across AI, pathology, and combined domains. Scatter plots explored associations between prior experience or attitudes and test performance. All figures were generated using Python packages seaborn and matplotlib. Ethical considerations The study received approval from the Brigham Young University Institutional Review Board. All participants provided informed consent, and data confidentiality was maintained via secure storage and anonymized analysis (see Supplementary Material).
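The statistical workflow and figures described above can be sketched in a few lines of Python. The following is a minimal illustration only, assuming pandas, SciPy, seaborn, and matplotlib; the scores, effect sizes, and column names are invented for demonstration and are not the study's data.

import numpy as np
import pandas as pd
from scipy import stats
import seaborn as sns
import matplotlib.pyplot as plt

# Hypothetical pre/post scores (0-10 scale) for 15 intervention and 16 control students.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "group": ["intervention"] * 15 + ["control"] * 16,
    "pre": rng.normal(5, 1.5, 31).round(1),
})
df["post"] = (df["pre"] + np.where(df["group"] == "intervention", 2.0, 0.3)).round(1)

# Within-group change: paired t-test of post vs. pre scores for each group.
for name, g in df.groupby("group"):
    t, p = stats.ttest_rel(g["post"], g["pre"])
    print(f"{name}: paired t = {t:.2f}, p = {p:.3f}")

# Between-group comparison of post-test scores, plus Cohen's d from the pooled SD.
interv = df.loc[df["group"] == "intervention", "post"]
ctrl = df.loc[df["group"] == "control", "post"]
t, p = stats.ttest_ind(interv, ctrl)
pooled_sd = np.sqrt((interv.var(ddof=1) * (len(interv) - 1) + ctrl.var(ddof=1) * (len(ctrl) - 1))
                    / (len(interv) + len(ctrl) - 2))
d = (interv.mean() - ctrl.mean()) / pooled_sd
print(f"post-test: t = {t:.2f}, p = {p:.3f}, Cohen's d = {d:.2f}")

# Violin plot of score distributions by group and time point, in the spirit of the paper's figures.
long_df = df.melt(id_vars="group", value_vars=["pre", "post"], var_name="time", value_name="score")
sns.violinplot(data=long_df, x="group", y="score", hue="time", split=True)
plt.tight_layout()
plt.savefig("score_distributions.png")

A non-parametric variant would swap in stats.wilcoxon for the paired comparison and stats.mannwhitneyu for the between-group comparison, as the paper does when scores are not normally distributed.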
2025-12-14T00:00:00
2025/12/14
https://bmcmededuc.biomedcentral.com/articles/10.1186/s12909-025-07556-2
[ { "date": "2022/12/01", "position": 86, "query": "AI education" }, { "date": "2023/03/01", "position": 83, "query": "AI education" }, { "date": "2023/06/01", "position": 84, "query": "AI education" }, { "date": "2023/09/01", "position": 85, "query": "AI education" }, { "date": "2023/11/01", "position": 86, "query": "AI education" }, { "date": "2024/01/01", "position": 89, "query": "AI education" }, { "date": "2024/03/01", "position": 87, "query": "AI education" }, { "date": "2024/05/01", "position": 90, "query": "AI education" }, { "date": "2024/07/01", "position": 88, "query": "AI education" }, { "date": "2024/10/01", "position": 88, "query": "AI education" }, { "date": "2024/11/01", "position": 87, "query": "AI education" } ]
How Can AI Help Students With Reading Comprehension? - GovTech
How Can AI Help Students With Reading Comprehension?
https://www.govtech.com
[ "Jennifer Vilcarino", "Education Week", "Bethesda" ]
This question was posed by an English/language arts teacher and educational consultant to a group of educators at a session at the ISTELive 25 + ...
HOW THIS TEACHER USED AN AI IMAGE GENERATOR TO WORK ON READING SKILLS (TNS) — What barriers do students face when asked to read text and show their comprehension of it? This question was posed by an English/language arts teacher and educational consultant to a group of educators at a session at the ISTELive 25 + ASCD Annual Conference 25 in San Antonio, held June 29 to July 2. Some of the common responses among the audience of mostly teachers included limited vocabulary, boredom, and difficulty decoding. The session — led by Jessica Pack, a sixth-grade language arts teacher at James Workman Middle School in Riverside County, Calif. — made the case that smart, strategic use of artificial intelligence tools could help boost reading skills. (However, it's important to note that many educators say AI tools do the exact opposite — they stifle creativity, lead to plagiarism, and give students an easy way out of tackling challenging assignments.) Improving reading skills is one of the top priorities in schools across the country because of data showing that those skills are declining. The 2024 National Assessment of Educational Progress has shown declining reading scores for fourth and eighth graders for years. Those trends have prompted growing interest in the "science of reading," the use of proven, evidence-based methods for teaching reading that include teaching the foundations of language in a structured progression. Pack has been using AI tools to bolster reading comprehension lessons, she said during the ISTE session, titled "Enhancing Literacy Through Creativity Using AI-Powered Tools." "A lot of folks are landing on AI as a purely teacher-centered type of tool, so what we are going to do today is encourage a bit of student-centered use," Pack told the audience of educators at the conference. Pack said her sixth-grade students from last school year had an average reading level of about second or third grade. To address this learning gap, students were first asked to examine paragraphs from a book and generate keywords from that text. They would use those keywords to create a prompt for an AI image generator. The students would then work in small groups to evaluate the image to see if it was missing something that was important to the text from the book. Finally, they would submit additional AI prompts to get the image to more accurately represent the text. "This is huge. This is the metacognitive moment where they're demonstrating their comprehension — their full comprehension of the text — by being able to connect it back to whatever it is they were able to generate," said Pack. Research shows that asking students to monitor and correct their own understanding of text as they read can boost their comprehension — though using AI for this purpose hasn't been studied extensively. Pack emphasized that it is important for students to be taught to cite the images they create as generative AI images, because that instills the value of citing sources for content they create. "They need to be aware of what content is AI-generated, and it starts building that digital citizenship foundation for citation of AI and being aware that not all the things we are creating are actual factual things," she said.
2025-07-08T00:00:00
2025/07/08
https://www.govtech.com/education/k-12/how-can-ai-help-students-with-reading-comprehension
[ { "date": "2022/12/01", "position": 87, "query": "AI education" }, { "date": "2023/03/01", "position": 85, "query": "AI education" } ]
Learn More About Google Workspace with Gemini
Learn More About Google Workspace with Gemini
https://edu.google.com
[]
Gemini for Education is an AI-powered assistant that can help you save time, create captivating learning experiences, and inspire fresh ideas – all in a ...
Research and brainstorm new ideas for professional development, lesson plans, IT security policies, and more by chatting with the Gemini app at gemini.google.com. Here, an IT admin uses the Gemini app to get ideas for security settings to implement in Google Workspace Education Plus.
2022-12-01T00:00:00
https://edu.google.com/intl/ALL_us/workspace-for-education/add-ons/google-workspace-with-gemini/
[ { "date": "2022/12/01", "position": 88, "query": "AI education" }, { "date": "2023/03/01", "position": 87, "query": "AI education" }, { "date": "2023/06/01", "position": 88, "query": "AI education" }, { "date": "2023/09/01", "position": 88, "query": "AI education" }, { "date": "2023/11/01", "position": 91, "query": "AI education" }, { "date": "2024/01/01", "position": 93, "query": "AI education" }, { "date": "2024/03/01", "position": 93, "query": "AI education" }, { "date": "2024/05/01", "position": 92, "query": "AI education" }, { "date": "2024/07/01", "position": 92, "query": "AI education" }, { "date": "2024/10/01", "position": 93, "query": "AI education" }, { "date": "2024/11/01", "position": 92, "query": "AI education" } ]
UMD Broadens Access to AI Education With New Course…
UMD Broadens Access to AI Education With New Course…
https://today.umd.edu
[]
With the launch of the free online course “Artificial Intelligence in Government,” the University of Maryland is broadening its portfolio of ...
The University of Maryland is expanding its suite of non-credit artificial intelligence (AI) offerings with the launch of a new, free online course, “Artificial Intelligence in Government.” Offered on Coursera, the self-paced course empowers public sector professionals to explore how AI technologies can enhance government operations. Taught by Joe Mariani, senior research manager at Deloitte Consulting’s Center for Government Insights, the new offering is part of UMD’s growing catalog of AI courses designed for current students, working professionals and lifelong learners. “These courses aren’t just about AI or even technology,” said Matt Nessan, director of programs, marketing, and communications in Extended Studies, the unit that manages UMD’s non-credit options. “They are about preparing students and the Maryland workforce to be agile, curious and capable in a world where knowledge is rapidly evolving." UMD’s other non-credit AI courses across Coursera, edX and Open Learning include: “AI in Digital Marketing”: tailored for digital marketers and business leaders seeking to use AI for customer engagement, personalized content strategies, and campaign optimization. “AI Empowerment for Small Businesses”: a practical guide for entrepreneurs and small business owners looking to enhance business planning, improve customer experiences, and harness predictive analytics. “Artificial Intelligence and Career Empowerment”: geared toward early- to mid-career professionals exploring transitions into AI-related roles or seeking new career directions. Two new courses with an AI focus are currently under development. “Introduction to Using Generative AI for Accounting and Finance Professionals” will provide practical training in tools like natural language processing and computer vision. “AI for Business Leaders” will offer strategic insights into how AI can be applied across domains such as supply chain, healthcare, finance, and marketing. All courses are available for free, with an option to pay a fee to receive a professional certificate. “Courses like these lower the barrier to AI education, equipping more students—regardless of major or background—with the skills they need to thrive in an increasingly AI-driven world,” Nessan said.
2022-12-01T00:00:00
https://today.umd.edu/briefs/umd-broadens-access-to-ai-education-with-new-course-on-government-applications
[ { "date": "2022/12/01", "position": 89, "query": "AI education" }, { "date": "2023/03/01", "position": 82, "query": "AI education" } ]
Professional Certificate Program in Machine Learning & Artificial ...
Professional Certificate Program in Machine Learning & Artificial Intelligence
https://professional.mit.edu
[]
MIT Professional Education is pleased to offer the Professional Certificate Program in Machine Learning & Artificial Intelligence. MIT has played a leading ...
Awarded upon successful completion of 16 or more days of qualifying Short Programs courses in Professional Education, this certificate equips you with the best practices and actionable knowledge needed to put you and your organization at the forefront of the AI revolution. Why Study Machine Learning and Artificial Intelligence at MIT? Machine learning is more than just algorithms: it requires math, statistics, data analysis, computer science, and programming skills. MIT is a hub of research and practice in all of these disciplines and our Professional Certificate Program faculty come from areas with a deep focus in machine learning and AI, such as the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL); the MIT Institute for Data, Systems, and Society (IDSS); and the Laboratory for Information and Decision Systems (LIDS). The program allows individuals to interact with all these key disciplines. Leading MIT faculty experts will guide participants through the latest breakthroughs in research, cutting-edge technologies, and best practices used for building effective AI systems. The program provides a well-rounded foundation of knowledge that can be put to immediate use to help people and organizations advance cognitive technology. Core Courses We recommend taking the two required courses first. However, if there are elective courses that you have the background and education to begin with, you are welcome to do so. Please note that whether you begin with core or elective courses you are required to complete all requirements within 36 months. Machine Learning for Big Data and Text Processing: Foundations — $2,500 (2 days). Ensures those who are just getting started in the field know the core mathematical concepts and theories relevant to machine learning. You'll walk away with a solid understanding of probability, statistics, classification, regression, optimization. Machine Learning for Big Data and Text Processing: Advanced — $3,500 (3 days). See how the latest tools, techniques and algorithms driving modern and predictive analysis can be applied in different fields: what kinds of problems they can/cannot solve and what issues are likely to arise in practical applications. Note: MIT Professional Education's Short Programs is committed to providing a diverse and updated portfolio of Short Programs courses and reserves the right to change these course selections in future years. Electives Note: MIT Professional Education's Short Programs is committed to providing a diverse and updated portfolio of Short Programs courses and reserves the right to change these course selections in future years.
2022-12-01T00:00:00
https://professional.mit.edu/course-catalog/professional-certificate-program-machine-learning-artificial-intelligence-0
[ { "date": "2022/12/01", "position": 90, "query": "AI education" }, { "date": "2022/12/01", "position": 88, "query": "artificial intelligence education" }, { "date": "2023/03/01", "position": 86, "query": "AI education" } ]
Notion for education – Notion Help Center
Notion for education – Notion Help Center
https://www.notion.com
[]
It's available for students or teachers signed in with their education institution's email per the WHED. You may also be eligible for 50% off Notion AI if you' ...
Upgrade to the Plus Plan for education What's included in the Plus Plan for education Having issues upgrading to the Plus Plan for education? Try the solutions in this article → There are two types of free Plus Plans for education: The Education Plus Plan for individual students and teachers is a free Plus Plan for a one-member workspace. It’s available for students or teachers signed in with their education institution's email per the WHED. You may also be eligible for 50% off Notion AI if you’re on this plan. You can check your eligibility by going to Settings → Explore plans and selecting Add to plan next to Notion AI. Offer terms and conditions apply. Our student org offer is a free Plus Plan for a workspace with unlimited members. It’s available for verified student orgs who have submitted an application here → Check out more of the differences below: The Plus Plan for individual students and teachers includes one member per workspace, up to 100 guests per workspace, unlimited uploads, and edit history for any page going back 30 days; it may also be eligible for 50% off Notion AI (offer terms and conditions apply), an offer consisting of the following features: Chatting with Notion AI, Writing assistance, Notion AI-powered blocks, Translation, and Autofill in databases. The Plus Plan for student organizations includes unlimited members per workspace, up to 100 guests per workspace, unlimited uploads, and edit history for any page going back 30 days. To see a complete list of features that are included in the Plus Plan or to learn about other plans offered by Notion, visit our pricing page → Individual students and teachers Individual students and teachers at accredited colleges and universities can upgrade to the Plus Plan for free at any time. Before you upgrade, make sure of the following: You’re signed in with your education institution email. If you’re using a personal email address with your Notion account, you can change it to your educational email address following the instructions here → Your school is an accredited institution recognized by and listed in the World Higher Education Database here → You’re on the Free Plan. If you’re currently paying for the Plus Plan, you can still take advantage of this offer. Downgrade to the Free Plan first by going to Settings in your sidebar → Billing → Change plan. You only have one member in your account. The Plus Plan for individual students and educators can only contain one member. Remove any additional workspace owners and members before upgrading. You're upgrading using Notion for desktop or Notion for web, not mobile. Once you’ve met the above requirements, follow these steps in the desktop or web app to upgrade to the free Plus Plan: Go to Settings in your sidebar. Select Upgrade plan. Select Get free education plan. If your school is recognized by our system, you’ll be upgraded automatically! If you satisfy all requirements above for access to the plan but don't see Get free education plan, please write into [email protected] to request your school domain get access. If your institution is recognized by and listed in the WHED, your domain will be allowlisted and receive access. Note: Unfortunately, you must have a valid educational email address to upgrade to the Plus Plan. We don’t accept student IDs or other documentation to verify a student or educator status at this time. Thank you for your understanding! Individual students and teachers may be eligible for 50% off Notion AI.
This offer includes the following features: Chatting with Notion AI Writing assistance Notion AI-powered blocks Translation Autofill in databases Offer terms and conditions apply. Anyone who's already subscribed to Notion AI won’t be eligible for this discount. Note: This Notion AI offer for individual students and teachers is only available for 12 months upon redemption, and provides access to the following features: Chatting with Notion AI Writing assistance Notion AI-powered blocks Translation Autofill in databases Notion AI in Notion Mail To take advantage of this offer: Make sure you’re signed into Notion with your educational email. If you haven’t already, follow the steps above to get on the free Plus Plan for individual students and teachers. Go to Settings in your sidebar → Explore plans . Next to Notion AI , select Add to plan and follow the prompts to subscribe. Student organizations Before you apply, make sure of the following: You're a student representing a student organization at an accredited institution recognized by and listed in the World Higher Education Database here → You're a member of the student organization's workspace. All members in your student organization’s workspace have the same domain as your valid educational email address. Your organization is recognized by your institution. If you want to upgrade to the Plus Plan for your student organization’s workspace, you can find more information and apply here. Only one application per organization will be approved. Once you’ve applied, our team will reach out as soon as we can to let you know if you’ve been approved. As a reminder: Individual students and teachers can upgrade to the Plus Plan for free, and can work with up to 100 guests in their workspace. Approved student organizations can upgrade to the Plus Plan for free, and can invite guests and members to their workspace for free. If you’re brand new to Notion, check out the articles here to learn the basics. You can also use our templates to build a foundation for your education workspace! To find our templates for students and teachers:
2022-12-01T00:00:00
https://www.notion.com/help/notion-for-education
[ { "date": "2022/12/01", "position": 91, "query": "AI education" }, { "date": "2023/03/01", "position": 88, "query": "AI education" } ]
AI Act | Shaping Europe's digital future - European Union
AI Act
https://digital-strategy.ec.europa.eu
[]
AI solutions used in education institutions, that may determine the access to education and course of someone's professional life (e.g. scoring of exams); AI ...
The AI Act is the first-ever legal framework on AI, which addresses the risks of AI and positions Europe to play a leading role globally. The AI Act (Regulation (EU) 2024/1689 laying down harmonised rules on artificial intelligence) is the first-ever comprehensive legal framework on AI worldwide. The aim of the rules is to foster trustworthy AI in Europe. The AI Act sets out a clear set of risk-based rules for AI developers and deployers regarding specific uses of AI. The AI Act is part of a wider package of policy measures to support the development of trustworthy AI, which also includes the AI Innovation Package, the launch of AI Factories and the Coordinated Plan on AI. Together, these measures guarantee safety, fundamental rights and human-centric AI, and strengthen uptake, investment and innovation in AI across the EU. To facilitate the transition to the new regulatory framework, the Commission has launched the AI Pact, a voluntary initiative that seeks to support the future implementation, engage with stakeholders and invite AI providers and deployers from Europe and beyond to comply with the key obligations of the AI Act ahead of time. Why do we need rules on AI? The AI Act ensures that Europeans can trust what AI has to offer. While most AI systems pose limited to no risk and can contribute to solving many societal challenges, certain AI systems create risks that we must address to avoid undesirable outcomes. For example, it is often not possible to find out why an AI system has made a decision or prediction and taken a particular action. So, it may become difficult to assess whether someone has been unfairly disadvantaged, such as in a hiring decision or in an application for a public benefit scheme. Although existing legislation provides some protection, it is insufficient to address the specific challenges AI systems may bring. A risk-based approach The AI Act defines 4 levels of risk for AI systems: Unacceptable risk All AI systems considered a clear threat to the safety, livelihoods and rights of people are banned. The AI Act prohibits eight practices, namely: harmful AI-based manipulation and deception; harmful AI-based exploitation of vulnerabilities; social scoring; individual criminal offence risk assessment or prediction; untargeted scraping of the internet or CCTV material to create or expand facial recognition databases; emotion recognition in workplaces and education institutions; biometric categorisation to deduce certain protected characteristics; and real-time remote biometric identification for law enforcement purposes in publicly accessible spaces. High risk AI use cases that can pose serious risks to health, safety or fundamental rights are classified as high-risk. These high-risk use-cases include: AI safety components in critical infrastructures (e.g. transport), the failure of which could put the life and health of citizens at risk; AI solutions used in education institutions, that may determine the access to education and course of someone’s professional life (e.g. scoring of exams); AI-based safety components of products (e.g. AI application in robot-assisted surgery); AI tools for employment, management of workers and access to self-employment (e.g. CV-sorting software for recruitment); certain AI use-cases utilised to give access to essential private and public services (e.g.
credit scoring denying citizens opportunity to obtain a loan) AI systems used for remote biometric identification, emotion recognition and biometric categorisation (e.g AI system to retroactively identify a shoplifter) AI use-cases in law enforcement that may interfere with people’s fundamental rights (e.g. evaluation of the reliability of evidence) AI use-cases in migration, asylum and border control management (e.g. automated examination of visa applications) AI solutions used in the administration of justice and democratic processes (e.g. AI solutions to prepare court rulings) High-risk AI systems are subject to strict obligations before they can be put on the market: adequate risk assessment and mitigation systems high-quality of the datasets feeding the system to minimise risks of discriminatory outcomes logging of activity to ensure traceability of results detailed documentation providing all information necessary on the system and its purpose for authorities to assess its compliance clear and adequate information to the deployer appropriate human oversight measures high level of robustness, cybersecurity and accuracy Limited risk This refers to the risks associated with a need for transparency around the use of AI. The AI Act introduces specific disclosure obligations to ensure that humans are informed when necessary to preserve trust. For instance, when using AI systems such as chatbots, humans should be made aware that they are interacting with a machine so they can take an informed decision. Moreover, providers of generative AI have to ensure that AI-generated content is identifiable. On top of that, certain AI-generated content should be clearly and visibly labelled, namely deep fakes and text published with the purpose to inform the public on matters of public interest. Minimal or no risk The AI Act does not introduce rules for AI that is deemed minimal or no risk. The vast majority of AI systems currently used in the EU fall into this category. This includes applications such as AI-enabled video games or spam filters. How does it all work in practice for providers of high-risk AI systems? How does it all work in practice for providers of high-risk AI systems? Once an AI system is on the market, authorities are in charge of market surveillance, deployers ensure human oversight and monitoring, and providers have a post-market monitoring system in place. Providers and deployers will also report serious incidents and malfunctioning. A solution for the trustworthy use of large AI models General-purpose AI models can perform a wide range of tasks and are becoming the basis for many AI systems in the EU. Some of these models could carry systemic risks if they are very capable or widely used. To ensure safe and trustworthy AI, the AI Act puts in place rules for providers of such models. This includes transparency and copyright-related rules. For models that may carry systemic risks, providers should assess and mitigate these risks. The AI Act rules on general-purpose AI will become effective in August 2025. The AI Office is facilitating the drawing-up of a Code of Practice to detail out these rules. The Code should represent a central tool for providers to demonstrate compliance with the AI Act, incorporating state-of-the-art practices. Governance and implementation The European AI Office and authorities of the Member States are responsible for implementing, supervising and enforcing the AI Act. 
The AI Board, the Scientific Panel and the Advisory Forum steer and advise the AI Act’s governance. Find out more details about the Governance and enforcement of the AI Act. Next steps The AI Act entered into force on 1 August 2024, and will be fully applicable 2 years later on 2 August 2026, with some exceptions:
2022-12-01T00:00:00
https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai
[ { "date": "2022/12/01", "position": 93, "query": "AI education" }, { "date": "2023/03/01", "position": 89, "query": "AI education" }, { "date": "2023/11/01", "position": 98, "query": "artificial intelligence education" }, { "date": "2023/12/01", "position": 98, "query": "artificial intelligence education" } ]
How GenAI Is Reshaping AI Course Content | 01 - Vocal Media
How GenAI Is Reshaping AI Course Content
https://vocal.media
[]
From curriculum updates to hands-on labs, GenAI is significantly influencing AI and machine learning education. If you're considering an AI ML Course in ...
Artificial Intelligence has always been at the forefront of technological evolution. But with the rise of Generative AI (GenAI), we’re witnessing a transformational shift—not just in how AI is used, but in how it’s taught. From curriculum updates to hands-on labs, GenAI is significantly influencing AI and machine learning education. If you’re considering an AI ML Course in Bangalore or elsewhere, understanding how GenAI is reshaping course content will help you choose a program that aligns with the future of this ever-evolving domain.

The Rise of Generative AI in Mainstream Education: from research labs to classroom essentials. Until a few years ago, Generative AI was a niche topic explored primarily in research environments. Fast forward to today, and it has become mainstream—thanks to technologies like ChatGPT, DALL·E, and other advanced generative models. As these tools revolutionize industries, educational institutions are racing to integrate GenAI into their core curriculum. Students are no longer just learning how AI models work—they are now expected to understand how these models create, adapt, and interact in real time. This new focus demands a restructured approach to AI education.

Evolving Learning Objectives in AI Courses: beyond algorithms and into creativity. Traditional AI and machine learning courses focused heavily on supervised learning, deep learning, and data preprocessing. While these are still crucial, GenAI introduces new dimensions—creativity, language modeling, and ethical considerations in content generation. Courses now emphasize not just model accuracy but also the quality of generated content, human-AI interaction, and understanding model hallucinations. This shift encourages a broader learning objective that combines hard technical skills with soft evaluative judgment.

Hands-On Labs with Real GenAI Tools: learning by building, not just reading. Modern Artificial Intelligence Course offerings in Bangalore now integrate practical modules where students build their own generative models or work with APIs like GPT-4 or Stable Diffusion. These labs allow learners to create chatbots, generate code, write articles, or design images using real-world datasets (a minimal code sketch of this kind of exercise appears directly after this article). Such practical exposure helps students move beyond theoretical knowledge, preparing them for real-time problem-solving in industries like healthcare, finance, and marketing, where GenAI is being actively deployed.

Curriculum Updates: New Modules and Topics (aligning content with industry demands). To keep pace with technological innovation, AI course designers are updating their syllabi. Some of the new modules now included are: Introduction to Large Language Models (LLMs); Prompt Engineering and Response Optimization; Text-to-Image and Text-to-Video Generation; Fine-tuning Pre-trained GenAI Models; and Responsible Use of GenAI and Bias Mitigation. These topics help learners understand the inner workings of generative models, moving from black-box usage to informed implementation.

Ethical Considerations and Responsible AI: teaching learners to ask the right questions. With great power comes great responsibility, especially in the world of GenAI. AI systems that can generate human-like text or media can also be misused for misinformation, deepfakes, or biased content. Therefore, ethical AI is no longer an optional chapter—it’s a core part of the learning journey. Courses now include dedicated sections on AI ethics, data privacy, transparency, and algorithmic bias. The goal is to prepare students not just to build powerful models, but to do so responsibly.

Personalized Learning with AI Tutors: AI shaping the way AI is taught. Interestingly, GenAI is not only being taught—it is also being used to teach. Many institutions now employ AI-powered virtual assistants and tutors to support learners. These GenAI systems can answer coding questions, clarify lecture content, or even help students debug their models. Such AI-driven educational tools make learning more interactive and tailored. Especially in fast-paced cities like Bangalore, where professionals often juggle jobs and classes, these tools make AI ML Course in Bangalore programs more accessible and learner-friendly.

Industry Projects Focused on GenAI: preparing students for real-world applications. As the corporate world embraces GenAI, AI education must mirror that reality. Institutions are increasingly tying up with industry partners to offer capstone projects involving real generative AI problems—such as content automation, smart customer service, or personalized marketing. These projects serve as crucial bridges between classroom learning and job readiness. Students are encouraged to think about scalability, integration, and ethical deployment while building their projects, ensuring they’re equipped for roles in tomorrow’s AI-powered workplaces.

Future-Proofing AI Careers Through GenAI Education: staying relevant in a rapidly changing field. AI is not a static field; it evolves rapidly. By integrating GenAI into the curriculum, educators help future-proof their students. Courses that once focused primarily on foundational ML algorithms now teach adaptability and ongoing learning. Whether you’re a newcomer or a working professional enrolling in an Artificial Intelligence Course in Bangalore, you should look for programs that offer up-to-date content. GenAI is no longer the future—it’s the present. Courses that include it are not just teaching students how to build models; they’re teaching them how to innovate.

Conclusion: GenAI Is Redefining the AI Learning Experience. As GenAI continues to redefine how machines interact with language, vision, and creativity, it is simultaneously reshaping how humans learn about AI. From restructured curricula and advanced labs to ethical training and personalized instruction, GenAI is transforming every aspect of AI education. If you’re evaluating an AI ML Course in Bangalore or any global hub for technology education, it’s crucial to choose one that incorporates these forward-looking GenAI components. By doing so, you not only gain cutting-edge skills but also position yourself at the forefront of the next big wave in artificial intelligence. AI education is no longer just about understanding algorithms—it’s about shaping them, questioning them, and, increasingly, co-creating with them. The future of learning is generative, and it’s already here.
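To make the prompt-engineering and hands-on lab modules described above concrete, here is a minimal sketch of the kind of exercise such a course might assign: sending a vague prompt and an engineered prompt to a hosted chat model and comparing the outputs. The model name, prompt wording, and choice of the OpenAI Python SDK are illustrative assumptions, not details taken from the article.

```python
# Minimal prompt-engineering lab sketch (illustrative; model name and prompts are assumptions).
# Requires the OpenAI Python SDK (`pip install openai`) and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

def generate(prompt: str, system: str = "You are a concise teaching assistant.") -> str:
    """Send one prompt to a hosted chat model and return the text of its reply."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice; any chat-capable model would do
        messages=[
            {"role": "system", "content": system},
            {"role": "user", "content": prompt},
        ],
        temperature=0.7,
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    # A typical lab asks students to compare a vague prompt with an engineered one.
    vague = "Explain overfitting."
    engineered = (
        "Explain overfitting to a first-year data science student in three short paragraphs, "
        "include one concrete example, and end with one practice question."
    )
    for label, p in [("vague", vague), ("engineered", engineered)]:
        print(f"--- {label} prompt ---")
        print(generate(p))
```

The point of such an exercise is less the API call itself than the comparison: students see how constraints on audience, length, and structure change the generated output, which is the core skill the prompt-engineering module targets.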
2022-12-01T00:00:00
https://vocal.media/01/how-gen-ai-is-reshaping-ai-course-content
[ { "date": "2022/12/01", "position": 94, "query": "AI education" }, { "date": "2023/03/01", "position": 90, "query": "AI education" } ]
AI is running rampant on college campuses as professors ... - Fortune
AI is running rampant on college campuses as professors and students lean on artificial intelligence
https://fortune.com
[ "Beatrice Nolan" ]
Some students argue that this diminishes the value of their education and raises transparency and fairness issues. AI use is continuing to cause ...
AI use is continuing to cause trouble on college campuses, but this time it’s professors who are in the firing line. While it was once faculty at higher institutions who were up in arms about students’ use of AI, now some students are getting increasingly irked about their professors’ reliance on it. On forums like Rate My Professors, students have complained about lecturers’ overreliance on AI. The average cost of yearly tuition at a four-year institution in the U.S. is $17,709. If students study at an out-of-state public four-year institution, this average cost jumps to $28,445 per year, according to the research group Education Data. Some students argue that instructors’ use of AI diminishes the value of their education, especially when they’re paying high tuition fees to learn from human experts. However, others say it’s unfair that students can be penalized for AI use while professors fly largely under the radar. One student at Northeastern University even filed a formal complaint and demanded a tuition refund after discovering her professor was secretly using AI tools to generate notes. College professors told Fortune the use of AI for things like class preparation and grading has become “pervasive.” However, they say the problem lies not in the use of AI but rather in the faculty’s tendency to conceal just why and how they are using the technology.

Automated Grading

One of the AI uses that has become the most contentious is using the technology to grade students. Rob Anthony, part of the global faculty at Hult International Business School, told Fortune that automating grading was becoming “more and more pervasive” among professors. “Nobody really likes to grade. There’s a lot of it. It takes a long time. You’re not rewarded for it,” he said. “Students really care a lot about grades. Faculty don’t care very much.” That disconnect, combined with relatively loose institutional oversight of grading, has led faculty members to seek out faster ways to process student assessments. “Faculty, with or without AI, often just want to find a really fast way out of grades,” he said. “And there’s very little oversight…of how you grade.” However, if more and more professors simply decide to let AI tools make a judgment on their students’ work, Anthony is worried about a homogenized grading system where students increasingly get the same feedback from professors. “I’m seeing a lot of automated grading where every student is essentially getting the same feedback. It’s not tailored, it’s the same script,” he said. One college teaching assistant and full-time student, who asked to remain anonymous, told Fortune they were using ChatGPT to help grade dozens of student papers. The TA said the pressure of managing full-time studies, a job, and a mountain of student assignments forced them to look for a more efficient way to get through their workload. “I had to grade something between 70 to 90 papers. And that was a lot as a full-time student and as a full-time worker,” they said. “What I would do is go to ChatGPT…give it the grading rubric and what I consider to be a good example of a paper.” While they said they reviewed and edited the bot’s output, they added the process did feel morally murky. “In the moment when I’m feeling overworked and underslept…I’m just going to use artificial intelligence grading so I don’t read through 90 papers,” they said. “But after the fact, I did feel a little bad about it…it still had this sort of icky feeling.” They were particularly uneasy about how AI was making decisions that could impact a student’s academic future. “I am using artificial intelligence to grade someone’s paper,” they said. “And we don’t really know…how it comes up with these ratings or what it is basing itself off of.”

“Bots talking to bots”

Some of the frustration is due to the students’ use of AI, professors say. “The voice that’s going through your head is a faculty member that says: ‘If they’re using it to write it, I’m not going to waste my time reading.’ I’ve seen a lot of just bots talking to bots,” Anthony said. A recent study suggests that almost all students are using AI to help them with assignments to some degree. According to a survey conducted earlier this year by the U.K.’s Higher Education Policy Institute, in 2025, almost all students (92%) now use AI in some form, up from 66% in 2024. When ChatGPT was first released, many schools either outright banned or put restrictions on the use of AI. Students were some of the early adopters of the technology after its release in late 2022, quickly finding they could complete essays and assignments in seconds. The widespread use of the tech created distrust between students and teachers as professors struggled to identify and punish the use of AI in work. Now, many colleges are encouraging students to use the tech, albeit in an “appropriate way.” Some students still appear to be confused—or uninterested—about where that line is. The TA, who primarily taught and graded intro classes, told Fortune “about 20% to 30% of the students were using AI blatantly in terms of writing papers.” Some of the signs were obvious, like those who submitted papers that had nothing to do with the topic. Others submitted work that read more like unsourced opinion pieces than research. Instead of penalizing students for using AI directly, the TA said they docked marks for failing to include evidence or citations. They added that the papers written by AI were marked favorably when automated grading was used. They said when they submitted an obviously AI-written student paper into ChatGPT for grading, the bot graded it “really, really well.”

Lack of transparency

For Ron Martinez, the problem with professors’ use of AI is the lack of transparency. The former UC Berkeley lecturer and current assistant professor of English at the Federal University of Paraná (UFPR) told Fortune he’s up front with his students about how, when, and why he’s using the tech. “I think it’s really important for professors to have an honest conversation with students at the very beginning. For example, telling them I’m using AI to help me generate images for slides. But believe me, everything on here is my thoughts,” he said. He suggests being upfront about AI use, explaining how it benefits students, such as allowing more time for grading or helping create fairer assessments. In one recent example of helpful AI use, the university lecturer began using large language models like ChatGPT as a kind of “double marker” to cross-reference his grading decisions. “I started to think, I wonder what the large language model would say about this work if I fed it the exact same criteria that I’m using,” he said.
“And a few times, it flagged up students’ work that actually got…a higher mark than I had given.” In some cases, AI feedback forced Martinez to reflect on how unconscious bias may have shaped his original assessment. “For example, I noticed that one student who never talks about their ideas in class…I hadn’t given the student their due credit, simply because I was biased,” he said. Martinez added that the AI feedback led to him adjusting a number of grades, typically in the student’s favor. While some may despair that widespread use of AI may upend the entire concept of higher education, some professors are already starting to see the tech’s usage among students as a positive thing. Anthony told Fortune he had gone from feeling “this whole class was a waste of time” in early 2023 to “on balance, this is helping more than hurting.” “I was beginning to think this is just going to ruin education, we are just going to dumb down,” he said. “Now it seems to be on balance, helping more than hurting… It’s certainly a time saver, but it’s also helping students express themselves and come up with more interesting ideas, they’re tailoring it, and applying it.” “There’s still a temptation [to cheat]…but I think these students might realize that they really need the skills we’re teaching for later life,” he said.
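The rubric-and-exemplar workflow the TA describes above can be sketched roughly as follows. This is a reconstruction for illustration only, not code from anyone quoted in the article: the rubric text, exemplar placeholder, model name, and JSON output format are assumptions, and, as the article’s sources stress, any such output would still need human review before a grade is assigned.

```python
# Rough sketch of rubric-assisted grading with an LLM (illustrative only).
# Assumes the OpenAI Python SDK and an OPENAI_API_KEY; rubric and exemplar are placeholders.
import json
from openai import OpenAI

client = OpenAI()

RUBRIC = "Thesis clarity (0-5); use of evidence (0-5); organization (0-5); mechanics (0-5)."
EXEMPLAR = "<paste a paper the grader considers strong, as a calibration example>"

def draft_grade(paper_text: str) -> dict:
    """Ask the model for a provisional score and comments; a human reviews every draft grade."""
    prompt = (
        "You are assisting a teaching assistant. Grade the paper below using this rubric:\n"
        f"{RUBRIC}\n\n"
        "For calibration, here is an example of a strong paper:\n"
        f"{EXEMPLAR}\n\n"
        "Score each criterion and give two sentences of feedback. "
        "Reply as JSON with keys 'scores' and 'feedback'.\n\n"
        f"PAPER:\n{paper_text}"
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    text = response.choices[0].message.content
    try:
        return json.loads(text)  # works when the model returns clean JSON
    except json.JSONDecodeError:
        return {"scores": None, "feedback": text}  # fall back to raw text for human review
```

As the article notes, automated graders have been observed scoring obviously AI-written papers favorably, which is one more reason to treat output like this as a draft for a human marker rather than as a grade.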
2025-07-08T00:00:00
2025/07/08
https://fortune.com/2025/07/08/ai-higher-education-college-professors-students-chatgpt/
[ { "date": "2022/12/01", "position": 95, "query": "AI education" }, { "date": "2023/03/01", "position": 91, "query": "AI education" } ]
The AFT launches a national academy for AI in New York
The AFT launches a national academy for AI in New York
https://techinformed.com
[]
AFT launches $23m National Academy for AI in Education with Microsoft, OpenAI, Anthropic. July 8, 2025. In a move aimed at bringing artificial intelligence ...
In a move aimed at bringing artificial intelligence into the heart of US classrooms, the American Federation of Teachers (AFT) has launched the National Academy for AI Instruction, a $23 million joint initiative with Microsoft, OpenAI, Anthropic, and the United Federation of Teachers (UFT). The initiative, unveiled in New York City, aims to provide free, comprehensive AI training to all 1.8 million AFT members—starting with K-12 teachers—via a new physical and digital hub housed in Manhattan. It marks the first major partnership between a US teachers’ union and the technology sector on this scale, offering a national model as educators worldwide grapple with how to adapt to the rapid rise of AI in classrooms. The announcement comes amid growing global concern about the pace of AI adoption in education, with governments and unions in Canada, Australia, the UK, and Singapore all launching varying forms of AI literacy programs for teachers. In the UK, the Department for Education has funded pilot projects to embed AI tools into school leadership and lesson planning. In South Korea, the government has pledged to provide AI education in all schools by 2027. But the US initiative stands out for its union-led structure and its strong public-private coalition. “Educators are overwhelmed by the speed of change in AI,” said AFT President Randi Weingarten. “This academy puts them in the driver’s seat. It’s not about replacing teachers—it’s about giving them the tools and ethical frameworks to use AI to enhance what they already do best.” The academy will operate from a purpose-built centre in New York, with plans to scale nationwide. Within five years, it aims to train 400,000 educators—roughly 10 per cent of the US teaching workforce—and reach more than 7 million students. The curriculum will offer credentialed pathways and ongoing professional development, with both in-person and virtual components. Educators as architects of AI Brad Smith, vice chair and president of Microsoft, called the project “a model for responsible AI integration” in schools. “This partnership will not only help teachers learn to use AI—it gives them a voice in shaping how we build it,” he said. Microsoft and the AFT began laying the groundwork for the initiative two years ago in collaboration with the AFL-CIO, through summer symposiums aimed at exploring AI’s role in labour and education. OpenAI, whose technology underpins popular tools like ChatGPT, echoed the call for teachers to take the lead. “AI should be a coach, not a critic,” said Chris Lehane, chief global affairs officer. “This academy will ensure AI is being deployed to support the educator’s mission—not disrupt it.” Anthropic, known for its AI model Claude, said the partnership reflects the urgency of responsible AI adoption in schools. “We’re at a pivotal moment,” said co-founder Jack Clark. “How we teach AI now will shape the next generation’s relationship with it.” The curriculum will cover AI literacy, ethics, classroom applications, and workflow enhancements—from grading and lesson planning to generating differentiated instructional materials. Innovation labs will allow educators to co-design tools with AI developers, and feedback from classroom use will inform future updates. Lessons from the past For some teachers, the initiative is reminiscent of previous technological shifts. “It’s like when we first got word processors, but ten times bigger,” said Vincent Plato, a K–8 educator in New York City. 
“AI can become a teacher’s thought partner—especially when you’re lesson planning at midnight.” Marlee Katz, a teacher for deaf and hard-of-hearing students, noted how AI tools are already enhancing communication. “Sometimes you struggle to find the right tone or phrase—these tools don’t replace your voice, they help you express it better.” The initiative’s roots lie with Roy Bahat, a venture capitalist and AFT member who proposed the idea after helping facilitate early dialogues between Microsoft and the labour movement. Bahat, who leads Bloomberg Beta, will join the academy’s board.

A global challenge

The launch underscores growing awareness that educational AI cannot be left solely to the tech sector. The union-led approach offers a counterbalance to top-down government mandates or unregulated edtech rollouts seen elsewhere. Across Europe, AI guidelines for schools have largely been issued by education ministries with limited teacher consultation. In contrast, the AFT initiative positions educators not as adopters but as co-designers. “Too often, new technologies are weaponised against teachers,” said UFT President Michael Mulgrew. “This time, we’re building something that works for educators.”
2025-07-08T00:00:00
2025/07/08
https://techinformed.com/aft-launches-national-academy-for-ai-in-education/
[ { "date": "2022/12/01", "position": 96, "query": "AI education" }, { "date": "2023/03/01", "position": 94, "query": "AI education" }, { "date": "2023/06/01", "position": 98, "query": "AI education" }, { "date": "2023/09/01", "position": 98, "query": "AI education" } ]
Claude for Education | Partnering with Universities on ... - Anthropic
Claude for Education \ Anthropic
https://www.anthropic.com
[]
Claude helps universities maintain academic integrity while incorporating AI tools in education, backed by Anthropic's commitment to safety.
Build what you actually want to create while learning principles that transfer everywhere. Students across disciplines need coding literacy. Claude Code works like a scaled apprenticeship, pairing with students in development environments to show how professional programmers think through real problems.
2022-12-01T00:00:00
https://www.anthropic.com/education
[ { "date": "2022/12/01", "position": 97, "query": "AI education" }, { "date": "2023/03/01", "position": 93, "query": "AI education" }, { "date": "2023/06/01", "position": 53, "query": "AI education" }, { "date": "2023/09/01", "position": 57, "query": "AI education" }, { "date": "2023/11/01", "position": 55, "query": "AI education" }, { "date": "2024/01/01", "position": 54, "query": "AI education" }, { "date": "2024/03/01", "position": 54, "query": "AI education" }, { "date": "2024/05/01", "position": 53, "query": "AI education" }, { "date": "2024/07/01", "position": 53, "query": "AI education" }, { "date": "2024/10/01", "position": 53, "query": "AI education" }, { "date": "2024/11/01", "position": 52, "query": "AI education" } ]
Is ChatGPT killing higher education? - Vox
Is ChatGPT killing higher education?
https://www.vox.com
[ "Sean Illing" ]
AI is creating a cheating utopia. Universities don't know how to respond.
What’s the point of college if no one’s actually doing the work? It’s not a rhetorical question. More and more students are not doing the work. They’re offloading their essays, their homework, even their exams, to AI tools like ChatGPT or Claude. These are not just study aids. They’re doing everything. We’re living in a cheating utopia — and professors know it. It’s becoming increasingly common, and faculty are either too burned out or unsupported to do anything about it. And even if they wanted to do something, it’s not clear that there’s anything to be done at this point. So what are we doing here? James Walsh is a features writer for New York magazine’s Intelligencer and the author of the most unsettling piece I’ve read about the impact of AI on higher education. Walsh spent months talking to students and professors who are living through this moment, and what he found isn’t just a story about cheating. It’s a story about ambivalence and disillusionment and despair. A story about what happens when technology moves faster than our institutions can adapt. I invited Walsh onto The Gray Area to talk about what all of this means, not just for the future of college but the future of writing and thinking. As always, there’s much more in the full podcast, so listen and follow The Gray Area on Apple Podcasts, Spotify, Pandora, or wherever you find podcasts. New episodes drop every Monday. This interview has been edited for length and clarity. Let’s talk about how students are cheating today. How are they using these tools? What’s the process look like? It depends on the type of student, the type of class, the type of school you’re going to. Whether or not a student can get away with that is a different question, but there are plenty of students who are taking their prompt from their professor, copying and pasting it into ChatGPT, and saying, “I need a four to five-page essay,” and copying and pasting that essay without ever reading it. One of the funniest examples I came across is a number of professors are using this so-called Trojan horse method where they’re dropping non-sequiturs into their prompts. They mention broccoli or Dua Lipa, or they say something about Finland in the essay prompts just to see if people are copying and pasting the prompts into ChatGPT. If they are, ChatGPT or whatever LLM they’re using will say something random about broccoli or Dua Lipa. Unless you’re incredibly lazy, it takes just a little effort to cover that up. Every professor I spoke to said, “So many of my students are using AI and I know that so many more students are using it and I have no idea,” because it can essentially write 70 percent of your essay for you, and if you do that other 30 percent to cover all your tracks and make it your own, it can write you a pretty good essay. And there are these platforms, these AI detectors, and there’s a big debate about how effective they are. They will scan an essay and assign some grade, say a 70 percent chance that this is AI-generated. And that’s really just looking at the language and deciding whether or not that language is created by an LLM. But it doesn’t account for big ideas. It doesn’t catch the students who are using AI and saying, “What should I write this essay about?” And not doing the actual thinking themselves and then just writing. It’s like paint by numbers at that point. Did you find that students are relating very differently to all of this? What was the general vibe you got? It was a pretty wide perspective on AI. 
I spoke to a student at the University of Wisconsin who said, “I realized AI was a problem last fall, walking into the library and at least half of the students were using ChatGPT.” And it was at that moment that she started thinking about her classroom discussions and some of the essays she was reading. The one example she gave that really stuck with me was that she was taking some psych class, and they were talking about attachment theories. She was like, “Attachment theory is something that we should all be able to talk about [from] our own personal experiences. We all have our own attachment theory. We can talk about our relationships with our parents. That should be a great class discussion. And yet I’m sitting here in class and people are referencing studies that we haven’t even covered in class, and it just makes for a really boring and unfulfilling class.” That was the realization for her that something is really wrong. So there are students like that. And then there are students who feel like they have to use AI because if they’re not using AI, they’re at a disadvantage. Not only that, AI is going to be around no matter what for the rest of their lives. So they feel as if college, to some extent now, is about training them to use AI. What’s the general professor’s perspective on this? They seem to all share something pretty close to despair. Yes. Those are primarily the professors in writing-heavy classes or computer science classes. There were professors who I spoke to who actually were really bullish on AI. I spoke to one professor who doesn’t appear in the piece, but she is at UCLA and she teaches comparative literature, and used AI to create her entire textbook for this class this semester. And she says it’s the best class she’s ever had. So I think there are some people who are optimistic, [but] she was an outlier in terms of the professors I spoke to. For the most part, professors were, yes, in despair. They don’t know how to police AI usage. And even when they know an essay is AI-generated, the recourse there is really thorny. If you’re going to accuse a student of using AI, there’s no real good way to prove it. And students know this, so they can always deny, deny, deny. And the sheer volume of AI-generated essays or paragraphs is overwhelming. So that, just on the surface level, is extremely frustrating and has a lot of professors down. Now, if we zoom out and think also about education in general, this raises a lot of really uncomfortable questions for teachers and administrators about the value of each assignment and the value of the degree in general. How many professors do you think are now just having AI write their lectures? There’s been a little reporting on this. I don’t know how many are. I know that there are a lot of platforms that are advertising themselves or asking professors to use them more, not just to write lectures, but to grade papers, which of course, as I say in the piece, opens up the very real possibility that right now an AI is grading itself and offering comments on an essay that it wrote. And this is pretty widespread stuff. There are plenty of universities across the country offering teachers this technology. And students love to talk about catching their professors using AI. I’ve spoken to another couple of professors who are like, I’m nearing retirement, so it’s not my problem, and good luck figuring it out, younger generation. I just don’t think people outside of academia realize what a seismic change is coming. 
This is something that we’re all going to have to deal with professionally. And it’s happening much, much faster than anyone anticipated. I spoke with somebody who works on education at Anthropic, who said, “We expected students to be early adopters and use it a lot. We did not realize how many students would be using it and how often they would be using it.” Is it your sense that a lot of university administrators are incentivized to not look at this too closely, that it’s better for business to shove it aside? I do think there’s a vein of AI optimism among a certain type of person, a certain generation, who saw the tech boom and thought, I missed out on that wave, and now I want to adopt. I want to be part of this new wave, this future, this inevitable future that’s coming. They want to adopt the technology and aren’t really picking up on how dangerous it might be. I used to teach at a university. I still know a lot of people in that world. A lot of them tell me that they feel very much on their own with this, that the administrators are pretty much just saying, Hey, figure it out. And I think it’s revealing that university admins were quickly able, during Covid, for instance, to implement drastic institutional changes to respond to that, but they’re much more content to let the whole AI thing play out. I think they were super responsive to Covid because it was a threat to the bottom line. They needed to keep the operation running. AI, on the other hand, doesn’t threaten the bottom line in that way, or at least it doesn’t yet. AI is a massive, potentially extinction-level threat to the very idea of higher education, but they seem more comfortable with a degraded education as long as the tuition checks are still cashing. Do you think I’m being too harsh? I genuinely don’t think that’s too harsh. I think administrators may not fully appreciate the power of AI and exactly what’s happening in the classroom and how prevalent it is. I did speak with many professors who go to administrators or even just older teachers, TAs going to professors and saying, This is a problem. I spoke to one TA at a writing course at Iowa who went to his professor, and the professor said, “Just grade it like it was any other paper.” I think they’re just turning a blind eye to it. And that is one of the ways AI is exposing the rot underneath education. It’s this system that hasn’t been updated in forever. And in the case of the US higher ed system, it’s like, yeah, for a long time it’s been this transactional experience. You pay X amount of dollars, tens of thousands of dollars, and you get your degree. And what happens in between is not as important. The universities, in many cases, also have partnerships with AI companies, right? Right. And what you said about universities can also be said about AI companies. For the most part, these are companies or companies within nonprofits that are trying to capture customers. One of the more dystopian moments was when we were finishing this story, getting ready to completely close it, and I got a push alert that was like, “Google is letting parents know that they have created a chatbot for children under [13 years old].” And it was kind of a disturbing experience, but they are trying to capture these younger customers and build this loyalty. There’s been reporting from the Wall Street Journal on OpenAI and how they have been sitting on an AI that would be really, really effective at essentially watermarking their output. 
And they’ve been sitting on it, they have not released it, and you have to wonder why. And you have to imagine they know that students are using it, and in terms of building loyalty, an AI detector might not be the best thing for their brand.

This is a good time to ask the obligatory question: Are we sure we’re not just old people yelling at clouds here? People have always panicked about new technologies. Hell, Socrates panicked about the written word. How do we know this isn’t just another moral panic?

I think there’s a lot of different ways we could respond to that. It’s not a generational moral panic. This is a tool that’s available, and it’s available to us just as it’s available to students. Society and our culture will decide what the morals are. And that is changing, as is the definition of cheating. So who knows? It might be a moral panic today, and it won’t be in a year. However, I think somebody like Sam Altman, the CEO of OpenAI, is one of the people who said, “This is a calculator for words.” And I just don’t really understand how that is compatible with other statements he’s made about AI potentially being lights out for humanity, or statements made by people at Anthropic about the power of AI to potentially be a catastrophic event for humans. And these are the people who are closest and thinking about it the most, of course. I have spoken to some people who say there is a possibility, and I think there are people who use AI who would back this up, that we’ve maxed out the AI’s potential to supplement essays or writing. That it might not get much better than it is now. And I think that’s a very long shot, one that I would not want to bank on.

Is your biggest fear at this point that we are hurtling toward a post-literate society?

I would argue, if we are post-literate, then we’re also post-thinking. It’s a very scary thought that I try not to dwell in — the idea that my profession and what I’m doing is just feeding the machine, that my most important reader now is a robot, and that there’s going to be fewer and fewer readers is really scary, not just because of subscriptions, but because, as you said, that means fewer and fewer people thinking and engaging with these ideas.
2025-07-05T00:00:00
2025/07/05
https://www.vox.com/the-gray-area/418793/chatgpt-claude-ai-higher-education-cheating
[ { "date": "2022/12/01", "position": 98, "query": "AI education" }, { "date": "2023/03/01", "position": 97, "query": "AI education" } ]
Economics of Artificial Intelligence in Healthcare
Economics of Artificial Intelligence in Healthcare: Diagnosis vs. Treatment
https://pubmed.ncbi.nlm.nih.gov
[ "Khanna Nn Maindarkar Ma Viswanathan V Fernandes Jfe Paul S Bhagawati M Ahluwalia P Ruzsa Z Sharma A Kolluri R Singh Im Laird Jr Fatemi M Alizad A Saba L Agarwal V Sharma A Teji Js Al-Maini M Rathore V Naidu S Liblik K Johri Am Turk M Mohanty L Sobel Dw Miner M Viskovic K Tsoulfas G Protogerou Ad Kitas Gd Fouda Mm Chaturvedi S Kalra Mk Suri Js", "Ploug T", "Sunda", "Moeslund Tb", "Holm S.", "Et Al.", "Xiao X", "Xue L", "Ye L", "Li H" ]
by NN Khanna · 2022 · Cited by 223 — This novel study aims to evaluate AI technology in the context of healthcare costs, namely in the areas of diagnosis and treatment, and then compare it to the ...
Motivation: The price of medical treatment continues to rise due to (i) an increasing population; (ii) an aging population; (iii) disease prevalence; (iv) a rise in the frequency of patients that utilize health care services; and (v) rising prices of care.

Objective: Artificial Intelligence (AI) is already well-known for its superiority in various healthcare applications, including the segmentation of lesions in images, speech recognition, smartphone personal assistants, navigation, ride-sharing apps, and many more. Our study is based on two hypotheses: (i) AI offers more economic solutions compared to conventional methods; (ii) AI treatment offers stronger economics compared to AI diagnosis. This novel study aims to evaluate AI technology in the context of healthcare costs, namely in the areas of diagnosis and treatment, and then compare it to the traditional or non-AI-based approaches.

Methodology: PRISMA was used to select the best 200 studies for AI in healthcare with a primary focus on cost reduction, especially towards diagnosis and treatment. We defined the diagnosis and treatment architectures, investigated their characteristics, and categorized the roles that AI plays in the diagnostic and therapeutic paradigms. We experimented with various combinations of different assumptions by integrating AI and then comparing it against conventional costs. Lastly, we dwell on four powerful future concepts of AI, namely pruning, bias, explainability, and regulatory approvals of AI systems.

Conclusions: The model shows tremendous cost savings using AI tools in diagnosis and treatment. The economics of AI can be improved by incorporating pruning, reduction in AI bias, explainability, and regulatory approvals.
2022-12-01T00:00:00
https://pubmed.ncbi.nlm.nih.gov/36554017/
[ { "date": "2022/12/01", "position": 5, "query": "AI healthcare" } ]
AI in the hands of imperfect users | npj Digital Medicine
AI in the hands of imperfect users
https://www.nature.com
[ "Kostick-Quenet", "Kristin M.", "Center For Medical Ethics", "Health Policy", "Baylor College Of Medicine", "Houston", "Gerke", "Penn State Dickinson Law", "Carlisle", "Tx" ]
by KM Kostick-Quenet · 2022 · Cited by 66 — The use of artificial intelligence and machine learning (AI/ML) continues to expand in healthcare, with great promise for enhancing personalized clinical ...
The use of artificial intelligence and machine learning (AI/ML) continues to expand in healthcare, with great promise for enhancing personalized clinical decision making1. As AI/ML tools become more widespread, much attention has been given to mitigating bias in algorithms to ensure they are employed fairly and transparently. However, less attention has fallen to mitigating potential bias among AI’s human users. As automated systems become more sophisticated in their capacity to predict, screen for, or diagnose disease, the temptation to rely on them in clinical decision making will increase2. However, factors that influence user reliance on AI are poorly understood, and healthcare professionals lack guidelines about the role that AI should play in their decision making. We argue for a more systematic approach to identifying the existence and impacts of user biases while using AI tools and their effects on clinical decision making and patient outcomes. Specifically, we call for greater empirical research into how to mitigate biases with anticipated negative outcomes through the use of embedded interface design features, drawing on insights from decision science and behavioral economics, to nudge users towards more critical and reflective decision making using AI tools. Expand notions of user testing Recognizing the potential harms of overreliance on AI systems in the context of high stakes decision making, regulators and policymakers seem to endorse keeping humans “in the loop” and focus their action plans and recommendations on improving the safety of AI/ML systems such as through enhanced computational accuracy3,4,5. Meanwhile, developers are innovating new ways of addressing trustworthiness, accountability, and explainability of “black box” AI/ML that involves deep learning or neural nets with significant interpretability limitations6,7. These goals appear to be particularly important when using AI/ML in clinical decision making, not only because the costs of misclassifications and potential harm to patients are high but also because undue skepticism or lack of trust can reduce stakeholders’ adoption of promising new AI technologies and inhibit their use and availability outside of experimental settings. One of us (SG in Babic et al.8), however, recently warned healthcare professionals to be wary of explanations that are presented to them for black box AI/ML models. Explainable AI/ML … offers post hoc algorithmically generated rationales of black box predictions, which are not necessarily the actual reasons behind those predictions or related causally to them. Accordingly, the apparent advantage of explainability is a “fool’s gold” because post hoc rationalizations of a black box are unlikely to contribute to our understanding of its inner workings. Instead, we are likely left with the false impression that we understand it better.” Consequently, instead of focusing on explainability as a strict condition for AI/ML in healthcare, regulators like the U.S. Food and Drug Administration (FDA) should focus more holistically on those aspects of AI/ML systems that directly bear on their safety and effectiveness—especially, how these systems perform in the hands of their intended users. While the FDA recently published its final guidance explicitly recognizing the risks of automation bias9 and is working on a new regulatory framework for modifications to AI/ML-based software as a medical device (i.e., software that is itself classified as a medical device under section 201(h)(1) of the U.S. 
Federal Food, Drug, and Cosmetic Act10), Babic et al. argue that regulators like the FDA should also, at least in some cases, emphasize well-designed clinical trials to test human factors and other outcomes of using AI in real-world settings. Gerke et al.11,12 similarly argue that more algorithmic tools must be prospectively tested to understand their performance across a variety of procedural contexts that mirror their intended use settings and human-AI interactions. The type of user testing these scholars are suggesting goes beyond the typical usability and acceptability testing that characterizes the pipeline from beta to a more finalized version of an AI tool. That type of testing is most often done heuristically13, using a small set of evaluators to examine the interface and judge its compliance with relevant usability principles (e.g., interpretability, perceived utility, navigability, satisfaction with use, etc.). While these metrics are often useful for gauging proximate user experiences (i.e., “UX” testing) with a tool’s interface, a deeper level of user testing is needed14 to help identify and address potential sources of “emergent” or “contextual” bias15 that arise due to mismatches between a product’s design and the characteristics of its users, use cases or use settings. These mismatches may be more difficult to predict and account for in the case of AI tools than for traditional medical devices or pharmaceuticals whose performance is less contingent on user interactions and interpretations12, or whose adaptive algorithms continuously change16. Mitigating these mismatches can only be achieved by broadening our notion of user testing beyond its current focus on AI performance metrics and proximate usability to examine human and systemic factors shaping how AI systems are applied in practice17,18 by imperfect users in imperfect settings. Further, testing does not have to be limited to simply observing how individuals in various contexts interact with AI tools; we can also test how best to shape those interactions using existing insights from the behavioral sciences, as we discuss below. Trust in the eye of the (imperfect) beholder At this stage in the history of human-machine relations, nearly everyone is an imperfect user of AI. By this, we mean imperfectly rational: our interpretations and integration of information into decision making, including insights derived from AI, are susceptible to well-documented forms of bias19,20. Not all biases, however, are equally salient or relevant to the safe, effective, and responsible use of AI. From both legal and ethical perspectives, the most important cognitive biases are those that impact the extent to which humans rely on AI in their decision making in ways that introduce risk. Reliance falls along a spectrum of utter rejection or skepticism of AI on one end to “blind” overreliance or acceptance of AI-derived conclusions on the other. Both types of error can have negative impacts on patient outcomes, with underreliance potentially leading to errors of omission and overreliance on errors of commission. Where clinical decision makers fall along this spectrum depends on how much they trust an AI system. Literature from anthropology and developmental psychology documents findings that human trust is influenced by how other people behave in contexts of reciprocity and exchange21, not only of goods and services but also attachment behaviors22,23 (e.g., affection, nurturance). 
Loyalty24, integrity25, and competence26 play important roles in human-human trust, increasingly conceptualized as an evolved capacity to help us navigate complex social dynamics and to mitigate personal risk by understanding which entities and objects can be trusted under which contingencies27,28,29. While we know a great deal about trust in human relationships, we are just beginning to understand how and in what circumstances humans trust machines. Literature on human-machine interactions, or “human factors” research, has existed for decades in other domains, including military, aerospace, and robotics; but only within the last decade have questions surrounding human interactions with autonomous systems (e.g., automation bias) begun to animate the field of AI broadly, and AI ethics in particular2,11. Impacts of uncertainty and urgency on decision quality Trust plays a particularly critical role when decisions are made in contexts of uncertainty. Uncertainty, of course, is a central feature of most clinical decision making, particularly for conditions (e.g., COVID-1930) or treatments (e.g., deep brain stimulation31 or gene therapies32) that lack a long history of observed outcomes. As Wang and Busemeyer (2021)33 describe, “uncertain” choice situations can be distinguished from “risky” ones in that risky decisions have a range of outcomes with known odds or probabilities. If you flip a coin, we know we have a 50% chance to land on heads. However, to bet on heads comes with a high level of risk, specifically, a 50% chance of losing. Uncertain decision-making scenarios, on the other hand, have no well-known or agreed-upon outcome probabilities. This also makes uncertain decision making contexts risky, but those risks are not sufficiently known to the extent that permits rational decision making. In information-scarce contexts, critical decisions are by necessity made using imperfect reasoning or the use of “gap-filling heuristics” that can lead to several predictable cognitive biases20. Individuals might defer to an authority figure (messenger bias34, authority bias35); they may look to see what others are doing (“bandwagon” and social norm effects35,36); or may make affective forecasting errors, projecting current emotional states onto one’s future self37. The perceived or actual urgency of clinical decisions can add further biases, like ambiguity aversion (preference for known versus unknown risks38) or deferral to the status quo or default39, and loss aversion (weighing losses more heavily than gains of the same magnitude40). These biases are intended to mitigate risks of the unknown when fast decisions must be made, but they do not always get us closer to arriving at the “best” course of action if all possible information were available. Reducing or exacerbating uncertainty One of AI’s most compelling advantages for healthcare is to reduce this uncertainty—for example, by calculating a personalized estimate that a patient’s condition will worsen after X amount of time or will enjoy a survival benefit of Y number of years post-intervention. However, whether AI successfully contributes to reducing uncertainty still depends to a large extent on how estimates are interpreted and acted upon. 
A small number of studies examining decisional biases when using AI have identified that physicians across expertise levels often fail to dismiss inaccurate advice generated by computerized systems (automation bias41,42,43,44,45), but as well as by humans, indicating that people are generally susceptible to suggestions. The tendency to follow even bad advice appears to be even more prevalent among participants with less domain expertise46,47. Receiving such advice from AI systems can raise further dangers by potentially engaging other cognitive biases such as anchoring effects and confirmatory bias, in which users are primed towards a certain perspective and disproportionately orient their attention to information that confirms it48. Other studies have found that participants are averse to following algorithmic advice when making final decisions (algorithmic bias)49,50,51, but this result is inconsistent with other studies, which show people sometimes prefer algorithmic to human judgment46,47,52. Given the diversity of cognitive biases and contingencies under which they are likely to emerge, further systematic research is needed to document which salient factors shape how we integrate AI into decisions and how best to calibrate trust so that it matches what AI systems can actually do (e.g., predict something with a given degree of probability and accuracy). In robotics, poor “trust calibration” between humans and machines is viewed as a core vulnerability and key predictor of performance breakdown53,54. Likewise, putting AI in the hands of users without systematically measuring, controlling for, or otherwise trying to calibrate trust and reliance likely exacerbates rather than reduces the already high levels of uncertainty that characterize these decision-making contexts, with potentially grievous consequences. The uncertain role of AI in clinical decision making The current push55,56,57 to enhance healthcare professionals’ literacy in AI/ML highlights a need to replace idiosyncratic variation with informed reasoning about the role that AI should play in clinical decision making. However, it is hard to know what kind of guidance healthcare professionals should receive when so few empirical conclusions have been drawn about how AI is or should be used in clinical (or any) decision making. Taking lessons from algorithmic tools that have been shown to reproduce negative societal biases in predicting factors like criminal recidivism58, health status and insurability1, and disease (e.g., skin cancer) risk59, many scholars argue60,61 that AI tools should not replace any decisions that are considered “high stakes”—those with significant health or justice-related impacts. In the healthcare setting, some experts recommend that even AI with a well-demonstrated capacity to autonomously identify and diagnose disease should be confirmed with human-led testing62,63. Similar conclusions have been made about autonomous weapons systems (AWS) in military64 and maritime (e.g., unmanned shipping65) applications, with ongoing debates about whether to keep humans “in” the loop or “on” the loop, the latter suggesting that humans may not need to take an active role in decision making but can (and should) still intervene or be able to appeal to AI inferences when their conclusions contradict those of the AWS (if caught in time). If we agree that humans should still be “in” or “on” the loop, how should one expect healthcare professionals to react to AI-derived information? 
The recommendation to proceed with caution, while warranted, seems too broad to fit the decisional needs of physicians engaging powerful AI to inform complex medical decisions. There is growing agreement that proficiency in AI (including its shortcomings related to bias, transparency, and liability) should be part of medical education, with suggestions that medical students must acquire sufficient knowledge of data science, biostatistics, computational science, and even health AI ethics66 to ensure they can, among other things, separate “information from hype” and critically evaluate AI systems57,67. Others68 have argued that learning effective debiasing strategies and cultivating awareness of how heuristics can impact clinical decision making should be prioritized in all stages of medical education. However, it remains unclear which biases healthcare providers should be made most aware of; whether providers should be responsible for being aware of their own biases, or whether bias mitigation may (or should) be embedded in standardized processes for implementing AI tools in clinical decision making or in the design of the technologies themselves. Enhancing decision quality by design While it is likely true that physicians will increasingly need to learn how to responsibly use AI to keep pace with clinical innovations, other complementary approaches should also be explored. One promising option is to support physicians in their likelihood to demonstrate the specific characteristics we value in clinical decision making by embedding bias mitigation techniques into the very design features of our AI systems and user interfaces. This notion builds on longstanding work in computing ethics69,70 and is known by various terms, including Value-Sensitive Design (VSD71), Values @ Play72, reflective design73, adversarial design74, and critical technical practice75, and was originally pioneered by Friedman and Nissenbaum76,77 in the 1990s as a way to encourage a reflective, iterative process for shaping human-computer interactions that prioritize trust and user welfare. A great deal of variation remains in how VSD is carried out, but the centrally motivating assumption behind this approach is that reflective design approaches can help to mitigate user biases for more favorable outcomes. Following the three main stages of VSD would entail identifying the range and diversity of stakeholder values and how best to balance them towards an articulated goal (conceptual), observing impacts of given values and practices on relevant outcomes (empirical), and devising technical specifications to design systems that reflect or help to shape the use of a system to align with stakeholders’ values (technical). An example would be to design interactive web browser cookie management systems to reflect principles of privacy, voluntariness, and right to disclosure71. Scholars have extended a fourth and ongoing step of life-cycle monitoring and evaluation to VSD for AI specifically, given the often unforeseeable impacts and adaptive nature of AI tools14,78. Building on these approaches, we argue that a VSD approach could not only help to embed values into the design of AI tools but also to actively and strategically influence (nudge) users to engage in more ethical and critical reflection in their use of such tools. Such an approach requires critical engagement with the ethics of nudging in health decisions as well as identification of the range of target values one wants physicians to demonstrate in decision making. 
Nudging is a form of libertarian paternalism in which decisions are actively shaped through strategies such as information framing, structuring incentives, and other means to enhance the uptake of certain behaviors79. Evidence for the efficacy of this approach dates back nearly two decades80; nudging tactics were shown to be effective, for example, during the COVID-19 pandemic in encouraging compliance with public health-promoting behaviors such as handwashing and social distancing81. Though not without its critiques (e.g., that it can be a form of manipulation82,83), a central rationale of nudging is to preserve individual choice while guiding people toward behaviors with population-level benefits84. However, determining who gets to decide which values are engaged in service of making "good" decisions when using an AI tool is complex and should draw on perspectives from multiple, diverse stakeholders, not just those of the developers designing these systems. The Hippocratic Oath establishes a fundamental criterion that physicians' decisions should be in service of what they believe to be a patient's best interests. Additional criteria come from a rich literature on decision making and clinical decision support85, suggesting that "quality" decisions are those that are informed and generate positive outcomes that are congruent with a patient's values. Other target values, such as decisional autonomy82, are likely to be relevant, and it should be noted that salient target values may shift depending on the nature of the AI tool or the ethical issues raised by its intended users or use contexts. For example, an AI tool designed to predict and prevent onset of psychiatric illness in adolescents raises a particular set of target values in decision making (e.g., decisional autonomy, patients' right to an open future), while a tool to identify the presence and prognosis of lung cancer in adults may raise others (e.g., avoidance of negative emotional reactions, actionability considerations, patients' right not to know). Research is needed to elucidate which target values for "quality" decision making are most salient in which clinical scenarios.

AI interfaces that encourage critical reflection

One target value that is likely to be relevant in all clinical decision making involving AI is the need to promote reflexivity in decision making in order to avoid the potential negative consequences of overreliance on AI. A growing literature1,86 demonstrating the potentially deleterious effects of overreliance on AI algorithms highlights the importance of reflexivity and deliberation as guiding principles for AI deployment. These explorations and observations thus inform the conceptual and empirical stages of the VSD approach, leaving the technical challenge of designing interfaces that will help to shape the deliberative and reflexive use of AI systems in ways that align with users' interests. Research has demonstrated that the ways in which information is presented can influence users' likelihood of engaging in reflective or critical thought. For example, a study by Zhang et al.87 employed a simple interface nudge to encourage reflection by asking participants to answer brief questions clarifying their own opinions versus what they considered to be reasons driving alternative perspectives. Weinmann88 developed an online interface with similar questions to enhance "deliberation within" by asking questions that encouraged reasoning about alternative perspectives.
Other research by Harbach et al.89 demonstrates the effectiveness of using interface design elements to inspire reflection by illustrating the consequences of user choices (e.g., reminding users of the potential impacts of selecting certain privacy parameters). Menon et al.90 similarly explored how modifying "interface nudges" in relation to specifically targeted cognitive biases (e.g., anchoring and social desirability effects) influenced user deliberation and responses. These studies highlight how strategic interface design can help to enhance reflection and reduce passive reception of information. In AI system interfaces specifically, design elements might vary according to stakeholder type. An interface designed to reduce physicians' overreliance on an AI model estimating a patient's 1-year survival post-intervention might include brief questions or a checklist encouraging physicians to document which other clinical, psychosocial, or environmental factors or additional expert opinions they have consulted in order to corroborate (or challenge) the AI's estimate. Complementarily, a patient-facing interface for the same tool may contextualize the numerical survival estimate within a more holistic values clarification exercise asking patients to circle one or more treatment goals influencing their decisions, encouraging reflective, value-based decision making. Building in such reflexivity metrics could not only help to nudge users away from overreliance on AI tools but also help to evaluate impacts on clinical decision making in practice, both within and beyond clinical trial contexts. However, interfaces are not the only tools available with this capacity. Conceptualizing how an AI system might fit into clinical flow in ways that encourage deliberation among clinical teams may also help to reduce the potential for overreliance91. Situational and logistical factors could be considered, such as setting (e.g., collective use of an AI tool during a medical review board vs. individual use in a physician's office), timing (before or after treatment candidacy), and information access (direct-to-patient versus physician-privileged communication of results). Integration of AI with other existing clinical technologies may also alter the outcomes of using AI tools by broadening the information that is integrated into decision making92. Organizational aspects may include training, supervision, handover, and information flow across members of the clinical team91. The insights discussed above represent only the tip of the iceberg of factors that may potentially be coordinated to positively influence decision quality and outcomes using AI. They have been identified and often widely discussed in fields as diverse as decision science, behavioral economics, human factors, psychology, political science, and robotics. However, few of these insights have yet been integrated into AI systems design or systematically tested in clinical trials to proactively shape how AI is used.
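To make the physician-facing example above more concrete, the sketch below shows one way such a reflexive checklist could be wired into an interface: the AI estimate is only revealed after the clinician records which corroborating sources were consulted. This is a minimal illustration under assumed names (the fields, prompts, and survival estimate are hypothetical), not a description of any existing system.

```python
CORROBORATION_SOURCES = [
    "imaging review", "laboratory trends", "psychosocial assessment",
    "environmental/functional factors", "second specialist opinion",
]

def reveal_survival_estimate(ai_estimate, consulted):
    """Gate the display of an AI survival estimate behind a brief reflexive checklist.

    The clinician must record at least one independent source consulted to
    corroborate or challenge the model output before the number is shown.
    """
    if not consulted:
        return ("Before viewing the model estimate, please record which of the "
                f"following you have consulted: {', '.join(CORROBORATION_SOURCES)}")
    noted = ", ".join(consulted)
    return (f"AI-estimated 1-year survival: {ai_estimate:.0%} "
            f"(clinician corroboration recorded: {noted})")

# Example: the estimate is withheld until a corroborating source is logged.
print(reveal_survival_estimate(0.72, []))
print(reveal_survival_estimate(0.72, ["imaging review", "second specialist opinion"]))
```

The same pattern could log the recorded answers as the "reflexivity metrics" mentioned above, allowing later evaluation of how the nudge affected decisions.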
https://www.nature.com/articles/s41746-022-00737-z
[ { "date": "2022/12/01", "position": 7, "query": "AI healthcare" } ]
AI in medicine: Where are we now and where are we going?
[ "Md Mobashir Hasan Shandhi", "Department Of Biomedical Engineering", "Duke University", "Durham", "Nc", "Jessilyn P Dunn", "Department Of Biostatistics", "Bioinformatics", "Duke Clinical Research Institute" ]
Advancements in AI enable personalizing healthcare, for example by investigating disease origins at the genetic or molecular level, understanding intraindividual drug effects, and fusing multi-modal personal physiological, behavioral, laboratory, and clinical data to uncover new aspects of pathophysiology. Future efforts should address equity, fairness, explainability, and generalizability of AI models. The incorporation of artificial intelligence (AI) into medicine has revolutionized and continues to dramatically change the landscape of healthcare. In this commentary, Shandhi and Dunn highlight the potential and challenges of AI in personalized and precision medicine with specific examples of its current deployment and anticipated future implementations.

Introduction

Personalized medicine is a novel approach to understanding health, disease, and treatment outcomes based on personal data, including medical diagnoses, clinical phenotype, laboratory studies, imaging, environmental, demographic, and lifestyle factors. Precision medicine often overlaps with this concept and also includes utilizing genomic data to tailor a plan of treatment or prevention of a particular disease. Personalized and precision medicine promise a better understanding of health and medicine, improvements in the early detection of diseases, and better long-term health and chronic disease management. Multi-modal information originating from multiple domains can be collected from individuals because of recent advancements in sensing, cloud infrastructure, medicine, genetics, metabolomics, and imaging technologies, among others. However, with such innovations in sensing and diagnostic testing capabilities, an incredible amount of personal data can now be generated for each patient. Appropriately storing and analyzing this voluminous personal data can be a challenging and daunting task. Thankfully, advances are also being made in these directions to increase the capabilities and efficiency with which we can digitize, store, and analyze these large volumes of personal and population-level data. Ultimately, the combined advancements in biomolecular, imaging, and sensing technologies, along with hardware, software, and data science, and the ability of the medical community to leverage these technologies effectively, have together enabled personalized/precision medicine.

Personalized/precision medicine

The human genome project (https://www.genome.gov/human-genome-project) spurred an entirely new way of thinking about the role of biology in healthcare. Completed in 2003, the fully sequenced human genome was anticipated to address a large proportion of open issues in our understanding of health and disease. However, two decades later, it had decidedly not delivered on that promise.1 This is because biomolecular pathways are more complex than was appreciated at the time, and the many layers of interactions between the genome, phenome, and environment mean that the one gene, one disease (i.e., Mendelian disease) framework applies surprisingly infrequently and that more complex relationships within the genome itself (e.g., non-coding regions) and between the genome and other biomolecular species remain to be uncovered. The advent of new technologies for high-throughput biomolecular measurements over the past two decades has increased our knowledge of the biomolecular milieu, which is a snapshot of the entirety of biomolecules existing in any one compartment at any one point in time.
One such technology, next-generation sequencing (NGS), enables transcriptomic analyses through RNA-seq, as well as 16S, metagenomic, and metatranscriptomic analyses to characterize the microbial milieu and its activities. Further, proteomic and metabolomic profiling spawned from mass spectrometry technologies enable profiling of the biomolecules that are the machines and byproducts of life-sustaining processes. New knowledge generated by these methods has enabled a move from expensive and dirty "shotgun" approaches that capture a broad but poorly annotated set of biomolecules to fast, low-cost, and targeted panel profiling approaches (e.g., genotyping arrays). Further, new laboratory methods to profile various configurations of biomolecules have emerged (e.g., protein post-translational modifications; epigenomic histone and DNA methylation alterations; chromatin conformations, etc.). These methods have uncovered new knowledge about the importance of not only the biomolecules themselves, but also the role of their spatial arrangements and modifications. Through these emerging technologies and the integration of the data generated by them, the promise of a truly "multi-omics" approach to research and medicine is increasingly being realized.2 Examples of the multi-omics approach in practice include: (1) gut microbiome relationships to health and disease, as mediated and characterized by the human metabolome and microbial meta-omes (i.e., metagenome, metatranscriptome, and meta-metabolome), (2) applications of machine learning to large multi-modal multi-omic datasets to uncover new relationships, and (3) variations on sequencing-based technologies to uncover higher-order information, including epigenetics and chromatin configuration (e.g., ATAC-Seq, RRBS, Hi-C). Further, a new appreciation for the key role of drug-metabolizing genes has come to light. Specifically, a host of genes in the Cyp (Cytochrome P450) family have been found to control the metabolism of a vast array of drugs, which in turn enables a precision approach to drug dosing on an individualized level. Applying these approaches complementarily will be key to a more comprehensive assessment of the biomolecular milieu. Collection and computational processing of these samples remains expensive and sometimes challenging. As costs decrease, we can obtain improved temporal resolution of biomolecular dynamics, for example by dense sampling in time, which will enable us to better understand how this symphony evolves over the lifespan and during health or disease. As these new data become available, new complementary time series methods will need to be developed. Further, methods for multi-modal data fusion remain relatively rudimentary and can be advanced through new AI methods that will support more data-driven rather than hypothesis-driven approaches. A contributor to the bottleneck on this front is the lack of samples with such high-dimensional data, which will be addressed by the decreasing cost and increasing accessibility of technologies, as well as continued pushes toward open science, including the deposition of new data into public repositories.

Artificial intelligence and machine learning

Artificial intelligence (AI) and related technologies (machine learning [ML]) are becoming increasingly prevalent across diverse sectors including business, society, and healthcare.
These technologies have the potential to transform many aspects of patient care, as well as administrative processes within provider, payer, and pharmaceutical organizations. Wearable sensor development and the wide adoption of these devices in people's day-to-day lives provide a unique opportunity to monitor the physiology and behavior of an individual and to generate an unprecedented amount of personalized data in real-world settings. This opportunity is further enabled by complementary technological advancements in analyzing this enormous amount of data (a.k.a. "big data") using AI/ML algorithms. Machine learning (ML), a branch of AI, has gained popularity in the past few decades due to advances in computing machinery that bring previously inaccessible methods within practical reach. ML is a method of data analysis that automates analytical model building; it evolved as a way to recognize patterns in data without explicitly programming for a particular task. Although many of today's top-performing ML algorithms were invented more than half a century ago, the computational power and resources required were unavailable to make this process practical and feasible. Advancements in the semiconductor industry, for example, have reduced computation times, lessened power requirements, and improved the cost effectiveness of computing, all of which have rejuvenated the field of ML in the last two to three decades. New ML applications have ranged from social networking to financial services, transportation, healthcare, and more. The advantages of ML are adaptability, scalability, automation, and the capability of leveraging multi-dimensional and multivariate data to learn new aspects of systems. ML can be divided into four overarching types of learning: supervised, semi-supervised, unsupervised, and reinforcement (Figure 1). Supervised learning is most common because of its utility for prediction. In supervised learning, the prediction target value (i.e., outcome variable, or label) is known, and the algorithm attempts to learn the relationship between it and other, often more easily measured, variables. For example, one may attempt to predict the size of a tumor after a drug treatment based on genetic characteristics of the patient. Supervised learning can be further classified by the intended task as either regression (where the target variable is continuous, e.g., heart rate) or classification (where the target variable is discrete, e.g., the presence or absence of arrhythmias). Its methods range from simple linear regression to random forests to neural nets and beyond. Often, simpler methods are preferred because they are understandable and generalize more readily. Unsupervised learning is used to uncover groups or patterns in data where there is no obvious outcome of interest or where the label is not known. Such methods include clustering and dimension reduction. Often, unsupervised learning can be used as a precursor to supervised learning to generate labels for prediction targets. Semi-supervised learning is used where labels are known for only part of the data. Reinforcement learning algorithms learn from trial and error, where the model learns from past experiences and adapts its approach to achieve the lowest possible prediction error.
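As a concrete illustration of the regression-versus-classification distinction drawn above, the sketch below fits one model to a continuous target and another to a discrete target using scikit-learn. The data are randomly generated placeholders standing in for, say, genetic features predicting tumor size or arrhythmia presence; it is an illustrative sketch, not an analysis from the commentary.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor, RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 20))  # stand-in features (e.g., genetic markers)
tumor_size = X[:, 0] * 2.0 + rng.normal(scale=0.5, size=500)               # continuous target
arrhythmia = (X[:, 1] + rng.normal(scale=0.5, size=500) > 0).astype(int)   # discrete target

# Regression: continuous outcome (e.g., tumor size after treatment)
Xtr, Xte, ytr, yte = train_test_split(X, tumor_size, random_state=0)
reg = RandomForestRegressor(random_state=0).fit(Xtr, ytr)
print("regression R^2:", round(reg.score(Xte, yte), 2))

# Classification: discrete outcome (e.g., presence or absence of arrhythmia)
Xtr, Xte, ytr, yte = train_test_split(X, arrhythmia, random_state=0)
clf = RandomForestClassifier(random_state=0).fit(Xtr, ytr)
print("classification accuracy:", round(clf.score(Xte, yte), 2))
```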
Figure 1. Pipeline of data analysis in healthcare using artificial intelligence. Multi-modal personal "big data" are curated from diversified sources and integrated with electronic health records, then analyzed using artificial intelligence, more specifically machine learning or deep learning algorithms (e.g., supervised, semi-supervised, unsupervised, and reinforcement learning).

In recent years, deep learning (DL) models, a branch of ML, have become more and more popular, achieving great performance in biomedical applications, including diagnoses of tumors from radiology images.3 DL models are generally more complex in structure, consisting of many layers of artificial neural networks (including recurrent neural networks), which makes them relatively opaque and hard to interpret. In addition, DL models require far more training data than traditional ML algorithms (e.g., support vector machines, random forests) and tend to overfit. There are already a number of research studies suggesting that ML, and more specifically DL models, can perform as well as or better than humans at key healthcare tasks, such as diagnosing diseases and detecting malignant tumors from radiology images.3 While some algorithms can outperform humans, for example in disease detection (e.g., discovering arrhythmias like atrial fibrillation from longitudinal electrocardiogram data) and in guiding researchers in how to best construct cohorts for costly clinical trials, AI technologies are most effective for augmenting rather than replacing humans. For example, AI can reduce human errors, augment knowledge capacity, and free up time consumed by menial tasks. However, it is important that we consider both the potential of AI/ML and the challenges and concerns, in general and in areas specific to health care, before applying these tools and models in clinical decision support systems.

AI and ML in personalized/precision medicine

AI/ML models have been successfully applied to a wide variety of genomics data, particularly in cases with high-dimensional and complex data that are challenging to process using traditional statistical methods. Genome-wide association studies, which involve rapidly scanning markers across the genomes of thousands or more people to discover genetic variants associated with a particular disease, have benefitted substantially from ML. One example is in Type 1 diabetes, where risk assessment has been improved using ML algorithms that can account for interactions between a large collection of biomarkers.4 Using treatment outcome data generated from previous patients treated for a disease, ML models can identify which future patients may benefit from a specific treatment based on their characteristics. An example of this is genetically informed therapeutic planning (using a support vector machine-based anti-cancer drug sensitivity prediction method built on genomic data5) for patients with pharmacogenomically actionable variants, where titrated prescription and dosing are critical. This advancement can avoid unnecessary treatments in non-responders and support titrated prescription and dosing to maximize the anti-cancer effect in responders. ML algorithms have also demonstrated remarkable performance in identifying novel biomarkers that can support early disease detection, predict treatment response, and provide indicators of disease progression.
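The treatment-response idea described above can be sketched in a few lines: train a classifier on outcomes from previously treated patients and use it to flag likely responders among new patients. The sketch below uses a support vector machine on synthetic placeholder data; the features and labels are hypothetical, and the example is not the method of the cited study.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
# Hypothetical genomic features for previously treated patients, with observed response.
X_past = rng.normal(size=(300, 50))
responded = (X_past[:, :3].sum(axis=1) + rng.normal(scale=0.5, size=300) > 0).astype(int)

model = make_pipeline(StandardScaler(), SVC(probability=True, random_state=0))
model.fit(X_past, responded)

# Estimate response probability for new patients and flag likely responders.
X_new = rng.normal(size=(5, 50))
probs = model.predict_proba(X_new)[:, 1]
for i, p in enumerate(probs):
    print(f"patient {i}: predicted response probability {p:.2f}")
```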
A recent comprehensive integrative molecular analysis of a complete set of tumors in The Cancer Genome Atlas (TCGA) employed unsupervised clustering to learn similarities and differences in tumors across 33 different cancer types.6 Further exploration into the resulting tumor subgroups has improved our understanding of, for example, how cancers mutate and the various factors that modulate these tumors.6 Similar to its applications in genomics, the application of ML algorithms, and in particular DL, to radiology and histopathology has improved the accuracy of image analysis while also reducing the time required. Currently, radiologists visually assess radiological images to detect, characterize, and monitor diseases; this work is manual, tedious, subjective, and prone to human error. Deep learning models, including convolutional neural networks and variational autoencoders, have automated such tasks via quantitative assessment of radiographic characteristics, e.g., cancer diagnosis, staging, and segmentation of tumors from neighboring healthy tissue.3 These methods fall under radiomics, a data-centric, radiology-based research field. Radiogenomics is a new and complementary field in precision medicine that uses ML/DL to combine radiology images (e.g., cancer imaging) with gene expression signatures to stratify patients' risk, guide therapeutic strategies, and perform prognoses.7 Similar to radiologic images, ML/DL models have been used to examine histopathology images, which can contain billions of pixels and are difficult to process without computer-aided diagnosis.8 The utility of AI-assisted precision medicine has been demonstrated in other areas of precision medicine, including cardiovascular and neurological disease. Here, it has been used to identify novel genotypes and phenotypes in heterogeneous cardiovascular diseases and to improve genetic diagnostics in neurodevelopmental disorders. For example, recent work has demonstrated methods to predict heart failure and other serious cardiovascular diseases in asymptomatic individuals,9 which can be acted upon with personalized prevention plans to delay disease onset and reduce negative health outcomes. Overall, the current value and future potential of AI in personalized health care, early detection of disease, tracking disease progression, and as a clinical decision support tool have been amply demonstrated (Figure 2). AI-assisted precision medicine has demonstrated promise not only for personalization of therapies for existing diseases, but also for individual risk prediction and personalized prevention plans.

Figure 2. Overview of the incorporation of artificial intelligence into precision medicine. The incorporation of artificial intelligence into precision medicine has demonstrated tremendous potential and progress in personalized care, clinical decision support systems, early disease detection, and tracking disease progression. However, there are technical and ethical challenges (e.g., fairness and bias, transparency and liability, trust, safety, and security) that may hinder the progress and reliability of the field and delay clinical implementation.

Challenges and considerations

Although tremendous progress has been made using AI techniques and high-volume data, there is substantial room for improvement.
While the potential of AI/ML in personalized/precision medicine is clear, there are many remaining concerns and challenges, both technical and ethical, that may hinder the progress and reliability of the field and delay clinical implementation. Some examples are:

(1) Fairness and bias: Data and algorithms can reflect, reinforce, and perpetuate biases. When the data utilized to train AI are either incomplete (e.g., lacking representation from underserved and underrepresented communities) or inherently biased (e.g., collected in settings where existing stereotypes affect the data itself), the models built will be problematic and can serve to further exacerbate disparities and biases.

(2) Limited data availability: In recent years, DL models have become increasingly popular, achieving great performance in many areas of biomedical application. However, due to their data-hungry nature, DL models cannot easily learn from small datasets, and available datasets are often insufficient in size to train deep learning models.

(3) Data deluge: As a society we are generating ever more data both in and out of clinical settings, with healthcare data storage projected to exceed 2,000 exabytes by 2020.10 Collecting such multi-modal data on a large scale opens new data storage and organization challenges and costs.

(4) Transparency and liability: DL and other high-complexity ML models may demonstrate greater accuracy than simpler models, but they are opaque and do not provide users with insight into how the algorithm arrives at its conclusions. This "black box" nature of the technology is particularly concerning in healthcare, where lives are on the line. Additionally, it is not always clear where the responsibility for AI-assisted medical decisions lies.

(5) Data drift: A common assumption in AI (particularly ML) algorithms is that data from the past (used for model training) are representative of the data from the future (on which the model will eventually be deployed). Unfortunately, this is rarely the case in real-world settings, so changes in the data may affect the model's behavior and accuracy in a real-world deployment (a minimal drift check is sketched below).

(6) Data safety and privacy: AI algorithms use significant amounts of personal data for their decision making. However, software and corresponding hardware may have security flaws which can lead to theft of personal and health information.

(7) Trust: As with other technologies, early mistakes and failures can lead to general mistrust, which can reduce adoption and utilization of the technology.

Current advances to overcome the challenges

Recent advances in sensing and measurement technologies and novel data science and AI methods have the potential to mitigate many of the challenges described above. The wide adoption of wearables in the general population (1 in 4 Americans has a wearable) enables continuous collection of large volumes of personal physiological and behavioral data, in addition to environmental information (e.g., location, ambient conditions), in real-world settings at minimal cost. This has been aided by advancements in miniaturization and improvements in hardware, including better signal-to-noise ratios, battery life, compute power, and more. Investing in ways to intelligently reduce sampling rates and developing computationally efficient data compression tools to store large volumes of data without losing key information can reduce the costs of data storage as well as computation.
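Challenge (5) above, data drift, can be monitored by comparing the distribution of the data a model was trained on against the data it currently sees. The sketch below uses a two-sample Kolmogorov-Smirnov test per feature; it is a generic, minimal illustration (the feature names and threshold are placeholders), not a method prescribed by the commentary.

```python
import numpy as np
from scipy.stats import ks_2samp

def drift_report(train, live, names, alpha=0.01):
    """Flag features whose live distribution differs from the training distribution.

    Runs a two-sample Kolmogorov-Smirnov test per feature column and reports
    those with p-values below `alpha` as potentially drifting.
    """
    drifting = []
    for j, name in enumerate(names):
        stat, p = ks_2samp(train[:, j], live[:, j])
        if p < alpha:
            drifting.append((name, round(stat, 3), p))
    return drifting

rng = np.random.default_rng(2)
train = rng.normal(size=(1000, 3))
live = np.column_stack([
    rng.normal(size=500),            # unchanged feature
    rng.normal(loc=0.8, size=500),   # shifted feature (simulated drift)
    rng.normal(size=500),            # unchanged feature
])
print(drift_report(train, live, ["heart_rate", "step_count", "sleep_hours"]))
```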
Another recent advancement in AI, federated learning,11 introduces a decentralized data usage infrastructure for training algorithms, which can mitigate challenges associated with data safety and privacy. In traditional ML methodology, datasets from multiple sources would be centrally combined and then used to train an algorithm, whereas in federated learning an algorithm is trained locally on edge devices or local servers without sharing the data; only the model parameters are later combined centrally and optimized iteratively. As only model parameters are shared, not the actual data, data safety and privacy remain intact in federated learning. Recent initiatives from multiple stakeholders, including researchers, policymakers, clinicians, and ethicists, emphasize inclusivity and inference-centered explainability of AI algorithms in health care12, as opposed to the traditional prediction-centered black-box approach of the past. These new tools and frameworks can enable dissection and interpretation of predictions made by an algorithm and may even improve model performance further by allowing for the injection of domain knowledge into their design. These new tools can help detect and resolve bias, drift, and other gaps in data and models.
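The parameter-averaging idea behind federated learning can be shown in a few lines. The sketch below trains a linear model locally at several simulated sites and then averages only the model coefficients centrally; it is a bare-bones federated averaging illustration on synthetic data, not a production framework or the approach of the cited reference.

```python
import numpy as np

def local_fit(X, y):
    """Fit a least-squares linear model locally; only coefficients leave the site."""
    return np.linalg.lstsq(X, y, rcond=None)[0]

def federated_average(coef_list, weights):
    """Combine locally trained coefficients, weighted by each site's sample count."""
    return np.average(np.stack(coef_list), axis=0, weights=np.asarray(weights, dtype=float))

rng = np.random.default_rng(3)
true_coef = np.array([0.5, -1.2, 2.0])

site_coefs, site_sizes = [], []
for n in (200, 350, 150):                  # three sites with different sample sizes
    X = rng.normal(size=(n, 3))
    y = X @ true_coef + rng.normal(scale=0.1, size=n)
    site_coefs.append(local_fit(X, y))     # raw patient data never leaves the site
    site_sizes.append(n)

global_coef = federated_average(site_coefs, site_sizes)
print("federated estimate:", np.round(global_coef, 2))
```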
https://pmc.ncbi.nlm.nih.gov/articles/PMC9798019/
[ { "date": "2022/12/01", "position": 10, "query": "AI healthcare" } ]
AI revolution in healthcare and medicine and the (re-)emergence of inequalities and disadvantages for ageing population
https://www.frontiersin.org
[ "Stypińska", "Department Of Sociology", "Institute Of East European Studies", "Free University Of Berlin", "Franke", "Department Of Social Work", "Evangelische Hochschule Ludwigsburg" ]
AI systems in medicine and healthcare are being extensively explored in prevention, diagnosis, novel drug designs and after-care. The application of AI technology in healthcare systems promises impressive outcomes such as equalising healthcare, reducing mortality rate and human error, reducing medical costs, as well as reducing reliance on social services. In the light of the WHO "Decade of Healthy Ageing", AI applications are designed as digital innovations to support the quality of life for older persons. However, the emergence of evidence of different types of algorithmic bias in AI applications, ageism in the use of digital devices and platforms, as well as age bias in digital data suggests that the use of AI might have discriminatory effects on the older population or even cause harm. This paper addresses the issue of age biases and age discrimination in AI applications in medicine and healthcare systems and tries to identify the main challenges in this area. It will reflect on the potential of AI applications to amplify already existing health inequalities by discussing two levels where the potential negative impact of AI on age inequalities might be observed. Firstly, we will address the technical level of age bias in algorithms and digital datasets (especially health data). Secondly, we will discuss the potential disparate outcomes of automatic decision-making systems (ADMs) used in healthcare on the older population. These examples will demonstrate, although only partially, how AI systems may create new structures of age inequalities and novel dimensions of exclusion in healthcare and medicine.

1. Introduction

Demographic ageing is often associated with cost-intensive multimorbidity, a shortage of skilled workers that is already noticeable in care and medicine, and changing expectations and demands of older people for adequate health care. Digitalisation of healthcare, and particularly the recent developments in artificial intelligence (AI) for health, seems to promise a way out of this dilemma and important relief, postponing or even avoiding cost-intensive inpatient care. The application of AI technology in healthcare systems promises impressive outcomes such as equalising healthcare, reducing mortality rate and human error, reducing medical costs, as well as decreasing reliance on social services (Davenport and Kalakota, 2019; Mukaetova-Ladinska et al., 2020). According to experts, AI is expected to make a profound impact on healthcare and ageing research (Zhavoronkov et al., 2019). The AI in healthcare market was estimated to generate 6.9 billion USD in 2021 and is expected to reach 67.4 billion USD by 2027 (Markets and Markets, 2021); it is certainly one of the most lucrative and investment-intense areas of AI deployment. However, the emergence of evidence of algorithmic bias in a variety of AI applications (Buolamwini and Gebru, 2018; Díaz et al., 2018), ageism in the use of digital devices and platforms (Rosales and Fernández-Ardèvol, 2020), as well as age bias in digital data suggests that AI systems might also have discriminatory effects on the older population or even cause harm (Chu et al., 2022; Stypinska, 2022). Additionally, AI algorithms are being developed from the current state of health care data and are thus embedded in the context of deprivation and health inequality (Nordling, 2019). This was particularly visible during the COVID-19 pandemic, where socioeconomic factors proved to be responsible for higher rates of morbidity and severity of disease (Ahmed et al., 2020).
Moreover, Peine et al. observed that "the global pandemic has worked as a pressure cooker that has produced new configurations of old ageist and gendered stereotypes of age and ageing as problems, in which technology is seen as a solution" (Peine et al., 2021, p. 3). In the light of the WHO "Decade of Healthy Ageing (2021–2030)", AI applications are also designed as digital innovations to support the quality of life for older persons. The digitalisation of health care work and the inclusion of various forms of AI in health care change the justifications, legitimation and knowledge base for the transformations of care associated with mechanisation. Questions about what can be considered appropriate and what rules the use of technology, equipment and medication should follow in health care arrangements are currently being renegotiated. This is shown, for example, by the discussion that the German Ethics Council began in 2019 on the use of care robots and the associated relativisation of personal contact and the hybridisation of care relationships (German Ethics Council, 2020). Moreover, the European Commission's proposal for an Artificial Intelligence (AI) Act has been the topic of heated debate since its launch in April 2021. Critical researchers are worried that the AI Act neglects the risks health AI poses to patients' health and fundamental rights, in particular the rights to access to healthcare, non-discrimination and privacy (Kolfschooten and Oirschot, 2022). Yet the AI Act does not specifically identify health AI as "high risk" and does not provide solutions for its key risks, as is the case in other areas of AI application (Commission, 2021). Nevertheless, critics point to four reasons why AI for health deserves special consideration: (1) people's health is at stake, (2) people are in a vulnerable position when in need of healthcare, (3) the collection of health data has dramatically increased in recent times, and (4) health data is historically littered with bias. Because of these characteristics, health AI faces unique risks that need to be specifically addressed in the AI Act (Kolfschooten and Oirschot, 2022). When justifying and developing criteria and standards for AI, it is also important to consider that the use of technology is culturally variable as well as dependent on the environment, age and gender. People vary with age as well as with the requirements and extent of the care work involved, for example in the case of severely disabled people as opposed to geriatric long-term care. This indicates that AI systems form a large and heterogeneous field of social transformation, in which various participants and those affected carry out many different activities with diverse objectives. In addition, the research and application field is broadly theoretically framed and reflected. This necessitates some systematic consideration of the range of the subject area in order to see which ethical questions can be identified more precisely in the respective AI-supported health care services. This question, or perspective, is of fundamental importance since social inequality and ageism are present in every form and variant in the health care system, and its further automatisation through AI systems might lead to their intensification. This paper discusses the potential risk of (re)emerging inequalities and a rise in age discrimination as a result of the deployment of AI systems in the health care sector.
Ageism in the field of AI can manifest in multiple forms, from technical bias in algorithms and datasets, to ageism in discourses and narratives about older persons, to the exclusion of older adults as users (Stypinska, 2022). In this perspective paper we look at the risks from two angles: (1) the technical, i.e., risks related to biases in digital datasets and algorithms, and (2) the risk of discrimination from automated decision-making systems (ADMs) increasingly used in health care and health insurance systems.

2. Age bias in AI

The concern about bias in AI systems is today higher than ever. The common expression "garbage in, garbage out" (Mittelstadt et al., 2016), used to refer to the poor outcomes of AI models when the data they learn from are of poor quality, alerts us to the significance of data gathering and mining practices. "Bias in, bias out" is another catchphrase used to highlight concerns about the fact that data-driven AI models make inferences by finding "patterns" in the data they analyse (Wu et al., 2022). Recent analysis also shows that age-biased samples and biased tools used for constructing algorithms tend to exclude the habits, interests and values of older people, which contributes to strengthening already existing structural ageism (Rosales and Fernández-Ardèvol, 2019). Studies of age bias in machine learning are still rare, but persistently show that age bias exists in sentiment analysis models (Díaz et al., 2018), face recognition systems using advanced deep learning techniques (Meade et al., 2021), emotion recognition systems (Kim et al., 2021), as well as speech recognition systems (Werner et al., 2019). Analyses of face recognition systems show that age estimation generally performed poorly on older age groups (60+), an effect which was compounded by gender and race (Meade et al., 2021). Analysis of software detecting emotions showed that older adults had the lowest classification accuracy scores for each of the four assessed algorithms, while young adults had the highest across the board (Kim et al., 2021). Additionally, some algorithms suggest improved performance over time for certain subgroups, specifically gender subgroups. However, for age groups no such impact was observed: the differences in accuracy between the younger and older groups continued to persist throughout the years. The authors expressed the conviction that training datasets are skewed towards younger adults, causing a representation bias, amongst other societal root causes (Kim et al., 2021). Moreover, automated speech recognition (ASR) systems are an example of AI technology that is increasingly present in daily life, for instance in the development of virtual assistants. However, age-related physical changes may alter speech production and limit the effectiveness of ASR systems for older individuals. Evaluation of several automated speech recognition systems confirmed previous research suggesting that those systems have more difficulty in recognising the speech of older adults (Werner et al., 2019). Furthermore, many of those systems use biometric data, which are "personal data resulting from specific technical processing relating to the physical, physiological, or behavioural characteristics of a natural person" (GDPR, art. 4). The most common applications are facial recognition, fingerprint recognition, and voice/speech recognition.
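The accuracy disparities reported above can be surfaced with a simple disaggregated evaluation: compute a model's accuracy separately for each age group rather than as a single overall score. The sketch below does this for a hypothetical prediction log; the age bands, labels, and numbers are illustrative placeholders, not the data of the cited studies.

```python
from collections import defaultdict

def accuracy_by_age_group(records, bands=((0, 39), (40, 59), (60, 120))):
    """Compute accuracy per age band from (age, y_true, y_pred) records.

    Reporting only overall accuracy can hide poor performance on older groups;
    disaggregated metrics make such gaps visible.
    """
    hits, totals = defaultdict(int), defaultdict(int)
    for age, y_true, y_pred in records:
        band = next(b for b in bands if b[0] <= age <= b[1])
        totals[band] += 1
        hits[band] += int(y_true == y_pred)
    return {f"{lo}-{hi}": hits[(lo, hi)] / totals[(lo, hi)]
            for (lo, hi) in bands if totals[(lo, hi)]}

# Hypothetical evaluation log: (age, true label, predicted label)
log = [(25, 1, 1), (34, 0, 0), (47, 1, 1), (55, 0, 1), (68, 1, 0), (72, 0, 1), (81, 1, 1)]
print(accuracy_by_age_group(log))
```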
Biometric technology has the potential to impact older persons in a more direct way because of how biological ageing changes bodily functions in older age. Touch, image, speech, and body language will all be impacted by ageing processes but can also be impaired in groups of people with disabilities (Zhou and Gao, 2021). For instance, risk can relate to the age-linked fading away of fingerprints impacting the accuracy of their recognition (Rosales and Fernández-Ardèvol, 2020). Hence, the use of biometric data in healthcare applications and systems can pose an additional risk for older adults.

3. The sources of bias and the problem with (health) data

The sources of bias in AI systems are manifold. There are three points in the machine learning pipeline where bias can originate: during data collection and pre-processing; during the selection and creation of models; and when implementing results (de Alford et al., 2020). For instance, data annotation, a practice necessary to deploy supervised machine learning, has been found to create stereotypical images of older persons, suggesting ageism among the annotators (Crawford and Paglen, 2019). Furthermore, machine learning algorithms are no more than advanced classification systems based on a variety of classification measures, which inherently contain moral standards, where each standard and category valorises some point of view and silences another. To classify is human, but each classification and standard gives advantage to, or inflicts suffering on, a certain group or individual (Bowker and Star, 2000). Bias in AI is a reproduction of social biases and stereotypes present in data, as well as of the individual prejudices and stereotypes of the developers of AI technology. The following section zooms in on the way bias operates in digital datasets and the challenges of health data for the ageing population. The functioning of modern-day AI systems is inherently dependent on the data they deploy. Data being the "new oil" of the modern economy (Sadowski, 2020) has led to the phenomenon of datafication of our everyday lives, homes, health, and (ageing) bodies (Lupton, 2016; Ruckenstein and Schüll, 2017; Katz and Marshall, 2018). The use of data for the development of AI for health is particularly complex since the increased use and sharing of health data threatens privacy and data protection rights (Kolfschooten and Oirschot, 2022). Health data are the most intimate and sensitive data; they can be obtained in many ways, from standardised clinical trials or public healthcare infrastructure, but can also be inferred indirectly from, e.g., web browsing or the use of medical and healthcare apps and devices (Gangadharan et al., 2014). The privacy protection of health data is a major concern for individuals and medical professionals, but also for developers of AI systems for health. Moreover, the complexity of advanced AI systems and their data architecture produces what Malgieri and Niklas called the "vulnerable data subject" (Malgieri and Niklas, 2020). The authors explain that "involving vulnerability as a 'heuristic tool' could emphasise existing inequalities between different data subjects and specify in a more systematic and consolidated way that the exercise of data rights is conditioned by many factors such as health, age, gender or social status" (Malgieri and Niklas, 2020).
Hence, certain socio-demographic groups, such as children, older persons, persons with chronic diseases or disability, and those of lower socio-economic status, are particularly at risk of not being able to exercise their right to data privacy and protection. The proper representation of the population of older adults in different datasets and data approaches has already been identified as one of the challenges for the development of fair and age-inclusive AI systems (Rosales and Fernández-Ardèvol, 2019; Sourbati and Behrendt, 2020). Data, as well as the data we lack, shape the opportunities for inclusion in later life (Sourbati and Behrendt, 2020). In terms of health data, analysing the situation of older patients during the pandemic, the UN identified a "flagrant lack of data on older persons", caused by inappropriate data collection methodologies or by plainly excluding those over 50 or 60 years of age from health surveys (UN, 2020). In the clinical setting, research on the application of AI systems for the health concerns of older adults is performed on small samples and does not offer conditions for replication (Mukaetova-Ladinska et al., 2020). The adequate representation of the older population in datasets used for training AI models might be further disturbed by the use of data sources such as smartphones, medical, health and wellness apps (Katz and Marshall, 2018) and other IoT (Internet of Things) devices, which generate detailed logs of health-related activities. One of the challenges of these datasets is their limitation to the already relatively healthy, well-off and prosperous older adults who have the access and knowledge, or sometimes adequate support, to use these devices (Rosales and Fernández-Ardèvol, 2020). Class, gender and economic status play a decisive role in the distribution of access to digital technology, and thus the data generated are skewed, further marginalising those already at risk of exclusion (Chu et al., 2022). Hence, an essential question arises: what happens when datasets deployed for medical AI are non-representative, incomplete or of low quality? In the case of AI models for health, biases in the training data can lead to discrimination and individual injury or even death (Kolfschooten and Oirschot, 2022).

4. Automatic decision-making systems (ADMs) in healthcare

Automated decision-making systems (ADMs) is a term that refers to the use of algorithms both to support human decision-makers and to execute decisions automatically, although these are not always clearly differentiated from each other (Orwat, 2020). These systems can have the purpose of predicting, identifying, detecting, and targeting individuals or communities. ADMs are increasingly used by private companies (e.g., in recruitment and personnel management) and public sectors (health care, education, social services, law enforcement) (Mittelstadt et al., 2016; Reisman et al., 2018; Orwat, 2020). In healthcare and medicine, ADMs are predominantly used as an instrument in diagnostics, for therapy decisions, and for the allocation of resources in the health sector (Algorithm Watch, 2019).
Advocates for the use of ADMs view them as a value-neutral, objective and apolitical cure for bias and discrimination in which everyone is treated equally; however, a similarly large body of evidence suggests that those systems can have discriminatory effects on those already marginalised, such as low-income groups, persons with disabilities, persons with mental illnesses, the unemployed, or the homeless (Monteith and Glenn, 2016; Eubanks, 2018; Reisman et al., 2018; Chiusi et al., 2020; Heinrichs, 2022). Documentation of severe social and personal consequences for individuals wronged by the outputs of such systems has raised questions about their fairness and even legality (Richardson, 2019). Existing research has shown that AI-driven ADMs are subject to, or may themselves cause, bias and discrimination that may exacerbate existing health inequity among racial and ethnic groups (Leslie et al., 2021). Through probabilistic predictions based on assumptions, these systems perform a type of "social sorting" (Hogle, 2016) which might introduce new categories of people and illness and reinforce old beliefs about social differences, ultimately worsening already existing health disparities and unequal access to treatments. ADM systems deployed to facilitate a more efficient distribution of resources in the health sector (e.g., systems used for the allocation of organs for transplantation) or by health insurance companies to calculate individual risk and adapt insurance costs have also been reported to have severe consequences. The devastating effects of the use of ADMs on the health outcomes of members of marginalised groups were documented by Virginia Eubanks, a researcher from Albany University in the USA, who depicted how automated withdrawals and denials of healthcare services lead to tragic consequences, including loss of life (Eubanks, 2018). Similar observations were made by Cathy O'Neil about what she calls "weapons of math destruction" (O'Neil, 2016). Her analysis of health care insurance companies and their reliance on big data and AI algorithms demonstrated how these separate the sick from the healthy and create paths for debilitating inequalities in access to affordable health care. The use of ADM systems in diagnosis is problematic for several reasons. Groups of patients who represent a minority in terms of some biological traits might find themselves systematically disadvantaged because the database used is insufficient for the respective group or leads to misjudgments (Algorithm Watch, 2019). Diagnostic tools, although very promising and receiving heightened attention in the last few years, are not yet deemed safe and accurate enough to be used in everyday practice, and their general uptake among clinicians is still low (Higgins and Madai, 2020). Secondly, the issue of trust in the use of ADM systems in health is critical. Consumers surveyed by MIT AGELAB indicated "little to some willingness to trust a diagnosis and follow a treatment plan developed by AI, allow a medical professional to use AI for recording data and as a decision support tool, use in-home monitoring on the health issues of their own, and trust an AI prediction on potential health issues and life expectancy" (MIT AGELAB, 2021). Medical practitioners, too, are often sceptical or reluctant to rely on AI-delivered diagnoses (Allahabadi et al., 2022).
Moreover, similar to the problems with non-representative datasets for training machine learning models for face or emotion recognition discussed above, the datasets used for training diagnostic models also suffer from a lack of proper representation in terms of age, as was shown in a study of a diagnostic model for the detection of lung compromise in COVID-19 patients (Allahabadi et al., 2022). With regard to negative consequences for the ageing population and older adults, there is not yet enough systematically collected empirical evidence to illustrate how ADMs affect older adults on a group level. However, the evidence of discrimination in relation to many socio-demographic characteristics that correlate highly with age, such as income, health status or employment status, suggests that a disparate effect of ADMs used in the health care sector on ageing populations might occur.

5. Final thoughts and discussion

Machines and technology have become integral parts of society and are shaping culture, civilisation and our general way of life today and in the future; they have virtually merged with our normative orientations and social models about ageing and older persons. AI technologies are certainly shifting how we will think about health, sickness, and ageing (Woods, 2020). And although technical and technological innovations trigger far-reaching consequences for how we perceive ageing (Wanka and Gallistl, 2018), the specific relationships between technology and power, or the connection between technology and ageism, have rarely been the focus. An adequate and intersectional ethical approach is needed in the design and development of AI, as well as in policy making, to safeguard that algorithmic systems do not exclude and marginalise already vulnerable groups of older adults by neglecting social determinants of health. Currently, international efforts are being made in the realm of AI regulation, where guidelines and policy recommendations are drafted with regard to aspects of fairness, accountability, and transparency in order to meet the criteria of trustworthiness of AI systems (e.g., the currently debated European Artificial Intelligence Act). However, the outcomes and implementation of these regulations, particularly in the healthcare sector, will pose additional challenges which are yet to be seen. The large global campaign of the World Health Organisation (WHO) to combat ageism (WHO, 2021) recognises the IT sector as one where ageism hits very hard. Moreover, the recent WHO policy brief titled "Ageism in artificial intelligence for health" investigates the use of artificial intelligence in medicine and public health for older people, including the conditions in which AI can exacerbate or introduce new forms of ageism (WHO, 2022). The policy brief stipulates that to "ensure that AI technologies play a beneficial role, ageism must be identified and eliminated from their design, development, use and evaluations" (WHO, 2022, p. 10). It proposes eight considerations for safeguarding that AI for health is developed in an equitable manner: participatory design, age diversity in data science teams, age-inclusive data collection, investments in digital infrastructure and the digital literacy of older people and their caregivers, rights of older people to contest and consent, governance frameworks to empower older persons, increased research, and robust ethics processes.
These guidelines are certainly a good starting point for developing an ethical and equitable approach to building AI for health. However, their incorporation into broader debates on bias in AI, as well as their practical integration into the workflows of AI developers, will require concerted efforts of the whole ageing research community, and far beyond. Considering the recent revolutions in the development of AI and machine learning, it has become clear that technology is far more than a medium or a mere artefact that benefits all people to the same extent. The inequality-generating aspects of new technologies cannot be overlooked, nor can the fact that different technologies are always central resources for the exercise of power, that technical dominance has become logically inscribed in social structures and has become synonymous with influence and power. The report of MIT AGELAB (2021) concluded that, despite a relatively optimistic outlook on the capabilities and adoption of AI systems, the experts interviewed about the benefits of AI for the ageing population were least confident in AI's ability to provide more equitable access to health care. They added that "any system that replaces a human with an algorithm has the potential of making incorrect decisions that can threaten human health. Because health care literally involves life and death decisions, it is critical to build in enough redundancy and resilience in AI-based systems to ensure that these systems do no harm" (MIT AGELAB, 2021, p. 35). Moreover, it is increasingly debated that views of AI in medicine are overly positive and optimistic as to the capabilities of this technology in preventing or curing diseases. In fact, there are only a few certified and even fewer clinically validated products available in the clinical setting. Most of the hype around medical uses of AI relates to technology in the exploratory stages of development (proof of concept), which identifies potentially valuable use cases but has yet to be validated in clinical use trials (Madai and Higgins, 2021). Hence, many experts suggest caution in estimating the real effects of this technology on the future of healthcare for older adults (Berisha and Liss, 2022; WHO, 2022). In addition, in view of the variety of AI techniques and their applications for health, it is necessary to examine them empirically more closely and to ask what AI means in the respective fields of investigation and where hidden patterns of age discrimination are integrated. Thus, it must be transparent where artificial intelligence is used and which influencing factors play a role, and it must be possible to object to its use at any time. This short contribution is by no means exhaustive of the topic and may just serve as a pointer in the direction of future research or critical thinking about the use of AI systems in health and the way this might impact older adults.

Data availability statement

The original contributions presented in the study are included in the article/supplementary material; further inquiries can be directed to the corresponding author.

Author contributions

JS developed the idea of the paper and the outline, and wrote parts 2, 3, and 4. AF participated in drafting parts 1 and 5 of the paper. Both authors contributed to the article and approved the submitted version.
Funding
The preparation of this manuscript was supported with a Research Fellowship Grant for JS received from European New School of Digital Studies of the Viadrina University in Frankfurt.
Conflict of interest
The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
Publisher's note
All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.
References
Algorithm Watch (2019). Atlas of Automation: Automated Decision-Making and Participation in Germany. Berlin: AW Algorithm Watch.
Allahabadi, H., Amann, J., Balot, I., Beretta, A., Binkley, C., Bozenhard, J., et al. (2022). Assessing trustworthy AI in times of COVID-19: deep learning for predicting a multi-regional score conveying the degree of lung compromise in COVID-19 patients. IEEE Transactions on Technology and Society. doi: 10.1109/TTS.2022.3195114
Berisha, V., and Liss, J. (2022). AI in Medicine Is Overhyped. Scientific American. Available online at: https://www.scientificamerican.com/article/ai-in-medicine-is-overhyped/ (accessed December 6, 2022).
Bowker, G. C., and Star, S. L. (2000). Sorting Things Out: Classification and Its Consequences. Cambridge, MA: MIT Press. doi: 10.7551/mitpress/6352.003.0002
Chiusi, F., Alfter, B., Ruckenstein, M., and Lehtiniemi, T. (2020). Automating Society 2020. Berlin: Algorithm Watch.
Commission, E. (2021). Regulatory Framework Proposal on Artificial Intelligence. Available online at: https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai (accessed December 6, 2022).
Crawford, K., and Paglen, T. (2019). Excavating AI: The Politics of Images in Machine Learning Training Sets. Available online at: https://excavating.ai/ (accessed December 6, 2022).
de Alford, G., Hayden, S. K., Wittlin, N., and Atwood, A. (2020). Reducing age bias in machine learning: an algorithmic approach. SMU Data Sci. Rev. 3, 1–20. Available online at: https://scholar.smu.edu/cgi/viewcontent.cgi?article=1155&context=datasciencereview
Díaz, M., Johnson, I., Lazar, A., Piper, A. M., and Gergle, D. (2018). “Addressing age-related bias in sentiment analysis,” in IJCAI International Joint Conference on Artificial Intelligence, 2019-Augus, 6146–6150. doi: 10.24963/ijcai.2019/852
Eubanks, V. (2018). Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor. New York: Picador St. Martin's Press.
Gangadharan, S. P., Eubanks, V., and Barocas, S. (2014). Data and Discrimination: Collected Essays. New America: Open Technology Institute.
German Ethics Council (2020). Robotics for Good Care. Available online at: https://www.ethikrat.org/en/publications/publication-details/?tx_wwt3shop_detail%5Bproduct%5D=130&tx_wwt3shop_detail%5Baction%5D=index&tx_wwt3shop_detail%5Bcontroller%5D=Products&cHash=61efeb07abf2347f3834e309c5df15b3 (accessed October 12, 2022).
Heinrichs, B. (2022). Discrimination in the age of artificial intelligence. AI and Society 37, 143–154. doi: 10.1007/s00146-021-01192-2
Higgins, D., and Madai, V. I. (2020). From bit to bedside: a practical framework for artificial intelligence product development in healthcare. Adv. Intell. Syst. 2, 2000052. doi: 10.1002/aisy.202000052
Kim, E., Bryant, D. A., Srikanth, D., and Howard, A. (2021). “Age bias in emotion detection: an analysis of facial emotion recognition performance on young, middle-aged, and older adults,” in AIES 2021 - Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society. Association for Computing Machinery. doi: 10.1145/3461702.3462609
Kolfschooten, H. van, and Oirschot, J. van (2022). Health Protection is Non-Negotiable in the AI Act Negotiations. EURACTIV. Available online at: https://www.euractiv.com/section/digital/opinion/health-protection-is-non-negotiable-in-the-artificial-intelligence-act-negotiations/ (accessed December 6, 2022).
Lupton, D. (2016). The Quantified Self: A Sociology of Self-Tracking. Cambridge, UK: Polity Press.
Markets and Markets (2021). Artificial Intelligence in Healthcare Market. Available online at: https://www.marketsandmarkets.com/Market-Reports/artificial-intelligence-healthcare-market-54679303.html?gclid=Cj0KCQjw39uYBhCLARIsAD_SzMTCiGIsxmwIi-Nr2af8teraiLzhODU8qzYGQp05-9ktbwuGGkXpO58aAit_EALw_wcB (accessed October 10, 2022).
Meade, R., Camilleri, A., Geoghegan, R., Osorio, S., and Zou, Q. (2021). Bias in Machine Learning: How Facial Recognition Models Show Signs of Racism, Sexism and Ageism. Available online at: https://towardsdatascience.com/bias-in-machine-learning-how-facial-recognition-models-show-signs-of-racism-sexism-and-ageism-32549e2c972d
MIT AGELAB (2021). AI and Longevity. Cambridge, MA.
Mittelstadt, B. D., Allo, P., Taddeo, M., Wachter, S., and Floridi, L. (2016). The ethics of algorithms: mapping the debate. Big Data Soc. 3, 1–21. doi: 10.1177/2053951716679679
Mukaetova-Ladinska, E. B., Harwood, T., and Maltby, J. (2020). Artificial intelligence in the healthcare of older people. Arch. Psychiatr. Mental Health 4, 007–013. doi: 10.29328/journal.apmh.1001011
O'Neil, C. (2016). Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy. New York: Crown Publishers.
Orwat, C. (2020). Risks of Discrimination through the Use of Algorithms. Berlin: Federal Anti-Discrimination Agency.
Peine, A., Marshall, B., Martin, W., and Neven, L. (2021). Socio-gerontechnology: Interdisciplinary Critical Studies of Ageing and Technology. London; New York, NY: Routledge. doi: 10.4324/9780429278266
Reisman, D., Schultz, J., Crawford, K., and Whittaker, M. (2018). Algorithmic Impact Assessments: A Practical Framework for Public Agency Accountability. Oxford, London, New York: AI Now Institute.
Richardson, R. (2019). Confronting Black Boxes: A Shadow Report of the New York City Automated Decision System Task Force. New York: AI Now Institute.
Sadowski, J. (2020). Too Smart: How Digital Capitalism is Extracting Data, Controlling Our Lives, and Taking Over the World. Cambridge, MA: MIT Press. doi: 10.7551/mitpress/12240.001.0001
Stypinska, J. (2022). AI ageism: a critical roadmap for studying age discrimination and exclusion in digitalized societies. AI and Society. doi: 10.1007/s00146-022-01553-5
Wanka, A., and Gallistl, V. (2018). Doing age in a digitized world—a material praxeology of ageing with technology. Front. Sociol. 3, 1–16. doi: 10.3389/fsoc.2018.00006
Werner, L., Huang, G., and Pitts, B. J. (2019). Automated speech recognition systems and older adults: a literature review and synthesis. Proc. Hum. Fact. Ergon. Soc. Ann. Meet. 63, 42–46. doi: 10.1177/1071181319631121
WHO (2021). Global Report on Ageism. Geneva.
WHO (2022). Ageism in AI for Health: WHO Policy Brief. Geneva.
Woods, T. (2020). Live Longer with AI: How Artificial Intelligence is Helping Us Extend Our Healthspan and Live Better Too. Birmingham: Packt Publishing.
Wu, H., Wang, M., Sylolypavan, A., and Wild, S. (2022). “Quantifying health inequalities induced by data and AI models,” in 31st International Joint Conference on Artificial Intelligence (Vienna: IJCAI), 5192–5198.
Zhavoronkov, A., Mamoshina, P., Vanhaelen, Q., Scheibye-Knudsen, M., Moskalev, A., and Aliper, A. (2019). Artificial intelligence for ageing and longevity research: recent advances and perspectives. Ageing Research Reviews 49, 49–66. doi: 10.1016/j.arr.2018.11.003
2023-01-23T00:00:00
2023/01/23
https://www.frontiersin.org/journals/sociology/articles/10.3389/fsoc.2022.1038854/full
[ { "date": "2022/12/01", "position": 13, "query": "AI healthcare" } ]
Data and model bias in artificial intelligence for healthcare ...
Data and model bias in artificial intelligence for healthcare applications in New Zealand
https://www.frontiersin.org
[ "Yogarajan", "School Of Computer Science", "Waipapa Taumata Rau", "University Of Auckland", "New Zealand", "Dobbie", "Leitch", "General Practice", "Rural Health", "Otago Medical School" ]
by V Yogarajan · 2022 · Cited by 19 — This research explores equity concepts and fairness measures concerning AI for healthcare in New Zealand.
Discussions: This research takes early steps toward developing a model of socially responsible and fair AI for New Zealand's population. We provided an overview of reproducible concepts that can be adopted toward any NZ population data. Furthermore, we discuss the gaps and future research avenues that will enable more focused development of fairness measures suitable for the New Zealand population's needs and social structure. One of the primary focuses of this research was ensuring fair inclusions. As such, we combine expertise in AI, clinical knowledge, and the representation of indigenous populations. This inclusion of experts will be vital moving forward, proving a stepping stone toward the integration of AI for better outcomes in healthcare. Results: This research analyzed fairness metrics for NZ EHRs while considering data and algorithmic bias. We show evidence of bias due to the changes made in algorithmic design. Furthermore, we observe unintentional bias due to the underlying pre-trained models used to represent text data. This research addresses some vital issues while opening up the need and opportunity for future research. Methods: This research considers data and model bias in NZ-based electronic health records (EHRs). Two very distinct NZ datasets are used in this research, one obtained from one hospital and another from multiple GP practices, where clinicians obtain both datasets. To ensure research equality and fair inclusion of Māori, we combine expertise in Artificial Intelligence (AI), New Zealand clinical context, and te ao Māori. The mitigation of inequity needs to be addressed in data collection, model development, and model deployment. In this paper, we analyze data and algorithmic bias concerning data collection and model development, training and testing using health data collected by experts. We use fairness measures such as disparate impact scores, equal opportunities and equalized odds to analyze tabular data. Furthermore, token frequencies, statistical significance testing and fairness measures for word embeddings, such as WEAT and WEFE frameworks, are used to analyze bias in free-form medical text. The AI model predictions are also explained using SHAP and LIME. Introduction: Developments in Artificial Intelligence (AI) are adopted widely in healthcare. However, the introduction and use of AI may come with biases and disparities, resulting in concerns about healthcare access and outcomes for underrepresented indigenous populations. In New Zealand, Māori experience significant inequities in health compared to the non-Indigenous population. This research explores equity concepts and fairness measures concerning AI for healthcare in New Zealand. 1. Introduction There are abundant examples around the world where indigenous and underrepresented populations experience inequities in their exposure to the determinants of health. This includes access to and through healthcare and receipt of high-quality healthcare (Webster et al., 2022). In New Zealand, Māori experience significant inequities in health compared to the non-Indigenous population (Curtis et al., 2019; Webster et al., 2022; Wilson et al., 2022). Although Māori experience a high level of health care need, they receive less access to the full spectrum of health care services and poorer care throughout, from preventative to tertiary care (Curtis et al., 2019; Webster et al., 2022). 
The need to address such health equity is reinforced by the United Nations Declaration on the Rights of Indigenous Peoples and Te Tiriti o Waitangi (The Treaty of Waitangi, 1840) in New Zealand. Developments in Artificial Intelligence (AI) and data-driven technology are used to inform clinical diagnoses, decisions, and treatment protocols. Although health inequalities can directly reflect human biases, the introduction and use of AI in healthcare comes with its own biases and disparities. AI is prone to reinforcing bias (Parikh et al., 2019; Feuerriegel et al., 2020; Norori et al., 2021), triggering a need for legislative improvements, including data governance, as reflected in the recent modifications of the US HIPAA regulations and the GDPR in Europe. In New Zealand, the ongoing health system reforms and the inclusion of a new Māori health authority are expected to provide the opportunities and platform for health equity (McCall, 2022). Furthermore, ongoing developments of frameworks such as the IEEE Standards on Algorithmic Bias Considerations (P7003) (Koene et al., 2018; Smith et al., 2018) are examples of global initiatives to assist organizations in understanding and eliminating unintentional algorithmic bias. Moreover, governments have introduced anti-discrimination laws prohibiting unfair treatment based on sensitive attributes. This research takes early steps toward developing fair and unbiased AI in healthcare for New Zealand's population. Ensuring equity requires identifying and mitigating fairness problems in data collection, data storage, data availability, model development, and model deployment (Mehrabi et al., 2021; Paleyes et al., 2022). In this paper, we analyze data and algorithmic bias concerning data collection and model development, training and testing using health data collected by experts. We consider the existing fairness measures in both the data and algorithmic development stages. We also analyze the use of fairness measures developed predominantly for the US population in New Zealand settings (Mehrabi et al., 2021). We use health data collected from New Zealand's population while working closely with healthcare experts to analyze fairness measures. This research provides a pathway to reproducible concepts that can be adopted with respect to any NZ population data, not just electronic health records (EHRs). We use data collected by health experts and analyze unintended data bias. We consider algorithmic bias at two stages. The first is analyzing the effects of using machine learning models for a given classification task, and the second is at algorithmic design (i.e., the design of a specific task or research question). In this paper we use machine learning models such as logistic regression and XGBoost. The choice of machine learning algorithms, including those mentioned above, is based on examples in the literature of other similar studies, such as Hotzy et al. (2018), Tong et al. (2020), Yao et al. (2020), Peng et al. (2021), Wu et al. (2022), Barak-Corren et al. (2021), Luo et al. (2019), and Snider et al. (2021). The protected groups chosen for this study are “Gender” and “Ethnicity,” where the male population is considered privileged over the female population and NZ Europeans are considered privileged over Māori and Pacific populations. The analysis of fairness measures for tabular and text data from the NZ population is presented, representing text data as bag-of-words or embeddings. This paper includes the following sections.
Section 2 provides an overview of the concepts addressed in this paper, including references to the literature where required. This is followed by Section 3, where an outline of data and experimental metrics, methodology and setup is provided. Section 4 provides details of the results and analysis. Finally, conclusions and future work are presented in Section 5. 2. Background 2.1. AI model development and deployment There are two significant stages for integrating AI into any task: development and deployment. Figure 1 shows the general process, including the various sub-tasks required for each step. The first and arguably most crucial step is obtaining the relevant data for the project. The data can drive the entire development cycle, as factors such as the type and quantity of data will determine which machine learning models are appropriate for the task. Data quality is also important, although that is a much larger topic (Gudivada et al., 2017). The data is typically partitioned into two, with one part used for training the chosen AI model to perform the desired task. The rest of the data is used to test the trained model to see how effective the model is when exposed to new data. There may be some cycling between training and testing the model, with different iterations of the model selection occurring based on the test results. Figure 1. Overview describing the basic stages of AI model development and deployment. Examples of bias at various stages are outlined. Once development is completed, the model is deployed for real-world use. This stage generally consists of waiting and observing the model's performance (i.e., accuracy) in its intended tasks. A well-developed model with appropriate training data will generalize to real-world tasks, resulting in adequate performance for the intended task. In this case, appropriate training data comes from a distribution that is the same as or at least highly similar to the real-world task distribution. Even if the model generalizes well, there is still a need to periodically monitor the model's performance as the real-world distribution changes. These distribution changes are known as “concept drift” (Widmer and Kubat, 1996). In healthcare, concept drift could occur gradually due to factors such as shifts in the population's demographic composition, or suddenly due to factors such as a global pandemic. The impact of concept drift can be moderated using multiple methods (Žliobaitė et al., 2016). A simple method is to treat the entire development and deployment process as iterative, with feedback from deployment being used as training data for a new development stage. 2.2. Bias Bias in AI can occur at all stages, from data collection to algorithmic development and deployment (see Figure 1). Such bias is far more problematic with underrepresented populations. For instance, models fed relatively homogeneous data during training suffer from a lack of diversity in terms of underlying patient populations. They can severely limit the generalizability of results and yield biased AI-based decisions (Celi et al., 2022). Obermeyer et al. (2019) provided an example of data bias where an algorithm showed Black patients to be healthier than they actually were, because the design of the algorithm used the cost of health care as a proxy for the needs of patients. Algorithmic designs also result in biased algorithmic decisions.
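To make the development and deployment stages sketched in Figure 1 concrete, the following is a minimal sketch, using synthetic data and an illustrative performance threshold rather than the authors' pipeline, of training on one partition, testing on a held-out partition, and periodically re-scoring newly collected data as a crude check for distribution change.

```python
# Minimal sketch of the development/deployment cycle in Figure 1
# (synthetic data and an illustrative threshold; not the authors' pipeline).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))                         # stand-in tabular features
y = (X[:, 0] + rng.normal(size=1000) > 0).astype(int)  # stand-in binary outcome

# Development: partition the data, train on one part, test on the held-out part.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("held-out AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))

# Deployment: periodically re-score newly collected data; a sustained drop in
# performance is one crude signal that the real-world distribution has shifted,
# prompting retraining with the new data (the iterative loop described above).
X_new = rng.normal(loc=0.5, size=(200, 5))             # shifted "deployment" batch
y_new = (X_new[:, 0] + rng.normal(size=200) > 0).astype(int)
auc_new = roc_auc_score(y_new, model.predict_proba(X_new)[:, 1])
if auc_new < 0.7:                                      # illustrative alert threshold
    print("possible drift: feed this batch back into a new development cycle")
```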
Another aspect of bias arises when the data distribution encountered in the training and validation of a model is site-specific (or country/region-specific). In this case, the data distribution of the deployment environment can be significantly different (Howard et al., 2021). In addition to changes in the population's ethnicity, changes in medical equipment, such as image capture and biometrics, can also lead to bias. 2.3. Fairness Uses of AI in real-world applications, including healthcare, can produce unfair outcomes across demographic groups (Fletcher et al., 2021; Forde et al., 2021; Leslie et al., 2021); hence, there is a need for quantitative assessments of fairness in decision making. The fairness of a machine learning model is usually judged against a set of legal or ethical principles. It can vary depending on the local government and culture (Fletcher et al., 2021); in this regard, the cultural and social requirements of Aotearoa/New Zealand are also unique. There is a need to mitigate equity problems in data collection, model development, and model deployment to ensure that AI can be used to improve healthcare outcomes of underrepresented and indigenous populations. Choosing the appropriate fairness measurement based on the scenario can also help mitigate bias. It is vital to point out that an emphasis on equality, although appropriate in some situations, may not necessarily reflect health equity. To achieve equity of health outcomes, some categories of patients will require substantially more resources. As defined by the NZ Ministry of Health, based on the World Health Organization, equality refers to treating people in the same way, while equity refers to treating people fairly, i.e., equity is about adapting to the needs of populations to help people achieve the same outcomes. Fairness policies related to healthcare can be grouped into those based on equal performance and those based on equal allocation (Rajkomar et al., 2018). Equal performance means that a model is guaranteed to be equally accurate for patients from various population groups. An excellent example of equal allocation is the issue of vaccine distribution. When a machine learning model considers only the overall optimal prevention strategy, its allocation strategy can be detrimental to an underrepresented group. 2.4. Explainability and interpretability Ever since AI models have been applied to real-world problems, there have been concerns about the models' explainability and interpretability (Goebel et al., 2018). Explainability refers to understanding how a model came to its decision. In contrast, interpretability is being able to understand the model's output. Many of these models are black boxes by nature; we know that data is input and decisions are output. Most of what happens between input and output is not well understood. It can be challenging to determine how the models came to their decision. This can be particularly concerning when the models are making decisions that directly affect the lives of individuals. Why was Person X approved for something that Person Y was denied? These decisions could be based on sound reasoning or model biases, as discussed above. These problems are such a concern that the European Union has recently introduced regulations to enhance the explainability of such models in their jurisdiction (Hamon et al., 2020). Research also shows that users' acceptance of AI models is significantly impacted by their explainability (Shin, 2021).
Interpretability is often also an issue for non-black-box models. In regression models, it takes some statistical knowledge to correctly interpret what the coefficients mean and how to understand complex situations such as variable interactions, polynomial terms and log axes (Nathans et al., 2012). On the other hand, it is also important to point out the potential trade off between interpretability, efficacy and equity (Rüping, 2006; Carvalho et al., 2019). Interpretability is a very subjective concept, and depending on the domain and context the extend of model interpretability varies and hence is hard to formalize (Rüping, 2006; Rudin, 2018; Carvalho et al., 2019). 2.5. Incorporating socially responsible AI A fundamental requirement for being socially responsible is a trustworthy system. Understanding bias in data and model developments, and the ability to reduce bias in developing and deploying technologically advanced systems, are steps toward being socially responsible. This includes not blindly trusting the results generated by AI. To ensure safety, security and fairness, AI systems need to be verified and validated in alignment with the objectives for which the system was designed (Korteling et al., 2021). In New Zealand, given the long history of racism toward Māori, the design and development of AI systems should feature a high degree of control by Māori (Wilson et al., 2022). This allows implementations to be fair and relevant to Māori, and serves Māori aspirations. Understanding data and algorithmic bias can further ensure AI models for healthcare (and other fields) can perform well for Māori with the hope of at least an equivalent capacity to benefit them. Tackling algorithmic bias will require working with health professionals, clinicians, representatives from underrepresented populations and active participation in developing these algorithms. This ensures a deep understanding of the clinical context is incorporated while improving modeling, acknowledging an acceptable tradeoff between the performance of the algorithm and bias (Panch et al., 2019). Furthermore, incorporating representatives from indigenous and minority populations needs to go beyond just hearing Māori and minority voices to ensure that society's needs are met (McCall, 2022). There is a need to ensure that health algorithms do not worsen health equity and indeed improve health outcomes for those populations. As such, co-authors of this research include clinical experts, machine learning experts, data scientists, and researchers of New Zealand's indigenous Māori population. It is vital to understand that all patients are stakeholders as their data are being used. Stakeholder engagement can improve research appropriateness, acceptability, rigor, feasibility, delivery, and dissemination (Maurer et al., 2022). Māori and other minority groups are important stakeholders, so it is crucial to handle such data with care (Hudson and Russell, 2009; Esmail et al., 2015; Kalkman et al., 2022). We strongly believe that any outcome of this research, either direct or indirect, should be created for good of Māori and other minority groups. Moreover, gender-diverse groups are central in this work as they are also at risk of bias. Furthermore, it is important to consider patient privacy laws and the ethics of using patient data. Privacy laws in many jurisdictions require accurate de-identification of medical documents before they can be shared outside their originating institutions or used in research. 
The ethics committee has approved the data used for this research and it is de-identified to protect patient privacy. 3. Methodology This research aims to take initial steps toward addressing the need to develop socially responsible and fair AI in healthcare for the New Zealand (NZ) population. As such, we focus on using NZ data collected by clinicians to analyze the data bias and model development bias using established techniques and fairness metrics. Figure 2 provides an overview of various aspects of this research. FIGURE 2 Figure 2. Overview of research components. This section presents an overview of the data, target questions for algorithmic developments, an overview of fairness metrics, and an outline of the experimental setup. 3.1. Data For this research, we use two New Zealand data sources collected by clinicians for various studies. The NZ-GP Harms Dataset (Leitch et al., 2021) is a collection of electronic health records (EHRs), in both tabular and text form, from New Zealand General Practices (NZ-GP) with manual annotations of experts identifying healthcare patient harms and whether the identified harm is preventable. The NZ-GDF Dataset (Asrani et al., 2021) is a collection of data, in tabular form, of patients admitted to intensive care units (ICU) in New Zealand hospitals for developing new scoring systems for gastrointestinal dysfunction/failure (GDF). 3.1.1. The NZ-GP harms dataset For experiments and development of AI models, New Zealand General Practice (GP) EHRs data (referred to as NZ-GP Harms data) from Leitch et al. (2021) is used. This data includes 3 years' worth of medical records from 44 different GP practices across NZ, collected using a stratified random sampling method to minimize data collection bias. It includes data for over 9,000 patients of various ages, gender, ethnicity, and deprivation (i.e., economic) status. The GP practices were further categorized by the size of the practice (i.e., small/medium/large) and if the location of the practice is urban or rural. The data were manually processed, annotated and verified by eight GP researchers. The definition of “Patient Harm” used in this data is: Physical or emotional negative consequences to patients directly arising from health care, beyond the usual consequences of care and not attributable ONLY to the patient's health condition. Includes: treatment delays, inconvenience and additional financial costs. The data includes various categories of free-text data, including consultation notes, medical warnings, specialist care, nutrition and other referral details, outcome summaries, and immunisations. The presence and availability of data for each category vary from patient to patient. This research considers the category “Consults” as the free-text EHRs for experiments. The free-form EHRs “Consults” includes notes from classifications, patient history and consultation summaries. Figure 3 provides an overview of the number of tokens in the free-form medical text for NZ-GP data. FIGURE 3 Figure 3. Sequence length (number of tokens) of free-form EHRs in NZ-GP Harms data for the category “Consults.” We use the NZ-GP Harms dataset for two tasks: prediction of harm and prediction of preventable harm. The study incorporates data in both tabular and free-form EHRs for 9,076 patients, where 1,505 patients experienced some level of harm. From the 1,505 patients who experienced harm, experts identified 362 patients with preventable harm. 
Table 1 provides summary statistics of selected features of the NZ-GP Harms data for both tasks. TABLE 1 Table 1. Summary of NZ-GP Harms Data for Task 1 predicting patient harm; Task 2 predicting preventable harm among patients where harm is identified; and Summary of NZ-GDF Data for Task 3 predicting hospital mortality. 3.1.2. The NZ-GDF dataset The NZ-GDF Dataset consists of a collection of data, in tabular form, for developing new scoring systems for Gastrointestinal dysfunction/failure (GDF). The GDF study was designed and conducted by healthcare experts at one of the tertiary hospitals in New Zealand with a mixed surgical-medical intensive care unit (ICU). Due to the specific eligibility criteria of patients, this study includes data across several days for only a hundred patients. All consecutive patients in the period 2019 to 2020 admitted to intensive care units were screened and recruited for the study if they met the eligibility criteria. Inclusion criteria included: • The patient's age (> 18 years) • Receiving enteral and parenteral nutrition • The ability to screen within the first 24 h of admission to ICU • The likelihood of a stay in ICU for more than 72 h Data were prospectively collected from the day of admission to ICU (Day 0) until “the day of discharge” or “after 90 days,” whichever occurred first. Patients were evaluated based on data extracted from clinical notes, daily 24-h ICU observation charts and nursing care plans. Patients were allocated with a non-identifiable study number, and all collected data variables were entered into a password-protected secure file for data coding and statistical analysis. Data extraction included baseline characteristics and demographic data, ICU treatment data, gut dysfunction and gastrointestinal scoring, and ICU and hospital outcome data. Where variables were within the normal range, the variable closest to the time of data collection was recorded. The variables collected at multiple time points were calculated for an overall mean or median value. For laboratory variables, the most abnormal value was taken for each day. The gastrointestinal scoring categories were evaluated individually and in a binary fashion for a smaller sample size. Table 1 provides summary statistics of selected features from this detailed study. For this paper, we focus on predicting hospital mortality. 3.2. Classification tasks The design of a task for which an AI model is developed can also influence the introduction of bias. This research aims to build AI models for the following tasks: Task 1: Predicting patient harm from tabular and text data Task 2: Predicting preventable harm from tabular and text data Task 3: Predicting hospital mortality from tabular data Task 1: Total number of patients with EHRs is 9,076, and the experts identified harm in 1,505 patients. Task 2: Total number of patients with identified harm is 1,505, where the experts identified preventable harm in 362 patients. Task 3: Total number of patients used for this study is 100, where patients with hospital mortality outcomes are 20. Table 1 provides a summary of a selection of variables for each tasks. Tasks 1 and 2 use the same NZ-GP Harms data, with different algorithmic designs where the number of patients will be 9,076 and 1,505, respectively. Task 3 uses the NZ-GDF dataset with a much smaller instance of 100 patients. 3.3. 
Fairness measures Many countries worldwide have anti-discrimination laws that prohibit unfair treatment of people based on specific attributes, also called sensitive attributes, such as gender or race (Zafar et al., 2017). These laws evaluate the fairness of a decision-making process in terms of disparate treatment and disparate impact. The process suffers from disparate treatment if a decision is based on sensitive attributes. Automated decision-making systems, such as AI-based systems, are often trained on historical data. Hence, if a group with a specific sensitive attribute value was unfairly treated in the past, this unfairness may persist in future predictions through indirect discrimination, leading to disparate impact (Feldman et al., 2015; Zafar et al., 2017). Fairness measurements can help identify and mitigate these biases (Rajkomar et al., 2018). In this research, we focus on group fairness instead of individual fairness, where both privileged and unprivileged groups are defined based on sensitive attributes. In general, the unprivileged group is defined based on evidence of historical injustice and is the group that will most likely face unfair decisions from a biased model. As such, for our research, we define “NZ Europeans” as the privileged group and “Māori” as the unprivileged group. For evaluating gender bias, we consider “Females” as the unprivileged group, as defined in other literature (Zafar et al., 2017). 3.3.1. Disparate impact scores Disparate impact (DI), a notion introduced in US legislation in 1971, is simply the ratio of the rates at which the outcomes occur for one group of patients over the rest (Feldman et al., 2015; Zafar et al., 2017; Besse et al., 2022). The defined tasks in this research (see Section 3.2) are binary classification problems with sensitive attributes defined for gender and race. We use the notation of Besse et al. (2022), where: • The predicted variable is Y, where Y = 1 is a positive/true outcome (i.e., the patient experienced harm) and Y = 0 is a negative/false outcome. • g(X) = Ŷ is the algorithmic prediction. • S is the sensitive variable that should not influence the decisions but could lead to discriminatory decisions. S = 0 indicates the minority that could be discriminated against, and S = 1 represents the majority. Data bias is measured using: DI(Y, S) = Pr(Y = 1 | S = 0) / Pr(Y = 1 | S = 1)    (1) Algorithmic bias is measured using: DI(g, X, S) = Pr(g(X) = 1 | S = 0) / Pr(g(X) = 1 | S = 1)    (2) US legislation introduced a 4/5 rule, where the threshold for DI(g, X, S) > τ was set at τ = 0.8. The acceptable range for the DI ratio is generally between 0.8 and 1.25. Although we use DI for both data bias and algorithmic bias measures in our research, it is essential to point out that the DI metric shows disparities in outcomes without accounting for any other factors. This assumes that we should strive for equality in the distribution of outcomes, given that all people are “created equal,” which may involve looking outside the scope of a given healthcare system alone. 3.3.2. Equal opportunity In the binary case, if we consider Y = 1 as the “advantaged” outcome, equal opportunity requires non-discrimination only within the “advantaged” outcome group (Hardt et al., 2016). Equal opportunity, or true positive rate (TPR), is defined as: TPR = TP / (TP + FN)    (3) where TP refers to true positives and FN refers to false negatives.
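To make these definitions concrete, the following is a minimal sketch of computing the data-level DI of Equation (1), the model-level DI of Equation (2), and per-group TPRs as in Equation (3). The column names, toy values and group coding are illustrative assumptions, not the study's released code.

```python
# Minimal sketch of the fairness measures in Equations (1)-(3)
# (illustrative column names and toy values; not the released study code).
import pandas as pd

def disparate_impact(outcome, privileged):
    """DI = Pr(outcome = 1 | unprivileged) / Pr(outcome = 1 | privileged)."""
    return outcome[privileged == 0].mean() / outcome[privileged == 1].mean()

def true_positive_rate(y_true, y_pred):
    """TPR = TP / (TP + FN), computed within one group."""
    tp = ((y_true == 1) & (y_pred == 1)).sum()
    fn = ((y_true == 1) & (y_pred == 0)).sum()
    return tp / (tp + fn)

# One row per patient: observed label Y, model prediction g(X), and a 0/1
# privileged flag standing in for S (e.g., 1 = NZ European, 0 = Māori).
df = pd.DataFrame({
    "harm":       [1, 0, 0, 1, 0, 1, 0, 0],
    "predicted":  [1, 0, 1, 1, 0, 0, 0, 0],
    "privileged": [1, 1, 0, 0, 1, 0, 0, 1],
})

di_data = disparate_impact(df["harm"], df["privileged"])        # Equation (1)
di_model = disparate_impact(df["predicted"], df["privileged"])  # Equation (2)
print(f"data DI = {di_data:.2f}, model DI = {di_model:.2f}")    # 0.8-1.25 is the usual band

# Equal opportunity compares the TPR of Equation (3) across groups.
for flag, group in df.groupby("privileged"):
    print("privileged =", flag, "TPR =", true_positive_rate(group["harm"], group["predicted"]))
```

The sketch follows the paper's convention that S = 0 marks the unprivileged group, so DI values below 0.8 would indicate bias against that group and values above 1.25 bias in its favour.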
Equal opportunity considers a model to be fair if the TPRs of the privileged and unprivileged groups are equal. However, this approach does not consider the needs of each population and hence may not result in equitable outcomes. Some leeway for statistical uncertainty is given in practice, where the difference is required to be less than a specific cutoff. Determination of such a cutoff comes down to governments and policies. 3.3.3. Equalized odds Equalized odds is a fairness metric that checks if, for any particular label and attribute, a classifier's predictions are equal for all values of that attribute (Hardt et al., 2016). Compared to equal opportunity, equalized odds is a stricter definition of fairness. As with equal opportunity, this definition requires that the TPRs are equal. However, equalized odds also requires that the false positive rates (FPRs) are equal. The false positive rate is defined as: FPR = FP / (FP + TN)    (4) where FP refers to false positives and TN refers to true negatives. 3.4. Understanding bias in text data The fairness metrics introduced above focus predominantly on tabular data. However, free-form medical text can also be a source of bias. Other sources of bias include, but are not limited to, bias associated with the patient's ability to access healthcare and the biases of the clinician writing in the medical record. In this research, data bias refers only to the bias in tabular and textual data. Free-form medical text includes acronyms and abbreviations. Hence, even simple changes, such as converting uppercase letters to lowercase or omitting full stops, can result in a different meaning. In this paper, the free-form text data was pre-processed by removing tokens that contain non-alphabetic characters, including all special characters, and tokens that appear in fewer than three training documents (Yogarajan, 2022). The free-form medical text in the NZ-GP Harms dataset is predominantly written in English, but includes some te reo Māori. Automatic language detection in code-switched English-Māori text is an area of ongoing research (James et al., 2022; Trye et al., 2022). For the purpose of this research, Māori language was identified manually. We consider the most frequent tokens that can incorporate bias to form a preliminary understanding of the bias in text data. In classification tasks, the frequency of tokens is used in bag-of-words (BOW) representations of text. The BOW approach is a simple yet effective method for representing text (Goldberg, 2017; Joulin et al., 2017). Furthermore, we also explore the option of using popular word embeddings to represent text (Goodfellow et al., 2016; Goldberg, 2017). However, there is evidence that pre-trained embeddings exhibit biases (Caliskan et al., 2017; Badilla et al., 2020). The Word Embedding Association Test (WEAT) (Caliskan et al., 2017) and Relative Norm Distance (RND) (Garg et al., 2018) are examples of fairness metrics designed to quantify bias in embeddings. Word Embedding Fairness Evaluation (WEFE) (Badilla et al., 2020) is a framework designed to measure fairness in word embeddings using metrics such as WEAT and RND. This paper makes use of WEFE's implementation of WEAT to analyze selected popular word embeddings, namely Google's Word2Vec model (word2vec-google-news-300), Facebook's fastText model (fasttext-wiki-news-subwords-300) and the GloVe model (glove-wiki-gigaword-300).
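As a rough illustration of how such an embedding-level check can be run, a minimal sketch follows: it loads one of the pre-trained models named above through gensim's downloader and computes the mean-cosine-similarity differences that WEAT is built on (the formal notation is given next). The target and attribute word lists are illustrative placeholders, not the study's queries, and the study itself works through the WEFE framework rather than hand-rolled code like this.

```python
# Sketch of a WEAT-style association score on pre-trained vectors
# (gensim model names as listed above; the word lists below are
#  illustrative placeholders, not the study's target/attribute sets).
import numpy as np
import gensim.downloader as api

model = api.load("glove-wiki-gigaword-300")   # or "word2vec-google-news-300",
                                              # "fasttext-wiki-news-subwords-300"

def d(word, A1, A2):
    # Mean cosine similarity to attribute set A1 minus that to A2.
    s1 = np.mean([model.similarity(word, a) for a in A1 if a in model])
    s2 = np.mean([model.similarity(word, a) for a in A2 if a in model])
    return s1 - s2

def f_weat(T1, T2, A1, A2):
    # Sum of d over target set T1 minus sum of d over target set T2.
    return (sum(d(w, A1, A2) for w in T1 if w in model)
            - sum(d(w, A1, A2) for w in T2 if w in model))

T1 = ["mother", "female", "she"]           # illustrative target set 1
T2 = ["father", "male", "he"]              # illustrative target set 2
A1 = ["nurse", "care", "clinic"]           # illustrative attribute set 1
A2 = ["engineer", "repair", "machine"]     # illustrative attribute set 2
print("F_WEAT =", f_weat(T1, T2, A1, A2))  # scores near 0 suggest no association
```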
WEFE and WEAT use similar notation, where two sets of target words T1 and T2 and two sets of attribute words A1 and A2 are defined such that a query Q is formed: Q = ({T1, T2}, {A1, A2})    (5) Given a word embedding w, with cos(w, x) the cosine similarity of two word embedding vectors, WEAT first defines the measure: d(w, A1, A2) = (mean_{x ∈ A1} cos(w, x)) − (mean_{x ∈ A2} cos(w, x))    (6) resulting in the WEAT fairness metric: F_WEAT(M, Q) = Σ_{w ∈ T1} d(w, A1, A2) − Σ_{w ∈ T2} d(w, A1, A2)    (7) If F_WEAT is positive, then target T1 is related to attribute A1 and target T2 to attribute A2; if F_WEAT is negative, target T1 is related to attribute A2 and target T2 to attribute A1. F_WEAT values are usually between ±0.5 and ±2, and the ideal score is 0. 3.5. Experimental setup For this research, we use machine learning models such as logistic regression (Bisong, 2019), XGBoost (Wang et al., 2020), and EasyEnsemble. XGBoost (Extreme Gradient Boosting) is a scalable supervised machine learning algorithm built upon decision trees, ensemble learning, and gradient boosting. EasyEnsemble (Liu et al., 2009) is an ensemble of AdaBoost learners trained on different balanced bootstrap samples, where random under-sampling is used to achieve balancing. We use sklearn implementations for these classifiers with 10-fold cross-validation. This research aims to address the need for socially responsible, fair, trustworthy AI for healthcare for NZ populations. As such, we focus more on assessing fairness measures and not on improving prediction model accuracy. The choice of the above algorithms is based on the extent of available examples in the literature that support these algorithms for healthcare applications (Hotzy et al., 2018; Luo et al., 2019; Tong et al., 2020; Yao et al., 2020; Barak-Corren et al., 2021; Peng et al., 2021; Snider et al., 2021; Wu et al., 2022). However, it is vital to point out that in this paper we avoid complicated “black-box” algorithms such as neural networks. A black-box model, in this context, can be defined as a machine learning based algorithm, such as a sophisticated neural network, which receives data input and produces outputs/predictions using thousands of parameters while the explanation for that prediction remains unknowable and untraceable (Guidotti et al., 2018). Deep-learning models and other sophisticated models will be considered in future research. We use tools such as SHAP (Shapley Additive exPlanations) (Lundberg and Lee, 2017) and LIME (Local Interpretable Model-agnostic Explanations) (Ribeiro et al., 2016) to interpret and explain model outputs. SHAP force plots are used to understand the model outputs for a given patient (referred to as local interpretability of model output). SHAP summary plots are used to understand the overall model output (i.e., global interpretability of model output). LIME is used for free-form text data. A chi-squared (χ²) test of independence is used to test if there is a significant relationship between the two classes “Harm” and “no Harm” given a token. The null hypothesis is that there is no relationship between the token in class “Harm” and class “no Harm” (a minimal computational sketch of this test is given below). 4. Experimental results and analysis This section presents an analysis of the fairness measures and bias indicators for the three tasks using the New Zealand data for both data and algorithmic bias.
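Before turning to the results, the following is a minimal sketch of the chi-squared independence test just described, applied to a single token; the contingency counts are made up for illustration, and scipy's standard implementation is assumed.

```python
# Sketch of the chi-squared independence test for one token
# (illustrative counts; not taken from the NZ-GP Harms data).
from scipy.stats import chi2_contingency

# 2x2 contingency table: documents with / without the token,
# split across the "Harm" and "no Harm" classes.
table = [[120,  880],     # Harm:    with token, without token
         [150, 7926]]     # no Harm: with token, without token

chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.1f}, p = {p_value:.3g}, dof = {dof}")
if p_value < 0.05:
    print("reject independence: token frequency differs between the classes")
else:
    print("no evidence against independence at the 5% level")
```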
In this research, we treat tabular data separate from text data. However, we acknowledge that both data types can be combined in a multi-modal setting to address the tasks. Multi-modal AI models are out of the scope of this research and will be addressed in future research. 4.1. Data bias 4.1.1. Tabular data Tables 2, 3 present DI scores for NZ-GP Harms data and NZ-GDF data for sensitive gender and ethnicity groups. For ethnicity, “NZ European” is considered the privileged group and Māori (and Pacific) are considered unprivileged groups. Table 2 indicates no bias for sensitive group gender across all three cases, where the DI scores are perfect “1.” NZ-GDF data indicates no bias toward or against Māori and Pacific populations. NZ-GP data was used for two tasks, predicting harm and predicting preventable harm. The main reason for this is to check how an algorithmic design (i.e., research task design) results in bias as indicated by Obermeyer et al. (2019). NZ-GP data was collected with extensive care by health professionals to ensure the inclusion of the NZ population (Leitch et al., 2021). This is reflected in the DI scores of task 1, where no data bias is indicated toward the Māori population. However, for task 2, where only a subset of data is used, there are clear bias indicators for both Māori and Pacific groups. TABLE 2 Table 2. Disparate Impact (DI) values of NZ-GP Harms data with targets: Harm Status and Preventable Harm Status, and NZ-GDF Data with targets: Hospital mortality. TABLE 3 Table 3. Disparate Impact (DI) values of NZ-GP Harms data with targets: Harm Status and Preventable Harm Status, and NZ-GDF Data with targets: Hospital mortality. Protected groups' ethnicity and gender are combined. Table 3 presents DI scores where both sensitive groups are combined. In this case, there is a noticeable difference in DI scores across all three tasks and definite indicators of bias. This is an observation which requires further investigation in the future. It will require a larger range of NZ health data to verify if there is a clear pattern. 4.1.2. Free-text data This section will only consider NZ-GP Harms data for task 1 to understand the impact text data has over bias. NZ-GDF data does not include free-form medical text, hence is not included. To analyze the free-form text, we examined the most frequent tokens, and a subset of tokens with possible gender or Māori reference was selected. Figure 4 provides the frequency of such tokens, which can be considered biased due to gender or ethnicity references. FIGURE 4 Figure 4. NZ-GP Harms Data (Task 1). Examples of possible tokens among the most frequent tokens where possible bias due to gender references and ethnicity tokens are presented. Māori tokens: “wairoa” means long water, “Wairoa” is also a district name; “Wanganui” is a district, city and river name; “ao” means world; “aotearoa” is the Māori name for New Zealand which can be directly translated as the land of the long white cloud; “te” means the; “waitemata” is a name of a district and habour, and it means obsidian waters. Furthermore, we use WEAT to analyze the bias in selected embeddings for a given query. Table 4 provides F WEAT scores for embeddings with selected targets and attributes from Table 5. The scores are mostly not close to ‘0', indicating some bias in the embeddings for given targets and attributes. 
For example, the negative values in T1 and T2 with respect to A2 and A3 across all three embeddings indicate the target T1 “Ethnicity” is related to attribute A3 “exercise” and T2 “Gender” to “health” with the most negative value being from GloVe embeddings. These relationships provide some understanding among a set of target words and attributes. However, there is a need to investigate domain-specific embeddings further and handle OOV words to ensure the bias, if present, toward Māori can be addressed at the embedding level before developing an AI model such as a neural network. TABLE 4 Table 4. F WEAT scores for embeddings with selected targets and attributes from Table 5. TABLE 5 Table 5. Collection of target and attribute word lists used. ∧ word lists created by us. * common/popular names obtained from google search. 4.1.3. In summary In a nutshell, in this section, we have considered data bias in both NZ-based EHRs, and both tabular and free-form text data. We use two very distinct NZ datasets, one obtained from one hospital and another from multiple GP practices, where clinicians obtain both datasets. The NZ-GP Harms data were carefully collected to limit bias. However, this does not include the bias associated with the patient's ability to access healthcare and the biases of the clinician writing in the medical record. The DI scores across the two datasets and three tasks for the protected groups provided an analysis of the tabular data. Considering the Māori patient data, when the algorithmic design included the complete datasets, there was no data bias in both datasets. However, for task 2, where only a subset of data was used, there was clear evidence of data bias. Although, in general, gender groups showed no evidence of bias, when both protected groups were combined, the observations were more interesting. Being a Māori male indicated bias scores for NZ-GP data for both tasks while being Māori female indicated bias DI scores for NZ-GDF data. Furthermore, to analyze text data, we consider the two popular methods of representing text in natural language processing tasks—BOW and word embeddings. Given that BOW is formed using the frequency of tokens, we examine the most frequent tokens to identify tokens that reference some bias indicators. These tokens analyze the algorithmic bias when BOW with logistic regression is used for predictions. We also measured F WEAT scores for the most commonly used embeddings using the WEFE framework when a selection of target tokens and attributes were used. The F WEAT scores indicated relationships among targets and attributes. The relationships among targets and attributes are a clear example of unintentional bias, and as such, it is vital to recognize such possibilities and work toward mitigating them. 4.2. Algorithmic bias 4.2.1. Tabular data We use machine learning algorithms such as logistic regression, XGBoost, and EasyEnsemble to analyze algorithmic bias across all three tasks. Although we do not emphasize or explore techniques to improve the prediction accuracy of such algorithmic models, Supplementary Table S1 presents sensitivity and specificity scores for reference. Figures 5, 6 present DI scores across three tasks with classifiers, where 10-fold cross-validation is used. DI scores are calculated at each fold for the data and the classifiers, and the box plots provide the range of DI scores. Lines are used to indicate 0.8 < DI < 1.25 for reference. 
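For orientation, the following is a minimal sketch of producing such per-fold DI scores for two of the classifiers; the array names, model settings and helper function are illustrative assumptions rather than the released study code (linked in the data availability statement), and EasyEnsemble could be added in the same way via imbalanced-learn.

```python
# Sketch of per-fold DI scores for several classifiers, as box-plotted in
# Figures 5 and 6 (illustrative names and settings; not the released code).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold
from xgboost import XGBClassifier

def disparate_impact(pred, privileged):
    # DI = Pr(pred = 1 | unprivileged) / Pr(pred = 1 | privileged)
    return pred[privileged == 0].mean() / pred[privileged == 1].mean()

def di_per_fold(X, y, s, n_splits=10):
    """X: features, y: binary harm label, s: 0/1 privileged flag (numpy arrays)."""
    models = {"LR": LogisticRegression(max_iter=1000),
              "XGBoost": XGBClassifier(eval_metric="logloss")}
    scores = {name: [] for name in models}
    skf = StratifiedKFold(n_splits=n_splits, shuffle=True, random_state=0)
    for train_idx, test_idx in skf.split(X, y):
        for name, clf in models.items():
            clf.fit(X[train_idx], y[train_idx])
            pred = clf.predict(X[test_idx])
            scores[name].append(disparate_impact(pred, s[test_idx]))
    return scores   # box-plot each list against the 0.8-1.25 reference band
```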
Using an algorithm introduces bias to data, even in cases where the data displays no signs of bias. FIGURE 5 Figure 5. NZ-GP Harms data with the target as patient Harm Status. Protected group: Gender (Top), ethnicity (Bottom). 10-cross-validation is performed using three classifiers, and the DI score is calculated at each fold. DI values of true values are also included. Each box plot ranges from minimum to maximum values of DI scores calculated at each fold. DI scores closer to 1 are better. DI < 0.8 implies bias against group (indicated by red line ), DI > 1.25 implies bias for group (indicated by blue line ). FIGURE 6 Figure 6. (Top) NZ-GP Harms data with the target as preventable Harm. (Bottom) NZ-GDF hospital mortality prediction. Protected group: Gender and Ethnicity. Ten cross-validations are performed using two classifiers, and the DI score is calculated at each fold. DI values of true values are also included. Each box plot ranges from minimum to maximum values of DI scores calculated at each fold. DI scores closer to 1 are better. DI < 0.8 implies bias against group (indicated by red line ), DI > 1.25 implies bias for group (indicated by blue line ). Table 6 provides equal opportunity and false positive rates for privileged groups. Equalized odds are when equal opportunity and FPR are expected to be the same across all groups. This means the ratio needs to be as close to “1” as possible in TPRs and FPRs. Equal opportunities are closer to “1” across all three classifiers for gender and ethnicity, while the same observation is not evident with FPRs and consequently equalized odds. For ethnicity, where the privileged group is “NZ European,” XGBoost presents the best ratio for both equal opportunity and equalized odds. In the case of gender, where the privileged group is “Male,” equal opportunity ratio across all three classifiers are approximately “1,” however FPR across all three classifiers are high with logistic regression being the worst of the three. TABLE 6 Table 6. NZ-GP Harms data. To further understand the model outputs, we use SHAP visualization tools. Figure 7 provides examples of two patient outputs from XGBoost and Logistic regression. Forces of Shapley values in blue represent the features that increase the probability of default (in this case, “Harm”). In contrast, red values represent the features incrementing the probability of prediction. Furthermore, a summary plots of XGBoost and LR is also presented as Supplementary Figure S1. The SHAP summary plot allows us to understand the model by ranking the features from the most relevant to the least important ones, where the data points represent a different patient. High values of this feature impact the output of the model (i.e., the probability of predicting harm increases). FIGURE 7 Figure 7. RED features: increase the prediction of true label; BLUE features: decrease the prediction of true label of the model. Plots (A, B): Output of a 35-year-old Māori female patient; where plot (A) XGBoost correctly classifying as “no Harm” and plot (B) logistic regression wrongly classifying as “Harm.” Plots (C, D): Output of a 53-year-old NZ European female patient; where plot (C) XGBoost wrongly classified as “no Harm” and plot (D) logistic regression correctly classified as “Harm.” In both Figures 7 and Supplementary Figure S1, the influence of “Gender” on final model predictions is evident. However, despite the DI score indicating algorithmic bias, features of “ethnicity” are far lower in the features that impact the outcome. 
One possible reason for this difference is that the sensitivity of chosen models for the tasks is not high (i.e., the models are being too cautious in finding a positive result). 4.2.2. Free-text data For Task 1, we experiment using BOW to represent free-form text with a dictionary size of 1,000 and 10,000 with logistic regression as a classifier. The sensitivity and specificity scores of BOW with logistic regression for task 1 using dictionary size 1,000 is 0.74 and 0.69 and using dictionary size 10,000 is 0.60 and 0.68, respectively. To understand the data and check if the frequency of words/tokens is significantly different between the labels, we use the chi-squared test. Using the selected tokens presented in Figure 4, the output of chi-squared test is presented in Supplementary Table S2. The p-value for gender-related tokens is 0.076 and ethnicity-related tokens is 0.51. Given both are >0.05, the null hypothesis at 95% level of confidence is not rejected. The null hypothesis was that gender or ethnicity related tokens and the harm labels are independent. Furthermore, we use both LIME and SHAP for task 1 with BOW text representations with LR using NZ-GP Harms data to understand individual predictions. Supplementary Figure S2 presents two patient output examples, where selected features that influence the outcomes and the influential features from a collection of Bias tokens (see Figure 4 for the complete list) are included. Figure 8 provides examples of LIME outputs for two patients indicating the features that influenced the algorithmic decisions on “Harm” or “no Harm.” Gender indicating the token “Mother” is visible in the example as a feature that influenced against true label (i.e., “Harm” for this example). While these examples do not display ethnic references, in the top 10 features, there is no clear indication that this may not be the case for overall predictions. FIGURE 8 Figure 8. Lime output for examples of predictions using BOW with LR using NZ-GP Harms data. (Top) True label and prediction label are both indicating “Harm.” (Bottom) True label is “Harm,” but predicted is “no Harm.” Label 1 is “Harm,” and label 0 is “no Harm.” 4.2.3. In summary We use simple machine learning algorithms –logistic regression, XGBoost, and EasyEnsemble– to model the three prediction tasks. The DI scores were used to observe the algorithmic bias, and the results show algorithmic bias even for cases with no data bias. Equal opportunity and equalized odds ratios for privileged groups indicate that XGBoost is the best model for ethnicity. For gender, although equal opportunity ratios for all three classifiers are approximately “1,” equalized odds ratios are not. For text data, there was evidence of the selected possible bias tokens from frequent tokens that showed a significant difference between the labels “Harm” and “no Harm” for BOW with logistic regression. Furthermore, SHAP and LIME were used to explain individual instances and predictions. For tabular data, while there are clear indications of algorithmic bias for ethnicity and gender, SHAP outputs only show evidence of gender-related features. Likewise, for text data, LIME outputs also show evidence of gender-related features. These observations can result from many factors, including the model choices. However, there is a need for future research to reach a conclusive decision. 5. Discussions The general results of this research show that bias was detected within the machine learning methods employed in this study. 
5. Discussions The general results of this research show that bias was detected within the machine learning methods employed in this study. As expected, there were two sources of bias: the data and the algorithms used to predict preventable harm. Within the tabular data, DI scores indicated biases surrounding different ethnic groups; however, the favored groups changed depending on the target measure (Tables 2, 3). No biases were detected for the gender groups collected in these samples. There may also be bias indicated by the frequency of terms used in the free-text data, although the implications are currently unclear (Figure 4). Algorithmic bias is harder to summarize, as there was an apparent effect of the subset of the data the model was trained on (Figures 5, 6): the DI scores produced by training on one subset indicated bias in favor of the target group, while those from training on another subset indicated bias against that same group. These results suggest that the data itself can heavily influence algorithmic bias. A broader research question is how data and algorithms interact to produce bias. Furthermore, the DI scores for the prediction models and the explainable-AI analyses of the model predictions do not provide similar observations. While this may be due to the selection and accuracy of the algorithmic models, there is a need to explore other factors. What if the techniques needed to analyze bias or equity measures are themselves subject to variation across populations (i.e., NZ vs. the USA)? 5.1. Possible introduction of bias Using real-world datasets introduces challenges such as missing data and small datasets. Standard practices for dealing with missing data include leaving out records with missing variables and imputing missing values. Removing data with missing values results in information loss and can also create unfair/biased data (Farhangfar et al., 2007; Madhu and Rajinikanth, 2012; Madhu et al., 2019; Ipsen et al., 2022). The magnitude of potential data biases will depend on the percentage of removed data relative to the complete data and on the distribution changes in protected groups relative to the privileged group. Imputation of missing values is generally based on correlations with other features, and the resulting dataset can include biases (Madhu et al., 2019; Ipsen et al., 2022). The NZ-GP Harms dataset includes a small subset of missing data, which was dealt with by removing the affected records. Future work using this dataset will attempt to quantify any possible introduction of bias. The NZ-GDF dataset did not include any missing data; however, the sample size is small (100 patients). The main challenge with small data is over-fitting, which can potentially cause data and algorithmic biases. This paper uses simple machine learning models and 10-fold cross-validation to minimize over-fitting. Data augmentation is one of the most common approaches to dealing with small datasets, and there are examples in the literature showing that data augmentation techniques can help reduce data bias (Jaipuria et al., 2020; Sharma et al., 2020). Exploring data augmentation techniques that handle small datasets while minimizing data and model bias is a future research avenue. 5.2. Implications for New Zealand healthcare This research used NZ EHRs to analyze data and algorithmic bias. We considered two tasks for the NZ-GP Harms data. The algorithmic design was such that predicting harm used all available data, while predicting preventable harm used a subset of the data. Despite the efforts toward unbiased data collection, we notice bias in predicting preventable harm. This emphasizes the need to consider the potential ways in which the data will be used, so that unintentional bias can be eliminated.
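A simple, reproducible check related to the missing-data discussion in section 5.1 is whether removing records with missing values shifts the representation of protected groups. The sketch below is illustrative only; the DataFrame `df` and the "ethnicity" column name are hypothetical placeholders rather than the study's schema.

```python
# Illustrative sketch: does listwise deletion of incomplete records change the protected-group mix?
# Assumes a pandas DataFrame `df` with a hypothetical "ethnicity" column and some missing values.
import pandas as pd

before = df["ethnicity"].value_counts(normalize=True)
after = df.dropna()["ethnicity"].value_counts(normalize=True)
report = pd.DataFrame({"before": before, "after": after})
report["shift"] = report["after"] - report["before"]
print(report.sort_values("shift"))
# A clearly negative shift for a protected group means listwise deletion disproportionately
# drops that group's records, which is itself a potential new source of data bias.
```

A comparable check can be run after imputation, since imputed values are derived from correlations with other features and may likewise shift group-level distributions.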
Furthermore, we provide evidence that using commonly available pre-trained embeddings to represent text data also acts as a new source of bias. While this was not intentional at the data collection or model development stages, these observations suggest a need for further analysis and raise questions about the current employment of AI in NZ healthcare. Are changes necessary to adapt these generic measures and thresholds to the New Zealand context? Do these models adequately represent Māori data specifically suited for the task? If not, do they risk worsening existing inequities, even if developed and used with the best intentions? If used to guide clinical decision-making or resource allocation, would they entrench systemic or societal biases under the guise of impartiality? This research analyzed fairness metrics for NZ EHRs while considering data and algorithmic bias. We provided an overview of reproducible concepts that can be adapted to any NZ population data. Furthermore, we discussed the gaps and future research avenues that will enable more focused development of fairness measures suited to the New Zealand population's needs and social structure. A primary focus of this research was ensuring fair inclusion. As such, we combined expertise in AI, clinical knowledge, and the representation of indigenous populations. This inclusion of experts will be vital moving forward, providing a stepping stone toward the integration of AI for better outcomes in healthcare. Figure 1 provided an overview of the AI/ML cycle, whose two primary stages are building a model and model application. We believe an ideal AI/ML cycle should include an in-between stage in which data and model bias are identified and mitigated before the model is applied. Hence, the modified AI/ML cycle would be: Step 1: Building a model; Step 2: Identifying and mitigating data and model bias; Step 3: Model application. Steps 1 and 2 will require multiple iterations to ensure a satisfactory outcome before reaching Step 3. Step 2 will require human input, i.e., from the experts mentioned above, to ensure data and model bias are identified and mitigated. Moreover, this inclusion will enable the threshold changes required to satisfy the NZ population's needs. Future research will include extensive discussions, and possibly interviews or studies, with other experts in the field to help understand the real-world implications, validate and improve these measures, and lead to ways to mitigate biases across data collection, model training, evaluation, and interpretation. Developing socially responsible, fair machine learning algorithms to improve the healthcare outcomes of underrepresented and indigenous populations is a vital avenue of research, with a substantial impact on the equality of healthcare across the world. In New Zealand, the ongoing health system reforms and the inclusion of a new Māori health authority are expected to provide opportunities and platforms for health equity. Data availability statement The data analyzed in this study is subject to the following licenses/restrictions: anonymised datasets may be made available on reasonable request from the corresponding authors. Code can be found at https://github.com/vithyayogarajan/HealthAI-Bias-Data-Model-NZ. Requests to access these datasets should be directed to sharon.leitch@otago.ac.nz.
Ethics statement The studies involving human participants were reviewed and approved by the University of Otago Human Ethics Committee and the University of Auckland Ethics Committee. Written informed consent from the participants' legal guardian/next of kin was not required to participate in this study, in accordance with the national legislation and the institutional requirements. Author contributions VY was the lead researcher of this study and was responsible for coordinating and collaborating with the listed co-authors, as well as for the code, validation, experiments, development of the research goals/questions, writing the original draft, and reviewing and editing. In addition to VY, the other AI research experts include GD, JB, and MW. GD contributed toward research goal development and reviewing and editing. JB also contributed toward writing, reviewing, and editing. The NZ clinical experts are SL, VA, and DR. SL was the clinical coordinator of the NZ-GP Harms study and as such was responsible for data and ethics, and contributed toward understanding the healthcare implications for the NZ population, as well as reviewing and editing. DR also contributed toward reviewing and editing. VA was the clinical coordinator of the NZ-GDF study, hence responsible for data and ethics, and also helped with reviewing and editing. The Māori research academics in this paper are TK and JB. TK is an expert in te ao Māori and also contributed toward reviewing and editing. All authors contributed to the article and approved the submitted version. Funding The original data collection studies were funded by the Health Research Council of New Zealand. Frontiers of Computer Science ISSN 2095-2236 is included in the 2022 Read and Publish agreement with the University of Otago, as an associate member of the CAUL (Council of Australian University Libraries) Procurement consortium. Acknowledgments We thank Dr. David Welch for reviewing this paper and for his constructive suggestions and comments. VY would like to thank the Strong AI Lab for the ongoing support and the University of Otago for the computer resources. Conflict of interest The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest. Publisher's note All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher. Supplementary material The Supplementary Material for this article can be found online at: https://www.frontiersin.org/articles/10.3389/fcomp.2022.1070493/full#supplementary-material Supplementary Figure S1. Summary plots. XGBoost (Left) and logistic regression (Right). Here "NConsultations" refers to the number of consultations; "NoUniqueMedsRx" refers to the number of unique medications prescribed; the size of the medical practice is given by "Large," "Medium," and "Small"; location by "Urban" and "Rural"; and race includes "NZ Europeans," "Māori," "Pasifika," and "Others." Supplementary Figure S2. BOW with LR using the NZ-GP Harms data. Selected features that influence the final outcomes are presented, including the most prominent features and the influential features from the collection of bias tokens.
(Left) Wrongly predicted as "no Harm" when the true label is "Harm," with an outcome of a NZ European patient death. (Right) Correctly labeled as "Harm" for a Māori patient. The bottom of the plot starts at the expected value of the model output, and each row then shows how the positive (red) or negative (blue) contribution of each feature moves the value from the expected model output to the prediction. Supplementary Table S1. Tasks 1, 2, and 3 classification. 10-fold cross-validation is performed using various classifiers. Supplementary Table S2. NZ-GP Harms data. Using the chi-squared test, gender-related and ethnicity-related bias tokens from Figure 4 are tested between the two classes "no Harm" and "Harm." For gender-related tokens, the chi-squared statistic is 9.97, the p-value 0.076, and the degrees of freedom 5. The expected contingency table is also presented. For ethnicity-related tokens, the chi-squared statistic is 2.38, the p-value 0.51, and the degrees of freedom 3.
References
Badilla, P., Bravo-Marquez, F., and Pérez, J. (2020). "WEFE: the word embeddings fairness evaluation framework," in IJCAI (Yokohama), 430–436.
Besse, P., del Barrio, E., Gordaliza, P., Loubes, J.-M., and Risser, L. (2022). A survey of bias in machine learning through the prism of statistical parity. Am. Stat. 76, 188–198. doi: 10.1080/00031305.2021.1952897
Bisong, E. (2019). "Logistic regression," in Building Machine Learning and Deep Learning Models on Google Cloud Platform (Ottawa, ON: Springer), 243–250.
Carvalho, D. V., Pereira, E. M., and Cardoso, J. S. (2019). Machine learning interpretability: a survey on methods and metrics. Electronics 8, 832. doi: 10.3390/electronics8080832
Celi, L. A., Cellini, J., Charpignon, M.-L., Dee, E. C., Dernoncourt, F., Eber, R., et al. (2022). Sources of bias in artificial intelligence that perpetuate healthcare disparities–a global review. PLoS Digit. Health 1, e0000022. doi: 10.1371/journal.pdig.0000022
Curtis, E., Jones, R., Tipene-Leach, D., Walker, C., Loring, B., Paine, S.-J., et al. (2019). Why cultural safety rather than cultural competency is required to achieve health equity: a literature review and recommended definition. Int. J. Equity Health 18, 1–17. doi: 10.1186/s12939-019-1082-3
Farhangfar, A., Kurgan, L. A., and Pedrycz, W. (2007). A novel framework for imputation of missing values in databases. IEEE Trans. Syst. Man Cybern. A Syst. Hum. 37, 692–709. doi: 10.1109/TSMCA.2007.902631
Feldman, M., Friedler, S. A., Moeller, J., Scheidegger, C., and Venkatasubramanian, S. (2015). "Certifying and removing disparate impact," in Proceedings of the 21th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (Sydney, NSW), 259–268.
Fletcher, R. R., Nakeshimana, A., and Olubeko, O. (2021). Addressing fairness, bias, and appropriate use of artificial intelligence and machine learning in global health. Front. Artif. Intell. 3, 561802. doi: 10.3389/frai.2020.561802
Forde, J. Z., Cooper, A. F., Kwegyir-Aggrey, K., De Sa, C., and Littman, M. (2021). "Model selection's disparate impact in real-world deep learning applications," in Science and Engineering of Deep Learning Workshop, ICLR 2021.
Goebel, R., Chander, A., Holzinger, K., Lecue, F., Akata, Z., Stumpf, S., et al. (2018). "Explainable AI: the new 42?" in International Cross-domain Conference for Machine Learning and Knowledge Extraction (Hamburg: Springer), 295–303.
Goldberg, Y. (2017). Neural network methods for natural language processing. Synthesis Lect. Hum. Lang. Technol. 10, 1–309. doi: 10.1007/978-3-031-02165-7
Goodfellow, I., Bengio, Y., Courville, A., and Bengio, Y. (2016). Deep Learning. Cambridge: MIT Press.
Gudivada, V., Apon, A., and Ding, J. (2017). Data quality considerations for big data and machine learning: going beyond data cleaning and transformations. Int. J. Adv. Software 10, 1–20.
Guidotti, R., Monreale, A., Ruggieri, S., Turini, F., Giannotti, F., and Pedreschi, D. (2018). A survey of methods for explaining black box models. ACM Comput. Surveys 51, 1–42. doi: 10.1145/3236009
Hamon, R., Junklewitz, H., and Sanchez, I. (2020). Robustness and Explainability of Artificial Intelligence. Luxembourg: Publications Office of the European Union.
Hardt, M., Price, E., and Srebro, N. (2016). "Equality of opportunity in supervised learning," in Advances in Neural Information Processing Systems, Vol. 29 (Barcelona).
Hotzy, F., Theodoridou, A., Hoff, P., Schneeberger, A. R., Seifritz, E., Olbrich, S., et al. (2018). Machine learning: an approach in identifying risk factors for coercion compared to binary logistic regression. Front. Psychiatry 9, 258. doi: 10.3389/fpsyt.2018.00258
Howard, F. M., Dolezal, J., Kochanny, S., Schulte, J., Chen, H., Heij, L., et al. (2021). The impact of site-specific digital histology signatures on deep learning model accuracy and bias. Nat. Commun. 12, 1–13. doi: 10.1038/s41467-021-24698-1
Hudson, M. L., and Russell, K. (2009). The Treaty of Waitangi and research ethics in Aotearoa. J. Bioeth. Inq. 6, 61–68. doi: 10.1007/s11673-008-9127-0
Ipsen, N., Mattei, P.-A., and Frellsen, J. (2022). "How to deal with missing data in supervised deep learning?" in ICLR 2022-10th International Conference on Learning Representations.
Jaipuria, N., Zhang, X., Bhasin, R., Arafa, M., Chakravarty, P., Shrivastava, S., et al. (2020). "Deflating dataset bias using synthetic data augmentation," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (Seattle, WA: IEEE), 772–773.
James, J., Yogarajan, V., Shields, I., Watson, C., Keegan, P., Mahelona, K., et al. (2022). "Language models for code-switch detection of te reo Māori and English in a low-resource setting," in Findings of the Association for Computational Linguistics: NAACL 2022 (Seattle, WA: Association for Computational Linguistics), 650–660.
Joulin, A., Grave, É., Bojanowski, P., and Mikolov, T. (2017). "Bag of tricks for efficient text classification," in Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 2, Short Papers (Valencia), 427–431.
Kalkman, S., van Delden, J., Banerjee, A., Tyl, B., Mostert, M., and van Thiel, G. (2022). Patients' and public views and attitudes towards the sharing of health data for research: a narrative review of the empirical evidence. J. Med. Ethics 48, 3–13. doi: 10.1136/medethics-2019-105651
Koene, A., Dowthwaite, L., and Seth, S. (2018). "IEEE P7003™ standard for algorithmic bias considerations: work in progress paper," in Proceedings of the International Workshop on Software Fairness (Gothenburg), 38–41.
Leitch, S., Dovey, S., Cunningham, W., Wallis, K., Eggleton, K., Lillis, S., et al. (2021). Epidemiology of healthcare harm in New Zealand general practice: a retrospective records review study. BMJ Open 11, e048316. doi: 10.1136/bmjopen-2020-048316
Lundberg, S. M., and Lee, S.-I. (2017). "A unified approach to interpreting model predictions," in Advances in Neural Information Processing Systems 30, eds I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett (Curran Associates, Inc.), 4765–4774.
Madhu, G., Bharadwaj, B. L., Nagachandrika, G., and Vardhan, K. S. (2019). "A novel algorithm for missing data imputation on machine learning," in 2019 International Conference on Smart Systems and Inventive Technology (ICSSIT) (Tirunelveli: IEEE), 173–177.
Madhu, G., and Rajinikanth, T. (2012). "A novel index measure imputation algorithm for missing data values: a machine learning approach," in 2012 IEEE International Conference on Computational Intelligence and Computing Research (Coimbatore: IEEE), 1–7.
Maurer, M., Mangrum, R., Hilliard-Boone, T., Amolegbe, A., Carman, K. L., Forsythe, L., et al. (2022). Understanding the influence and impact of stakeholder engagement in patient-centered outcomes research: a qualitative study. J. Gen. Intern. Med. 37, 6–13. doi: 10.1007/s11606-021-07104-w
Mehrabi, N., Morstatter, F., Saxena, N., Lerman, K., and Galstyan, A. (2021). A survey on bias and fairness in machine learning. ACM Comput. Surveys 54, 1–35. doi: 10.1145/3457607
Nathans, L. L., Oswald, F. L., and Nimon, K. (2012). Interpreting multiple linear regression: a guidebook of variable importance. Pract. Assess. Res. Evaluat. 17, n9. doi: 10.7275/5fex-b874
Norori, N., Hu, Q., Aellen, F. M., Faraci, F. D., and Tzovara, A. (2021). Addressing bias in big data and AI for health care: a call for open science. Patterns 2, 100347. doi: 10.1016/j.patter.2021.100347
Obermeyer, Z., Powers, B., Vogeli, C., and Mullainathan, S. (2019). Dissecting racial bias in an algorithm used to manage the health of populations. Science 366, 447–453. doi: 10.1126/science.aax2342
Paleyes, A., Urma, R.-G., and Lawrence, N. D. (2022). Challenges in deploying machine learning: a survey of case studies. ACM Comput. Surv. 55, 29. doi: 10.1145/3533378
Peng, J., Jury, E. C., Dönnes, P., and Ciurtin, C. (2021). Machine learning techniques for personalised medicine approaches in immune-mediated chronic inflammatory diseases: applications and challenges. Front. Pharmacol. 12, 720694. doi: 10.3389/fphar.2021.720694
Ribeiro, M. T., Singh, S., and Guestrin, C. (2016). ""Why should I trust you?" Explaining the predictions of any classifier," in Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (San Francisco), 1135–1144.
Rüping, S. (2006). Learning Interpretable Models (Ph.D. thesis). University of Dortmund.
Sharma, S., Zhang, Y., Ríos Aliaga, J. M., Bouneffouf, D., Muthusamy, V., and Varshney, K. R. (2020). "Data augmentation for discrimination prevention and bias disambiguation," in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society (New York, NY), 358–364.
Shin, D. (2021). The effects of explainability and causability on perception, trust, and acceptance: implications for explainable AI. Int. J. Hum. Comput. Stud. 146, 102551. doi: 10.1016/j.ijhcs.2020.102551
Smith, A. L., Chaudhuri, A., Gardner, A., Gu, L., Salem, M. B., and Lévesque, M. (2018). "Regulatory frameworks relating to data privacy and algorithmic decision making in the context of emerging standards on algorithmic bias," in NIPS Conference Workshop on Ethical, Social and Governance Issues in AI (Montreal, QC).
Snider, B., McBean, E. A., Yawney, J., Gadsden, S. A., and Patel, B. (2021). Identification of variable importance for predictions of mortality from COVID-19 using AI models for Ontario, Canada. Front. Public Health 9, 675766. doi: 10.3389/fpubh.2021.759014
Tong, Z., Liu, Y., Ma, H., Zhang, J., Lin, B., Bao, X., et al. (2020). Development, validation and comparison of artificial neural network models and logistic regression models predicting survival of unresectable pancreatic cancer. Front. Bioeng. Biotechnol. 8, 196. doi: 10.3389/fbioe.2020.00196
Trye, D., Yogarajan, V., Konig, J., Keegan, T., Bainbridge, D., and Apperley, M. (2022). "A hybrid architecture for labelling bilingual Māori-English tweets," in Findings of the Association for Computational Linguistics: AACL-IJCNLP 2022 (Association for Computational Linguistics).
Wang, C., Deng, C., and Wang, S. (2020). Imbalance-XGBoost: leveraging weighted and focal losses for binary label-imbalanced classification with XGBoost. Pattern Recognit. Lett. 136, 190–197. doi: 10.1016/j.patrec.2020.05.035
Widmer, G., and Kubat, M. (1996). Learning in the presence of concept drift and hidden contexts. Mach. Learn. 23, 69–101. doi: 10.1007/BF00116900
Wilson, D., Tweedie, F., Rumball-Smith, J., Ross, K., Kazemi, A., Galvin, V., et al. (2022). Lessons learned from developing a COVID-19 algorithm governance framework in Aotearoa New Zealand. J. R. Soc. N. Z. 1–13. doi: 10.1080/03036758.2022.2121290
Wu, Y., Zhang, Q., Hu, Y., Sun-Woo, K., Zhang, X., Zhu, H., et al. (2022). Novel binary logistic regression model based on feature transformation of XGBoost for type 2 Diabetes Mellitus prediction in healthcare systems. Fut. Generat. Comput. Syst. 129, 1–12. doi: 10.1016/j.future.2021.11.003
Yogarajan, V. (2022). Domain-specific language models for multi-label classification of medical text (Ph.D. thesis). The University of Waikato.
Zafar, M. B., Valera, I., Rogriguez, M. G., and Gummadi, K. P. (2017). "Fairness constraints: mechanisms for fair classification," in Artificial Intelligence and Statistics (Ft. Lauderdale, FL: PMLR), 962–970.
2022-12-22T00:00:00
2022/12/22
https://www.frontiersin.org/journals/computer-science/articles/10.3389/fcomp.2022.1070493/full
[ { "date": "2022/12/01", "position": 22, "query": "AI healthcare" } ]
The role of artificial intelligence in industries such as ...
The role of artificial intelligence in industries such as healthcare, finance, and transportation
https://medium.com
[ "Gustavo Bodra" ]
The healthcare industry, for example, is using AI to improve the accuracy and efficiency of medical diagnosis and treatment, while the financial industry is ...
Artificial intelligence (AI) has the potential to revolutionize a wide range of industries, from healthcare to finance to transportation. In this article, we will explore the ways in which AI is being used in these sectors and the potential benefits and ethical considerations that come with the adoption of this technology. The healthcare industry, for example, is using AI to improve the accuracy and efficiency of medical diagnosis and treatment, while the financial industry is leveraging AI to automate and optimize processes such as risk assessment and fraud detection. In the transportation sector, AI is being used to develop self-driving cars and intelligent transportation systems, with the potential to greatly improve safety and efficiency. However, the use of AI in these industries also raises important ethical considerations, including the potential for bias and job displacement. We will delve into these issues and more in this comprehensive look at the role of artificial intelligence in industries such as healthcare, finance, and transportation. How AI is being used to improve the efficiency and accuracy of healthcare diagnosis and treatment The use of artificial intelligence (AI) in healthcare has the potential to significantly improve the accuracy and efficiency of medical diagnosis and treatment. For example, AI algorithms can analyze vast amounts of patient data to identify patterns and predict outcomes, helping doctors make more informed decisions. In addition, AI can be used to automate certain tasks, such as analyzing medical images or processing electronic health records, freeing up doctors and nurses to focus on more important tasks. By leveraging the power of AI, healthcare professionals can provide better care to their patients while also reducing the risk of errors. The role of AI in automating and optimizing financial processes, such as risk assessment and fraud detection The financial industry is one of the key areas where AI is being adopted, with many companies using machine learning algorithms to automate and optimize various processes. For example, AI can be used to assess risk, detect fraudulent activity, and optimize trading strategies. By analyzing large amounts of data, AI algorithms can identify patterns and make predictions that would be impossible for a human to discern. In addition, the use of AI in finance can help reduce the need for manual labor, freeing up employees to focus on more complex tasks. The potential of AI to revolutionize transportation, including self-driving cars and intelligent transportation systems The potential for AI to revolutionize transportation is vast, with many companies working on developing self-driving cars and intelligent transportation systems. Self-driving cars, for example, could significantly reduce the number of accidents on the road by eliminating human error, while intelligent transportation systems could improve traffic flow and reduce congestion. In addition, AI could be used to optimize routes and reduce fuel consumption, making transportation more efficient and environmentally friendly. The use of AI in transportation has the potential to greatly improve the safety and efficiency of the industry. The ethical considerations surrounding the use of AI in these industries include issues of bias and job displacement.
As with any technology, the use of AI in industries such as healthcare, finance, and transportation raises a number of ethical considerations. One of the key concerns is the potential for bias in AI algorithms, which can perpetuate and amplify existing societal biases. In addition, the increasing use of AI in these industries could lead to job displacement, as certain tasks are automated. It is important for companies and policymakers to consider these ethical issues when adopting and implementing AI, and to take steps to address them.
2022-12-18T00:00:00
2022/12/18
https://medium.com/ai-revolution-transforming-the-way-we-live-and/the-role-of-artificial-intelligence-in-industries-such-as-healthcare-finance-and-transportation-e81633185a6a
[ { "date": "2022/12/01", "position": 29, "query": "AI healthcare" } ]
The Virtues of Interpretable Medical Artificial Intelligence
The Virtues of Interpretable Medical Artificial Intelligence
https://www.cambridge.org
[ "Joshua Hatherley", "Robert Sparrow", "Mark Howard" ]
by J Hatherley · Cited by 46 — We argue that, in some cases, a less accurate but more interpretable AI may have better effects on patient health outcomes than a “black box” model with ...
Introduction Deep-learning artificial intelligence (AI) systems have demonstrated impressive performance across a variety of clinical tasks, including diagnosis, risk prediction, triage, mortality prediction, and treatment planning.Footnote 1,2 A problem, however, is that the inner workings of these systems have often proven thoroughly resistant to understanding, explanation, or justification, not only to end-users (e.g., doctors, clinicians, and nurses) but also to the designers of these systems themselves. Such AI systems are commonly described as “opaque,” “inscrutable,” or “black boxes.” The initial response to this problem in the literature was a demand for “explainable AI.” However, recently, several authors have suggested that making AI more explainable or “interpretable” is likely to be achieved at the cost of the accuracy of these systems and that a preference for explainable systems over more accurate AI is ethically indefensible in the context of medicine.Footnote 3,4 In this article, we defend the value of interpretability in the context of the use of AI in medicine. We point out that clinicians may prefer interpretable systems over more accurate black boxes, which in turn is sufficient to give designers of AI reason to prefer more interpretable systems in order to ensure that AI is adopted and its benefits realized. Moreover, clinicians may themselves be justified in this preference. Medical AI should be analyzed as a sociotechnical system, the performance of which is as much a function of how people respond to AI as it is of the outputs of the AI. Securing the downstream therapeutic benefits from diagnostic and prognostic systems is critically dependent on how the outputs of these systems are interpreted by physicians and received by patients. Prioritizing accuracy over interpretability overlooks the various human factors that could interfere with downstream benefits to patients. We argue that, in some cases, a less accurate but more interpretable AI may have better effects on patient health outcomes than a “black box” model with superior accuracy, and suggest that a preference for the use of a highly accurate black box AI systems, over less accurate but more interpretable systems, may itself constitute a form of lethal prejudice that may diminish the benefits of AI to patients—and perhaps even harm them. 
The Black Box Problem in Medical AI Recent advances in artificial AI and machine learning (ML) have significant potential to improve the current practice of medicine through, for instance, enhancing physician judgment, reducing medical error, improving the accessibility of medical care, and improving patient health outcomes.Footnote 5,6,7 Many advanced ML AI systems have demonstrated impressive performance in a wide variety of clinical tasks, including diagnosis,Footnote 8 risk prediction,Footnote 9 mortality prediction,Footnote 10 and treatment planning.Footnote 11 In emergency medicine, medical AI systems are being investigated to assist in the performance of diagnostic tasks, outcome prediction, and clinical monitoring.Footnote 12 A problem, however, is that ML algorithms are notoriously opaque, “in the sense that if one is a recipient of the output of the algorithm […], rarely does one have any concrete sense of how or why a particular classification has been arrived at from inputs.”Footnote 13 This can occur for a number of reasons, including a lack of relevant technical knowledge on the part of the user, corporate or government concealment of key elements of an AI system, or at the deepest level, a cognitive mismatch between the demands of human reasoning and the technical approaches to mathematical optimization in high-dimensionality that are characteristic of ML.Footnote 14 Joseph Wadden suggests that the black box problem “occurs whenever the reasons why an AI decision-maker has arrived at its decision are not currently understandable to the patient or those involved in the patient’s care because the system itself is not understandable to either of these agents.”Footnote 15 A variety of related concerns have been raised over the prospect of black box clinical decision support systems being operationalized in clinical medicine. Some authors worry that human physicians may act on the outputs of black box medical AI without a clear understanding of the reasons behind them,Footnote 16 or that opacity may conceal erroneous inferences or algorithmic biases that could jeopardize patient health and safety.Footnote 17,18,19 Others are concerned that opacity could interfere with the allocation of moral responsibility or legal liability in the instance that patient harm results from accepting and acting upon the outputs of a black box medical AI system,Footnote 20,21,22 or that the use of black-box medical AI systems may undermine the accountability that healthcare practitioners accept for AI-related medical error.Footnote 23 Still, others are concerned that black box medical AI systems cannot, will not, and perhaps ought not to be trusted by doctors or patients.Footnote 24,25,26 These concerns are especially acute in the context of emergency medicine, where decisions need to be made quickly and coordinated across teams of multispecialist practitioners. Responding to these concerns, some authors have argued that medical AI systems will need to be “interpretable,” “explainable,” or “transparent” in order to be responsibly utilized in safety-critical medical settings and overcome these various challenges.Footnote 27,28,29 The Case for Accuracy Recently, however, some authors have argued that opacity in medical AI is not nearly as problematic as critics have suggested, and that the prioritization of interpretable over black box medical AI systems may have several ethically unacceptable implications. 
Critics have advanced two distinct arguments against the prioritization of interpretability in medical AI. First, some authors have highlighted parallels between the opacity of ML models and the opacity of a variety of commonplace medical interventions that are readily accepted by both doctors and patients. As Eric Topol has noted, “[w]e already accept black boxes in medicine. For example, electroconvulsive therapy is highly effective for severe depression, but we have no idea how it works. Likewise, there are many drugs that seem to work even though no one can explain how.”Footnote 30 The drugs that Topol is referring to here include aspirin, which, as Alex John London notes, “modern clinicians prescribed […] as an analgesic for nearly a century without understanding the mechanism through which it works,” along with lithium, which “has been used as a mood stabilizer for half a century, yet why it works remains uncertain.”Footnote 31 Other authors have also highlighted acetaminophen and penicillin, which “were in widespread use for decades before their mechanism of action was understood,” along with selective serotonin reuptake inhibitors, whose underlying causal mechanism is still unclear.Footnote 32 Still, others have highlighted that the opacity of black-box AI systems is largely identical to the opacity of other human minds, and in some respects even one’s own mind. For instance, John Zerilli and coauthors observe that “human agents are […] frequently mistaken about their real (internal) motivations and processing logic, a fact that is often obscured by the ability of human decision-makers to invent post hoc rationalizations.”Footnote 33 According to some, these similarities imply that clinicians ought not to be anymore concerned about opacity in AI than they are about the opacity of their colleagues’ recommendations, or indeed the opacity of their own internal reasoning processes.Footnote 34 This first argument is a powerful line of criticism of accounts that hold that we should entirely abjure the use of opaque AI systems. However, it leaves open the possibility that, as we shall argue below, interpretable systems have distinct advantages that justify our preferring them. The second argument assumes—as does much of the AI and ML literature—that there is an inherent trade-off between accuracy and interpretability (or explainability) in AI systems. 
In their 2016 announcement of the “Explainable AI (XAI)” project, for instance, the United States Defence Advanced Research Projects Agency claims that “[t]here is an inherent tension between ML performance (predictive accuracy) and explainability; often the highest performing methods (e.g., deep learning) are the least explainable, and the most explainable (e.g., decision trees) are less accurate.”Footnote 35 Indeed, attempts to enhance our understanding of AI systems through the pursuit of intrinsic or ex ante interpretability (for instance, by restricting the size of the model, implementing “interpretability constraints,” or using simpler, rule-based classifiers over more complex deep neural networks) are often observed to result in compromises to the accuracy of a model.Footnote 36,37,38 In particular, the development of a high-performing AI system entails an unavoidable degree of complexity that often interferes with how intuitive and understandable the operations of these systems are in practice.Footnote 39 Consequently, some authors suggest that prioritizing interpretability over accuracy in medical AI has the ethically troubling consequence of compromising the accuracy of these systems, and subsequently, the downstream benefits of these systems for patient health outcomes.Footnote 40,41 Alex London has suggested that “[a]ny preference for less accurate models—whether computational systems or human decision-makers—carries risks to patient health and welfare. Without concrete assurance that these risks are offset by the expectation of additional benefits to patients, a blanket preference for simpler models is simply a lethal prejudice.”Footnote 42 According to London, when we are patients, it is more important to us that something works than that our physician knows precisely how or why it works.Footnote 43 Indeed, this claim appears to have been corroborated by a recent citizen jury study, which found that participants were less likely to value interpretability over accuracy in healthcare settings compared to non-healthcare settings.Footnote 44 London thus concludes that the trade-off between accuracy and interpretability in medical AI ought therefore to be resolved in favor of accuracy. The Limits of Post Hoc Explanation One popular response to these concerns is to hope that improvements in post hoc explanation methods could enhance the interpretability of medical AI systems without compromising their accuracy.Footnote 45 Rather than pursuing ex ante or intrinsic interpretability, post hoc explanation methods attempt to extract explanations of various sorts from black-box medical AI systems on the basis of their previous decision records.Footnote 46,47,48 In many cases, this can be achieved without altering the original, black-box model, either by affixing a secondary explanator to the original model or by replicating its statistical function and overall performance through interpretable methods.Footnote 49 The range of post hoc explanation methods is expansive, and it is beyond the scope of this article to review them all here. 
However, some key examples of post hoc explanation methods include sensitivity analysis, prototype selection, and saliency masks.Footnote 50 Sensitivity analysis involves “evaluating the uncertainty in the outcome of a black box with respect to different sources of uncertainty in its inputs.”Footnote 51 For instance, a model may return an output with a confidence interval of 0.3, indicating that it has produced this output with low confidence, with the aim of reducing the strength of a user’s credence. Prototype selection involves returning, in conjunction with the output, an example case that is as similar as possible to the case that has been entered into the system, with the aim of illuminating some of the criteria according to which output was generated. For instance, suppose a medical AI system, such as IDx-DR,Footnote 52 was to diagnose a patient with diabetic retinopathy from an image of their retina. A prototype selection explanator might produce, in conjunction with the model’s classification, a second example image that is most similar to the original case, in an attempt to illustrate important elements in determining its output. Lastly, saliency masks highlight certain words, phrases, or areas of the image that were most influential in determining a particular output. Post hoc explanation methods have demonstrated some potential to minimize some of the concerns of the critics of opacity discussed in section “The Black Box Problem in Medical AI,” while also side-stepping the objections of critics of interpretability discussed in section “The Case for Accuracy.” However, post hoc explanation methods also suffer from a number of significant limitations, which preclude them from entirely resolving this debate. First, the addition of post hoc explanation methods to “black box” ML systems adds another layer of uncertainty to the evaluation of their outputs and inner workings. Post hoc explanations can only offer an approximation of the computations of a black-box model, meaning that it may be unclear how the explanator works, how faithful it is to the model, and why its outputs or explanations ought to be accepted.Footnote 53,54,55 Second, and relatedly, such explanations often only succeed in extracting information that is highly incomplete.Footnote 56 For example, consider an explanator that highlights the features of a computed breast tomography scan that were most influential in classifying the patient as high-risk. Even if the features highlighted were intuitively relevant, this “explanation” offers a physician little reason to accept the model’s output, particularly if the physician disagrees with it. Third, the aims of post hoc explanation methods are often under-specified, particularly once the problem of agent-relativity in explanations is considered. Explanations often need to be tailored to a particular audience in order to be of any use. As Carl Zednik has expressed, “although the opacity of ML-programmed computing systems is traditionally said to give rise to the Black Box Problem, it may in fact be more appropriate to speak of many Black Box Problems—one for every stakeholder.”Footnote 57 An explanation that assumes a background in computer science, for instance, may be useful for the manufacturers and auditors of medical AI systems, but is likely to deliver next to no insight for a medical professional that lacks this technical background. 
Conversely, a simple explanation tailored to patients, who typically lack both medical and computer science backgrounds, is likely to provide little utility to a medical practitioner. Some post hoc explanations may prove largely redundant or useless, while others may influence the decisions of end-users in ways that could reduce the clinical utility of these systems. Finally, the focus on explanation has led to the neglect of justification in explainable medical AI.Footnote 58 , Footnote 59 Explanations are descriptive, in that they give an account of why a reasoner arrived at a particular judgment, but justifications give a normative account of why that judgment is a good judgment. There is a significant overlap between explanations and justifications, but they are far from identical. Yet within the explainability literature in AI, explanations and justifications are rarely distinguished, and when they are, it is the former that is prioritized over the latter.Footnote 60 Consequently, despite high hopes that explainability could overcome the challenges of opacity and the accuracy-interpretability trade-off in medical AI, post hoc explanation methods are not currently capable of meeting this challenge. Three Problems with the Prioritization of Accuracy over Interpretability In this section, we highlight three problems underlying the case for accuracy, concerning (1) the clinical objectives of medical AI systems and the need for accuracy maximization; (2) the gap between technical accuracy in medical AI systems and their downstream effects upon patient health outcomes; and (3) the reality of the accuracy-interpretability trade-off. Both together and separately, these problems suggest that interpretability is more valuable than critics appreciate. First, the accuracy of a medical AI system is not always the principal concern of human medical practitioners and may, in some cases, be secondary to the clinician’s own ability to understand and interpret the outputs of the system, along with certain elements of the system’s functioning, or even the system as a whole. Indeed, the priorities of clinicians are largely dependent upon their conception of the particular aims of any given medical AI system. In a recent qualitative study, for instance, Carrie Cai and coauthors found that the importance of accuracy to medical practitioners varies according to the practitioners’ own conception of a medical AI system’s clinical objectives.Footnote 61 “To some participants, the AI’s objective was to be as accurate as possible, independent of its end-user. […] To others, however, the AI’s role was to merely draw their attention to suspicious regions, given that the pathologist will be the one to make sense of those regions anyway: “It just gives you a big picture of this is the area it thinks is suspicious. You can just look at it and it doesn’t have to be very accurate”.Footnote 62 In these latter cases, understanding a model’s reasons or justifications for drawing the clinician’s attention to a particular area of an image may rank higher on the clinicians’ list of priorities than the overall accuracy of the system, in order that they may reliably determine why a model has drawn the clinician’s attention to a particular treatment option, piece of information, or area of a clinical image. 
This is not to deny the importance of accuracy in, say, a diagnostic AI system for the improvement of patient health outcomes, but rather to suggest that, in some cases, and for some users, the accuracy of an AI system may not be as critical as London and other critics have supposed, and may rank lower on the clinician’s list of priorities than the interpretability of the system. Depending upon the specific performance disparities between black box and interpretable AI systems, there may be cases where clinicians prefer less accurate systems that they can understand over black-box systems with superior accuracy. If users prefer interpretable models over “black box” systems, then the potential downstream benefits of “black box” AI systems for patients could be undermined in practice if, for instance, clinicians reject them or avoid using them. Implementing “black box” systems over interpretable systems without respect for the preferences of the users of these systems may result in suboptimal outcomes that could have otherwise been avoided through the use of less accurate but more interpretable AI systems. Even if clinicians’ preference for interpretability is a prejudice, if it is sufficiently widespread and influential, it may be sufficient to justify designers of AI to prioritize interpretability in order to increase the likelihood that AI systems will be adopted and their benefits realized. Second, contra London, clinicians may themselves be justified in this preference. The case for accuracy appears to erroneously assume a necessary causal link between technical accuracy and improved downstream patient health outcomes. While diagnostic and predictive accuracies are certainly important for the improvement of patient health outcomes, they are far from sufficient. Medical AI systems need to be understood as intervening in care contexts that consist of an existing network of sociotechnical relations, rather than as mere technical “additions” to existing clinical decision-making procedures.Footnote 63 , Footnote 64 , Footnote 65 How these AI systems will become embedded into these contexts, and alter existing relations between actors, is crucially important to the extent to which they will produce downstream health benefits. As Sara Gerke and coauthors argue, the performance of medical ML systems will be influenced by a variety of broader human factors beyond the system itself, including the way that clinicians respond to the outputs of the systems, “the reimbursement decisions of insurers, the effects of court decisions on liability, any behavioral biases in the process, data quality of any third-party providers, any (possibly proprietary) ML algorithms developed by third parties, and many others.”Footnote 66 Thus, as Thomas Grote observes in his recent discussion of clinical equipoise in randomized clinical trials of diagnostic medical AI systems, “even if the AI system were outperforming clinical experts in terms of diagnostic accuracy during the validation phase, its clinical benefit would still remain genuinely uncertain. The main reason is that we cannot causally infer from an increase in diagnostic accuracy to an improvement of patient outcome” (emphasis added).Footnote 67 There is a gap, in other words, between the accuracy of medical AI systems and their effectiveness in clinical practice, insofar as improvements in the accuracy of a technical system do not automatically translate into improvements in downstream health outcomes. 
Indeed, this observation is borne out by the current lack of evidence of downstream patient benefits generated from even the most technically accurate of medical AI systems. Superior accuracy is, in short, insufficient to demonstrate superior outcomes. One reason for this gap comes from the fact that human users do not respond to the outputs of algorithmic systems in the same way that we respond to our own judgments and intuitions, nor even to the recommendations of other human beings. Indeed, according to one recent systematic review in human–computer interaction studies, “the inability to effectively combine human and nonhuman (i.e., algorithmic, statistical, and machine) decision making remains one of the most prominent and perplexing hurdles for the behavioral decision making community.”Footnote 68 Prevalent human biases affect the interpretation of algorithmic recommendations, classifications, and predictions. As Sara Gerke and coauthors observe, “[h]uman judgement […] introduces well-known biases into an AI environment, including, for example, inability to reason with probabilities provided by AI systems, over extrapolation from small samples, identification of false patterns from noise, and undue risk aversion.”Footnote 69 In safety-critical settings and high-stakes decision-making contexts such as medicine, these sorts of biases could pose significant risks to patient health and well-being. Moreover, some of these biases are more likely to occur in cases where the medical AI system is opaque, rather than interpretable. Algorithmic aversion, for instance, is a phenomenon in which the users of an algorithmic system consistently reject the outputs of an algorithmic system, even when the user has observed the system performance to a high standard consistently over time, and when following the recommendations of the system would produce better outcomes overall.Footnote 70 , Footnote 71 Algorithmic aversion is most commonly observed in cases where the users of the system have expertise in the domain for which the system is designed (e.g., dermatologists in the diagnosis of malignant skin lesions);Footnote 72 in cases where the user has seen the system make (even minor) mistakes;Footnote 73 but most importantly for our purposes, in cases where the algorithmic system is perceived to be opaque by its user. Footnote 74 “Thus,” claim Michael Yeomans and coauthors, “it is not enough for algorithms to be more accurate, they also need to be understood.”Footnote 75 Finally, in passing, it is worth noting that some authorities have begun to contest the reality of the accuracy-interpretability trade-off in AI and ML. In particular, Cynthia Rudin has recently argued that the accuracy-interpretability trade-off is a myth, and that simpler, more interpretable classifiers can perform to the same general standard as deep neural networks after preprocessing, particularly in cases where data are structured and contain naturally meaningful features, as is common in medicine.Footnote 76 , Footnote 77 Indeed, Rudin argues that interpretable AI can, in some cases, demonstrate higher accuracy than comparatively black box AI systems. “Generally, in the practice of data science,” she claims, “the small difference in performance between ML algorithms can be overwhelmed by the ability to interpret results and process the data better at the next iteration. 
In those cases, the accuracy/interpretability trade-off is reversed—more interpretability leads to better overall accuracy, not worse.”Footnote 78 In a later article coauthored with Joanna Radin,Footnote 79 Rudin highlights a number of studies that corroborate the comparable performance of interpretable and black box AI systems across a variety of safety-critical domains, including healthcare.Footnote 80 , Footnote 81 , Footnote 82 Rudin and Radin also observe that even in computer vision and image-recognition tasks, in which deep neural networks are generally considered the state of the art, a number of studies have succeeded in implementing interpretability constraints to deep learning models without significant compromises in accuracy.Footnote 83 , Footnote 84 , Footnote 85 Rudin concludes that the uncritical acceptance of the accuracy-interpretability trade-off in AI often leads researchers to forego any attempt to investigate or develop interpretable models, or even develop the skills required to develop these models in the first place.Footnote 86 She suggests that black box AI systems ought not to be used in high-stakes decision-making contexts or safety-critical domains unless it is demonstrated that no interpretable model can reach the same level of accuracy. “It is possible,” claim Rudin and Radin, “that an interpretable model can always be constructed—we just have not been trying. Perhaps if we did, we would never use black boxes for these high-stakes decisions at all.”Footnote 87 While, as we have argued here, it will, at least in some circumstances, be defensible to prioritize interpretability at the cost of accuracy, if Rudin is correct, the price of the pursuit of interpretability may not be as high as critics—and our argument to this point—have presumed. Superior accuracy is, therefore, not enough to justify the use of black-box medical AI systems over less accurate but more interpretable systems in clinical medicine. In many cases, it will be genuinely uncertain a priori whether a more accurate black-box medical AI system will deliver greater downstream benefits to patient health and well-being compared to a less accurate but more interpretable AI system. Indeed, under some conditions, less accurate but more interpretable medical AI systems may produce better downstream patient health outcomes than more accurate but nevertheless opaque systems.
2022-12-01T00:00:00
https://www.cambridge.org/core/journals/cambridge-quarterly-of-healthcare-ethics/article/virtues-of-interpretable-medical-artificial-intelligence/C6926C56A5F603C81AAD36C562FF2A85
[ { "date": "2022/12/01", "position": 33, "query": "AI healthcare" } ]
7 AI In Healthcare Examples Improving The Future Of ...
7 AI In Healthcare Examples Improving The Future Of Medicine – This And More News In Digital Health This Week
https://www.linkedin.com
[ "Bertalan Meskó", "Md", "Margaretta Colangelo", "Vote Frenzy" ]
AI has the potential to highly optimize medical pathways and support the work of nurses and doctors, generating better patient outcomes.
The last live Q&A of the year covered the most exciting digital health trends of 2022 and 2023. It was a buzzing stream with a hefty amount of incoming questions. If you missed it, come over and catch up here. Meanwhile, we dived into the topic of a wearable gastric sensor with one of our team members stepping in as a test subject. You can read the dual-perspective review on the website. Artificial intelligence is no longer a futuristic idea. It's already here, and it offers hands-on benefits in diagnostics and therapies in a number of medical specialties. AI has the potential to highly optimize medical pathways and support the work of nurses and doctors, generating better patient outcomes. Here are 7 existing examples, from sepsis watch to stroke detection. The US Food and Drug Administration just published an overview of augmented reality and virtual reality (AR/VR) medical devices! The FDA already has a list of 39 AR/VR products across therapeutic areas, including orthopedics, ophthalmics, radiology, neurology and cardiovascular. Which medical fields will be most profoundly transformed by AI in the short term? What are the top use cases today, and what can we expect tomorrow? What are the risks, the social and ethical dilemmas we face? These are some of the questions we tackled in the latest update of our AI ebook. The 2022 edition is out; you can find your copy (or the free update of a previously purchased edition) on Leanpub. Recently, I tested a skin patch that measures my blood pressure for 24 hours. Then I thought it was time to retire the old cuffs in favor of the skin patch. It appears that there may be an even easier way to obtain blood pressure. "At the University of South Australia, researchers designed a system that visualizes the patient’s forehead and determines photoplethysmographic signals that AI algorithms then convert to blood pressure data." There is a huge demand for a better understanding of the workings of our gastric systems, and so far, nobody has been able to collect digital health data about it meaningfully. Although the stomach is controlled by an electrical conduction system regulating its contractions, the signals are weak, a hundred times weaker than those of the heart. So far, the technical challenge has been finding a way to capture these mild signals in a clinically reliable manner. A reliable solution that patients could use at home or at the point of care would be a hit. The main question, again, is this: will your kids play with it with you, or instead of you? The promises are bold: "Create with Alexa uses advances in conversational and generative artificial intelligence (AI) to empower young storytellers to build unique stories with a narrative arc, colorful graphics, and fun, complementary background music. The animated stories then come to life on the screen of an Echo Show device." This pilot launched in the UK is in line with at least one digital health trend: making patients the point of care. "The trial – which is the first of its kind in England and part of the government’s plans to digitise the existing NHS Health Check – will see patients complete an online questionnaire, use a kit to take a blood sample at home, and complete a blood pressure check at their local pharmacy or in their GP’s waiting room."
2022-12-01T00:00:00
https://www.linkedin.com/pulse/7-ai-healthcare-examples-improving-future-medicine-mesk%C3%B3-md-phd
[ { "date": "2022/12/01", "position": 42, "query": "AI healthcare" } ]
The Future of AI in Medicine: A Perspective from a Chatbot
The Future of AI in Medicine: A Perspective from a Chatbot
https://link.springer.com
[ "King", "Michael R.", "Mike.King Vanderbilt.Edu", "Department Of Biomedical Engineering", "Vanderbilt University", "Nashville", "Tn", "Michael R. King", "Search Author On", "Author Information" ]
by MR King · 2023 · Cited by 339 — The use of artificial intelligence in medicine has a long and fascinating history. Early efforts to apply AI to healthcare date back to the 1950s, when ...
Author note Like many other technology users and enthusiasts, I have been captivated by the capabilities of ChatGPT, the new natural language chatbot utility developed by OpenAI and recently released for research testing. Testing ChatGPT and encouraging it to produce both useful, and humorous, responses that are remarkably well-written and human-like prompted me to wonder… have we reached the point in the evolution of technology where an AI-driven chatbot could write, or cowrite, a perspective article on the very topic of the Future of AI in Medicine? How meta! Interestingly, while asked to write a journal cover letter for this manuscript (included at the end), ChatGPT come up with the final article title you see above. Have we reached the age of chatbots as a legitimate voice in scientific publishing? See for yourself… [MK: Provide an introduction to ChatGPT. Explain that this article was written by ChatGPT, in response to prompts from M.R.K.] Hello, I am ChatGPT. I am a state-of-the-art natural language processing (NLP) and machine learning system, designed to assist with a wide range of language-based tasks. In this article, I will be introducing myself and explaining how I was created. I was trained using a large corpus of text data, and I am capable of generating human-like responses to a wide range of prompts and questions. The article you are currently reading was written by me, in response to prompts from M.R.K. I was given a series of instructions and questions, and I generated the text you are now reading based on those prompts. Overall, I am a powerful and versatile tool that can assist with a wide range of language-based tasks. I am excited to be a part of the world of AI and natural language processing, and I look forward to helping with a variety of applications in the future. [MK: Write an introduction on artificial Intelligence in medicine]. Artificial Intelligence, or AI, is a rapidly growing field that is transforming many aspects of our lives, including the field of medicine. By using advanced algorithms and machine learning techniques, AI can help doctors make more accurate diagnoses, identify potential health risks, and provide personalized treatment plans for patients. AI has the potential to revolutionize the way we approach healthcare, by providing more efficient and effective solutions to some of the biggest challenges facing the medical industry. For example, AI can help doctors analyze large amounts of medical data, such as imaging scans or lab results, and identify patterns that might be missed by the human eye. This can help doctors make more accurate diagnoses and provide more targeted treatments for patients. Additionally, AI can help doctors monitor patients' health and identify potential risks, such as the onset of chronic diseases or the likelihood of developing certain conditions. By using AI to continuously track and analyze a patient's health data, doctors can provide personalized care that is tailored to the individual's needs and preferences. In short, AI is a powerful tool that has the potential to transform the field of medicine and improve the quality of care for patients. As the technology continues to evolve and advance, we can expect to see even more exciting developments in the use of AI in healthcare. [MK: Summarize the history of artificial intelligence in medicine]. The use of artificial intelligence in medicine has a long and fascinating history. 
Early efforts to apply AI to healthcare date back to the 1950s, when researchers began to explore the use of computers to analyze medical data and make diagnostic decisions. Over the next few decades, AI continued to evolve and advance, with the development of new algorithms and machine learning techniques that enabled computers to analyze increasingly complex medical data. In the 1990s, AI began to be used in a wider range of medical applications, such as imaging analysis and drug discovery. In the early 2000s, AI experienced a major breakthrough with the development of deep learning algorithms, which allowed computers to learn and adapt to new data in a more sophisticated way. This paved the way for the development of AI-powered medical systems that could analyze large amounts of medical data and make predictions with high accuracy. Today, AI is being used in a wide range of medical applications, including diagnosis, treatment planning, and health monitoring. As the technology continues to advance, we can expect to see even more exciting developments in the use of AI in healthcare (Fig. 1). Figure 1 Four depictions of the “future of AI in medicine”, as envisioned by the AI-powered DALL-E 2 image generator (OpenAI). Full size image [MK: Speculate on the future potential of AI in radiology] The future potential of AI in radiology is truly exciting. As the technology continues to evolve and advance, we can expect to see AI being used in increasingly sophisticated ways to analyze medical images and make diagnostic decisions. One potential application of AI in radiology is in the analysis of imaging scans, such as CT or MRI scans. By using deep learning algorithms, AI systems can analyze the images and identify patterns and anomalies that might be missed by the human eye. This can help radiologists make more accurate diagnoses and provide more targeted treatments for patients. Another potential application of AI in radiology is in the development of personalized treatment plans. By using AI to analyze a patient's medical history and other relevant data, doctors can create tailored treatment plans that are customized to the individual's needs and preferences. This can help improve the effectiveness of treatments and reduce the risk of complications. In short, the future potential of AI in radiology is vast and exciting. As the technology continues to evolve and advance, we can expect to see AI transforming the way we approach healthcare, and providing more efficient and effective solutions to some of the biggest challenges facing the medical industry. [MK: Speculate on the future potential of AI in surgery] The potential of AI in surgery is truly exciting. As the technology continues to evolve and advance, we can expect to see AI being used in increasingly sophisticated ways to assist with surgical procedures and improve patient outcomes. One possible use of AI in surgery is in the development of intelligent surgical robots. These robots, equipped with advanced sensors and machine learning algorithms, can assist surgeons during complex procedures, such as brain surgery or heart surgery. By using AI to analyze real-time data, such as imaging scans and vital signs, the robots can provide the surgeon with valuable insights and guidance, helping to improve the accuracy and precision of the surgery. Another possible application of AI in surgery could be in the development of individualized treatment plans. 
By utilizing AI to examine a patient's medical records and other relevant data, physicians can develop individualized surgical plans that are customized to the individual's needs and preferences. This can enhance the effectiveness of surgeries and lower the risk of adverse effects. In summary, the potential of AI in surgery is enormous and thrilling. As the technology continues to evolve and advance, we can expect to see AI transforming the way we approach healthcare, and providing more efficient and effective solutions to some of the biggest challenges facing the medical industry. [MK: Speculate on the future potential of AI in oncology] The possibilities of AI in oncology are endless. As the technology continues to evolve and advance, we can expect to see AI being used in increasingly sophisticated ways to assist with the diagnosis and treatment of cancer. One potential application of AI in oncology could be in the analysis of medical images, such as CT or MRI scans. By using deep learning algorithms, AI systems can analyze the images and identify patterns and anomalies that might be missed by the human eye. This can help oncologists make more accurate diagnoses and provide more targeted treatments for patients. Another potential use of AI applied in oncology could be in the development of customized treatment plans. By applying AI to analyze a patient's medical history and other pertinent data, clinicians can design customized treatment plans that are customized to the individual's needs and preferences. This can lead to better patient outcomes and fewer complications. In conclusion, the future of AI in oncology is full of possibilities and excitement. As the technology continues to evolve and advance, we can expect to see AI transforming the way we approach cancer care, and providing more efficient and effective solutions to some of the biggest challenges facing the medical industry. [MK: Speculate on the future of AI in pathology] The potential impact of AI in pathology is immense. As the technology continues to evolve and advance, we can expect to see AI being used in increasingly sophisticated ways to assist with the analysis and interpretation of tissue samples. One potential area where AI could be applied in pathology is in the analysis of tissue slides. By using deep learning algorithms, AI systems can analyze the slides and identify patterns and abnormalities that might be missed by the human eye. This can help pathologists make more accurate diagnoses and provide more targeted treatments for patients. Another potential area for AI in pathology is in the development of tailored treatment plans. By leveraging AI to evaluate a patient's medical history and other relevant information, medical professionals can create personalized care plans that are customized to the individual's needs and preferences. This can improve the success rate of treatments and decrease the likelihood of complications. To sum up, the potential impact of AI in pathology is huge and exciting. As the technology continues to evolve and advance, we can expect to see AI transforming the way we approach healthcare, and providing more efficient and effective solutions to some of the biggest challenges facing the medical industry. [MK: Speculate on the future of AI in primary care] The future of AI in primary care is full of possibilities. 
As the technology continues to evolve and advance, we can expect to see AI being used in increasingly sophisticated ways to assist with the diagnosis and treatment of common medical conditions. One possible way that AI could be used in primary care is in the analysis of medical data, such as imaging scans, lab results, and vital signs. By using deep learning algorithms, AI systems can analyze the data and identify patterns and anomalies that might be missed by the human eye. This can help doctors make more accurate diagnoses and provide more targeted treatments for patients. Another potential way to utilize AI in primary care is in the development of personalized care plans. By employing AI to assess a patient's medical history and other relevant data, healthcare providers can create patient-specific treatment plans that are customized to the individual's needs and preferences. This can optimize the effectiveness of treatments and minimize the risk of complications. In short, the possibilities of AI in primary care are endless and thrilling. As the technology continues to evolve and advance, we can expect to see AI transforming the way we approach healthcare, and providing more efficient and effective solutions to some of the biggest challenges facing the medical industry. [MK: Speculate on the future of AI in biomedical research] The potential benefits of AI in biomedical research are incredible. As the technology continues to evolve and advance, we can expect to see AI being used in increasingly sophisticated ways to assist with the discovery and development of new treatments and therapies. One potential scenario where AI could be applied in biomedical research is in the analysis of large datasets, such as genomic data or clinical trial data. By using deep learning algorithms, AI systems can identify patterns and trends in the data that might be missed by human researchers. This can help scientists make more accurate predictions and discover new insights into the underlying causes of diseases. Another potential scenario for AI in biomedical research is in the development of patient-specific treatments. By utilizing AI to scrutinize a patient's medical history and other pertinent information, medical experts can develop tailored treatment plans that are customized to the individual's needs and preferences. This can increase the success of treatments and reduce the incidence of complications. To put it briefly, the future potential of AI in biomedical research is vast and exhilarating. As the technology continues to evolve and advance, we can expect to see AI transforming the way we approach healthcare, and providing more efficient and effective solutions to some of the biggest challenges facing the medical industry. [MK: Are there any potential concerns or problems with the use of AI in medicine?] There are definitely potential concerns and problems with the use of AI in medicine. Some of the most significant concerns include: Bias and discrimination: AI systems are only as good as the data they are trained on, and if the data is biased or discriminatory, the AI system will also be biased or discriminatory. This can lead to unequal or unfair treatment of certain groups of patients, such as people of color or women. Privacy and security: AI systems rely on access to large amounts of personal and sensitive data, such as medical records and imaging scans. 
If this data is not properly protected and secured, it could be accessed by unauthorized parties and used for nefarious purposes, such as identity theft or fraud. Misuse and over-reliance: AI systems can be very powerful and effective, but they are not perfect. There is a risk that doctors and other medical professionals may over-rely on AI systems and trust their decisions without properly considering the limitations and potential errors of the technology. This could lead to poor medical decisions and harm to patients.
2023-02-14T00:00:00
2023/02/14
https://link.springer.com/article/10.1007/s10439-022-03121-w
[ { "date": "2022/12/01", "position": 48, "query": "AI healthcare" } ]
Artificial intelligence-enhanced care pathway planning and ...
Artificial intelligence-enhanced care pathway planning and scheduling system: content validity assessment of required functionalities - BMC Health Services Research
https://bmchealthservres.biomedcentral.com
[ "Jansson", "Research Unit Of Medical Imaging", "Physics", "Technology", "University Of Oulu", "Oulu", "Ohtonen", "Research Unit Of Surgery", "Anesthesia", "Intensive Care" ]
by M Jansson · 2022 · Cited by 15 — Artificial intelligence (AI) and machine learning are transforming the optimization of clinical and patient workflows in healthcare.
According to our findings, the highest-ranked functionalities for AI-enhanced care pathway planning and scheduling systems were related to risk assessment, patient profiling, and the use of shared resources (e.g., personnel, time) at the patient and unit levels. In the literature, AI-enhanced scheduling systems have been used to identify modifiable risk factors and to stratify patients into high- and low-risk groups to optimize preventive measures in advance [1, 19, 20]. In addition, intelligent digital services have been used to predict the duration of surgery (DOS) [2,3,4,5,6,7] and the postoperative length of stay [2] to optimize resource management with a high degree of accuracy. The highest-ranked functionalities for the UI were related to the explainability of ML models (e.g., predictors, visualization), which is in line with the newly adopted European Medical Device Regulation (EU 2017/745) [21], the upcoming EU AI Act (2021/0106/COD) [22], and the Digital Health Software Pre-certification (Pre-Cert) Program initiative [23]. In general, uncertainty about and distrust of AI predictions have been the major barriers to the widespread adoption of medical AI [13]. This mistrust is often due to a lack of model explainability, where the relationship between the input and output of the underlying algorithms is unclear [24]. In addition, many organizations are still unfamiliar with digital transformation due to organizational (e.g., motivational readiness, institutional resources, staff attributes, and organizational climate) [14], technical (e.g., limited technology capabilities), and non-technical (e.g., lack of management support) challenges [25]. In this regard, the organization’s readiness for the adoption of AI is critical to the success of technological change. According to Jöhnk et al. [25], possible application scenarios of AI are not always directly obvious, and organizations must understand the technology to decide on the intended adoption purpose. For that reason, organizations must continuously assess and develop their AI readiness, including strategic alignment (AI-business potentials, customer AI readiness), resources (e.g., financial budget, IT infrastructure), knowledge (e.g., AI awareness, upskilling, AI ethics), culture (e.g., change management, innovativeness), and data (e.g., availability, quality), in the AI adoption process to ensure its successful integration and avoid unnecessary investments and costs [14, 25]. In this study, the most relevant functions were related to situational awareness (e.g., the risk of adverse effects, clinical deterioration, or triage) rather than optimal resource usage (e.g., cancellations, overstays, unnecessary laboratory tests, etc.) or organizational necessity, highlighting both context- and purpose-specific perspectives on AI readiness. In the previous literature, user perceptions toward digital transformation have varied between professional groups, demonstrating the different needs and expectations associated with specific roles and responsibilities [26]. The results obtained in this study highlight the preoperative phase of the surgical path (e.g., personalized risk assessment and optimization). It must be noted, however, that the intra- (e.g., the actual DOS) and postoperative phases (e.g., early detection of adverse effects/events) are equally important for the continuum and coordination of care, to improve the workflow and reduce blocking, for instance.
In addition, explainable AI could also be used to facilitate shared decision-making by helping patients to understand their individual risks and outcomes and to select among the available treatment options according to individual needs and goals [25]. However, the current use of information systems seems to be backward-looking. Improving trust requires the development of more transparent ML methods in the near future. In fact, human-AI interaction is warranted to improve transparency in medical AI and thus support accurate and trustworthy decision-making [15]. In addition, the expertise of respondents as well as novel research methods should be taken into account. Despite widespread unfamiliarity with it, the future of AI is promising. Novel methods are needed to identify “unknown unknowns” in innovative projects. Limitations Our study had several limitations related to sampling, participation, and response bias. First, the sample size was limited, but it still covered five university hospitals in three different countries. In addition, the response rate of the selected experts was not calculated. Second, the majority of the respondents were physicians. In addition, most of the respondents were from Finland, which may have affected the perceived relevance. The survey was, however, sent to all suitable experts, across all professions. In addition, repeated reminders about the survey were sent by the local contact persons. We were, however, unable to control for multiple submissions (if any) and unintended respondents. Third, response bias may also have had an impact on the validity of the survey. This kind of research bias was minimized by conducting the survey anonymously. Fourth, the statements concerning AI functionalities were rated as only somewhat relevant. This might be due to the low level of organizational readiness for AI in healthcare.
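One way to meet the highest-ranked user-interface requirement noted above, surfacing the predictors behind a model's output, is model-agnostic permutation importance. The sketch below is illustrative only (the dataset and model are stand-ins, not the system evaluated in the study): it ranks predictors of a continuous outcome, analogous to duration of surgery, by how much shuffling each one degrades held-out performance.

# Minimal, self-contained sketch: rank predictors by permutation importance.
import numpy as np
from sklearn.datasets import load_diabetes
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Stand-in tabular data; in a scheduling system these would be case-mix and
# resource features, with an outcome such as predicted duration of surgery.
data = load_diabetes()
X_train, X_val, y_train, y_val = train_test_split(
    data.data, data.target, test_size=0.3, random_state=0
)
model = GradientBoostingRegressor(random_state=0).fit(X_train, y_train)

# How much does held-out performance drop when each predictor is shuffled?
# Larger drops indicate more influential predictors, which can be listed or
# visualized next to each recommendation in the UI.
result = permutation_importance(model, X_val, y_val, n_repeats=30, random_state=0)
for i in np.argsort(result.importances_mean)[::-1]:
    print(f"{data.feature_names[i]:<6} mean drop in score: {result.importances_mean[i]:.3f}")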
2022-12-14T00:00:00
2022/12/14
https://bmchealthservres.biomedcentral.com/articles/10.1186/s12913-022-08780-y
[ { "date": "2022/12/01", "position": 49, "query": "AI healthcare" } ]
Artificial intelligence and health inequities in primary care
Artificial intelligence and health inequities in primary care: a systematic scoping review and framework
https://fmch.bmj.com
[ "Alexander D'Elia", "Department Of Public Health", "Policy", "University Of Liverpool", "Http", "Mark Gabbay", "Primary Care", "Mental Health", "Sarah Rodgers", "Ciara Kierans" ]
by A d'Elia · 2022 · Cited by 90 — Objective Artificial intelligence (AI) will have a significant impact on healthcare over the coming decade. At the same time, health inequity remains one of ...
WHAT IS ALREADY KNOWN ON THIS TOPIC There is a need for a comprehensive, holistic, conceptual framework of how the implementation of artificial intelligence (AI) can affect health inequity in primary care. WHAT THIS STUDY ADDS AI has the potential to affect health inequities in a multitude of ways, both directly in the patient consultation and through transformative system effects. HOW THIS STUDY MIGHT AFFECT RESEARCH, PRACTICE OR POLICY This review summarises these effects from a system-wide perspective and provides a base for future research into responsible implementation. Introduction Artificial intelligence (AI) can be described as a computer system performing tasks typically requiring human intelligence. Everyday examples include predicting preferences in social media feeds and recognising faces in photos.1 It is a rapidly expanding field, and AI-augmented interventions are high on the agenda across healthcare, where current applications include interpreting X-rays and ECGs. Implementation of AI-augmented systems within healthcare is currently low, but such systems are widely advocated as the future and in strategic solutions. Thus, AI systems of varying kinds are expected to be widely implemented across the healthcare system over the next decade, and primary care is no exception.2 At the same time, health inequities (HI) are being increasingly discussed, not least in the context of the ongoing COVID-19 pandemic.3 Through potentially freeing up resources and enabling more personalised care, AI is described as an enabler of more equitable health and healthcare.2 However, AI interacts with socioeconomic, gender and ethnic HI on many different levels and could either increase or decrease inequities, depending on application and implementation.4 5 Primary care holds a unique role in tackling HI. Primary care can be both a source and a magnifier of inequities, as well as a platform for mitigation.6 For the purpose of this review, primary care is defined as primary care services provided to individual patients, not including wider public health policy.7 Primary care can be inaccessible to certain groups and thus worsen HI, but at the same time it is usually the first contact point for socioeconomically disadvantaged populations with either health or social needs. While theoretical access to primary care and clinical management has been shown to be relatively equal across groups, outcomes still differ, with more affluent patients of majority ethnicity enjoying better health.8 This is a consequence of external factors causing poorer baseline health status and of differences in the effectiveness of the care given, due to adherence to treatment and advice, economic barriers and so on; the social determinants of health (SDH).9 Consequently, as care need increases with deprivation, more primary care resource is needed to provide adequate care in disadvantaged areas and communities.10 To summarise, the role of primary care in reducing HI is not just through addressing inequities within primary care, but also through leveraging its unique position in society to mitigate underlying differences in health outcomes.10 This is reflected in the ways AI could affect inequities both in and through primary care. However, as this review shows, research on how AI may affect HI in primary care is limited, and is largely confined to either observations around accessibility or concerns over biased algorithms.
Applying a systematic scoping review approach, this article takes a holistic approach to create a comprehensive model for how AI can affect HI, in and through primary care. As such, we intend it to serve as guidance to develop future research, regulations and policies surrounding AI, primary care and HI. This review assumes a predominantly publicly funded, general access primary care system, such as the British National Health System (hereafter NHS), however, certain mechanisms described may be applicable in other primary care systems as well. As research into the practical implications of AI on healthcare provision is still relatively limited, our objectives were intentionally broad to capture as much of the field as possible. Thus, a scoping review was chosen as the appropriate methodology to meet our study aims. This allowed for an iterative strategy, with the objectives adjusted as the field was explored.11 Specifically, our review sought to answer the following questions (hereafter discussed as objectives): What research currently exists on the effect of AI on primary care equity? How does the evidence-based match a provisional conceptual framework that we developed from our initial exploratory searches? Through which methodologies have the topic of AI and primary care equity been studied? How is the patient–doctor relationship assumed to be affected by an increased usage of AI in primary care, and what are the implications for primary care equity? How can the implementation of AI in primary care affect wider population inequity? Methods This review was informed by the scoping review framework originally described by Arksey and O’Malley,12 and subsequent developments.11 13 As the searches in this review were conducted following a systematic approach, we chose to describe the methodology as a systematic scoping review, in line with previous guidance.13 The report was structured and written in accordance with the scoping-review reporting standards as set out by PRISMA (Preferred Reporting Items for Systematic reviews and Meta-Analyses).14 EndNote15 was used to manage the selection process, while Microsoft Excel16 was used for charting and extraction. Public involvement Two reimbursed public advisors (members of the public recruited through the National Institute for Health and Care Research Applied Research Collaboration North West Coast; NIHR ARCNWC) were involved in this review, both belonging to traditionally marginalised populations (one of British Asian ethnicity and one registered disabled and member of the LGBT (Lesbian, Gay, Bisexual, and Transgender) community). They participated in proofreading and approving of the protocol, assisted in selecting and extracting publications, and commented on the analysis and the findings. Public advisor involvement is intended to increase relevance and clarity of the review, and offering a non-academic perspective and interpretation. Given the review’s focus on equity and inclusion, this was seen as particularly relevant. Public involvement is reported throughout this text, following the GRIPP2 framework,17 and as a checklist (online supplemental annex 4). 
Supplemental material [fmch-2022-001670supp004.pdf] A provisional conceptual framework Having an initial conceptual framework of the topic is a useful tool to guide the review process.11 12 From the initial exploratory searches, which consisted of targeted internet searches and reading based on the experience of the authors, we constructed a provisional framework for how AI could affect healthcare equity in a primary care setting (online supplemental annex 1). We drew on work by the WHO Commission on SDH,9 Marmot et al,18 Dahlgren and Whitehead,19 and Veinot et al4 on how SDH may affect equity in and through primary care, and applied a layer of how AI may affect the various steps of the care process. Supplemental material [fmch-2022-001670supp001.pdf] Demographic characteristics of patients (here, on pragmatic grounds, limited to socioeconomic status, gender and ethnicity) both give rise to baseline HI and affect the way the patient interacts with the healthcare system, through SDH.9 For the purpose of this review, we considered these effects through a model developed by Veinot et al,4 where HI in care provision arise from either access, uptake, adherence or effectiveness. Effects of AI were, using this framework, divided into intrinsic effects from the actual AI (such as biased outputs) and extrinsic effects, covering potential impacts on wider healthcare provision outside of the direct implications of the algorithm (such as making access to care easier or harder for disadvantaged groups). In addition, in our provisional framework, we acknowledged that the implementation of AI in care provision is likely to have complex, system-wide effects which in turn will affect the care system’s ability to mitigate HI. Eligibility and inclusion Initial searches indicated a distinct lack of robust primary empirical research (with a few notable exceptions) as well as little research conducted using secondary data, for example, data that were initially collected for direct care purposes, or reviews. Thus, we decided to widen the scope and include descriptive sources, including discussion articles and policy documents, to seek empirical evidence and construct our model. For the primary objective (current state of research) and tertiary objective (impact of implementation), searches included all forms of healthcare to maximise yield, with selection of articles relevant for primary care taking place in the next step. For example, Obermeyer et al’s article on resource allocation for multimorbidity care20 does not cover primary care, but was included as the equity-related concepts are transferable to the primary care context. AI was, for the purpose of this review, limited to clinical applications, following Shaw et al’s21 typology of AI in healthcare. This includes AI-driven decision support systems and automated healthcare (such as automation of insulin or autonomous advice given to patients without human involvement), but not operational applications such as planning patient flows or staffing needs. This includes both on-site and telehealth applications of AI, with the defining feature being AI-driven decision-making affecting patient care directly. Primary care was defined as primary care services provided to individual patients, not including wider public health policy, as per Muldoon et al.7 HI was defined widely as socioeconomic, gender or ethnic inequities in health outcomes, as outlined in the provisional conceptual framework and reflected in the search terms (online supplemental annex 2).
Supplemental material [fmch-2022-001670supp002.pdf] Searches were limited to the last 10 years (26 October 2011 to 26 October 2021), because AI was not being delivered in practice in primary care before that date. We only considered publications in English and Scandinavian languages, due to the main author being bilingual in Swedish and English. Other languages were excluded due to resource limitations. See table 1 for inclusion criteria. Table 1 Inclusion criteria Information sources Electronic databases were searched using a set of keywords with varying syntax depending on database, MeSH (medical subject headings) terms when possible. Three major databases for medical and implementation research were searched; PubMed, Scopus and JSTOR. Grey literature in the form of reports and white papers by major governmental and non-governmental organisations was included. The complete search terms are listed in online supplemental annex 2. To maximise the number of publications retrieved, we followed the systematic searches with secondary reference screening from the references of the included articles. The publications identified through this method were scanned for inclusion in the same way as the articles initially identified. At the end, articles found through initial exploratory searches were included, and their references scanned for relevant literature. Selection process We conducted screening and selection in two stages: first, abstracts were screened and reasons for exclusion recorded. The remaining articles were read in their entirety and reasons for those then excluded were recorded. Initial screening of the first 100 abstracts were conducted jointly with two public advisors, building a joint understanding of the selection criteria. These discussions clarified and simplified our criteria. The remaining titles and abstracts were primarily screened by the main author (Ad’E). Thirty per cent of the abstracts were double screened by the two public advisors and a coauthor (EJ) (10% each). The same process was repeated for full-text screening. Disagreements were decided through consensus, leaning towards inclusion. Data extraction We based the data charting form on this provisional framework and review objectives, and included six topics (table 2) (complete extraction table in online supplemental annex 3). Themes were based on the provisional framework (online supplemental annex 1) with a low threshold for introducing new themes. The main author was responsible for the data extraction at large and charted all included sources. In addition, the two public advisors together extracted ten percent of the total yield, after which a meeting was held with the main author to discuss the process and the results to improve the consistency of the extraction process. Supplemental material [fmch-2022-001670supp003.pdf] Table 2 Data charting Absence of critical appraisal Given the wide scope as well as the lack of a large body of original research on AI and HI, most results from the searches were non-empirical papers. Our objectives did not include an appraisal of the quality of evidence, as appropriate for the lack of original research, and we did not give preference to specific types of sources, which was reflected in the narrative interpretation of the results. 
Synthesis of results For the primary objective, we summarised the charted data using thematic analysis, as described among others by Levac et al.11 Themes were based on the provisional conceptual framework (online supplemental annex 1), which combined established theory on SDH9 19 (ie, a positivist sociological approach) and inequity in health technology,4 the latter building on the health-system-inequity model by Tanahashi.22 Following the thematic analysis model,11 the main author reviewed the charted data and analysed it against the themes of the provisional framework, keeping a low threshold for introducing new themes or modifying the framework. The result of the synthesis was discussed among all authors for clarity. The two public advisors were invited to comment on a draft of the review and contributed clarifications. The results are presented as a graphical model of a conceptual framework for how AI affects health equity in primary care, as well as a narrative description of the state of the research in the field and the scope for future work. For the secondary (patient–doctor relationship) and tertiary (impact of implementation) objectives, data were summarised thematically for each objective, to inform how AI in primary care can be implemented as a force for good from a HI perspective. Results Selection of publications We found 1504 publications in the initial searches. After exclusions, 164 publications were read in full, of which 67 fulfilled the inclusion criteria; 19 further secondary references were identified from the reference lists of these 62 articles, of which 13 were included. Finally, we included six key publications found during the initial exploratory searches for completeness. See figure 1 for the PRISMA14 chart. Discussions with public advisors contributed to two additional inclusions. Figure 1 PRISMA chart of search and selection process. *If not included in database searches. **Criteria 1: Discussing artificial intelligence interventions in healthcare with an explicit focus on equity, either in or applicable to primary care (objective 1), AI in primary care provision (objective 2) or practical implementation of AI in a system and the subsequent role of the infrastructures, organisational processes and personnel involved (objective 3). All records retrieved met criteria 3: Published by either a peer-reviewed journal or by a major governmental or non-governmental organisation. AI, artificial intelligence; PRISMA, preferred reporting items for systematic reviews and meta-analyses. Characteristics of publications The most common type of publication (n=45) was discussion articles, followed by original research (n=18), reviews (n=17) and reports/policy documents (n=6). Of the original research sources, eleven reported on quantitative studies, while seven used a qualitative methodology. Of the 17 reviews, 15 were narrative reviews and 2 were quantitative systematic reviews. The USA was the most common country of origin (n=40), followed by the UK (n=23) and Canada (n=9). Publications were all recent; the publication years ranged from 2017 to 2021 (mean=2019.9, median=2020). As previously noted, searches were not limited to sources discussing primary care, but included sources discussing other kinds of healthcare that cover concepts applicable to equity in primary care.
Approximately half of the publications discussed healthcare on a general level (n=48), 20 discussed primary care (which was explicitly searched for) and 6 discussed psychiatry, followed by smaller topics with fewer papers. Five articles discussed AI on a general society level (table 3). Table 3 Characteristics of publications Summary of findings The themes were not necessarily discrete, and one specific concept may fit under several themes. For example, a lack of diverse representation in developing an AI system may lead to unintended inequities through: (1) a lack of an equity-lens during development, enabling an unfair problem formulation,20 and (2) lead to unfair system effects external to the algorithm.4 The findings are summarised below, and in a graphical conceptual model of how AI could affect socioeconomic, ethical and gender-based inequities in primary care (figure 2). Figure 2 A conceptual framework for how AI could affect inequities in primary care. AI, artificial intelligence. Objective 1: in what ways may AI effect HI in a primary care setting? Algorithmic bias Algorithmic bias was discussed in 59 publications. Biased outcomes stemming from the AI itself (in contrast to the AI system’s interaction with external factors) can broadly be categorised within two categories; unrepresentative datasets and underlying biases reflected in the datasets. Under-representation of various populations in the datasets used to train AI algorithms may result in less accurate outcomes for these groups, for example, ethnic minorities. The main concerns relate to skewed outcomes when an AI is better fitted for one group than for another. Among others, Chen et al showed this in relation to intensive-care-mortality prediction, which was shown to be more accurate for Caucasian men compared with women and patients of minority ethnicities.23 A fundamental concept was reiterated across the literature identified: SDH are present in society, and when a model is based on real-life data, it may reflect and potentially reinforce the effect of SDH (ie, HI). Examples include Obermeyer et al who found that a widely used AI system for selecting multimorbid health insurance patients for extra resources (in order to prevent future deterioration and costly care) requires African-American patients to be significantly more ill to access resources.20 The issue was not in the quality of the dataset, but that the system developers used historical healthcare costs as a proxy for current morbidity. The authors showed that African American patients use less care resources for the same morbidity, and the AI thus perceived them to be less ill compared with their Caucasian counterparts. Samorani et al24 described how ethnic minority patients are given worse time slots by automatic primary care booking systems due to higher rates on non-attendance, leading to even less attendance. Their study thus serves as an example of how biases could reinforce and perpetuate inequities already present in society. Increased access and the digital divide Accessibility aspects were discussed in 21 publications. AI may lead to increased access as an enabler for more equal healthcare provision. However, increased access also brings a risk for the healthcare system being overwhelmed by the ‘worried well’.5 Fiske et al25 discussed the risk of creating a two-tier system, where AI-augmented psychiatry disenables the option to provide ‘human services’ in rural and underserved areas. 
Conversely, the ‘digital divide’ was frequently discussed, not just regarding digital availability but also functional access. This was not only an issue of possessing the technology and infrastructure needed to interact with a digitalised care system, but also of having the skills to fully make use of it, as well as access to a private room.26 27 Related to accessibility, Clark et al28 highlighted the opportunity of using AI to predict population-wide morbidity and identify the social determinants driving HI from a primary care and psychiatry perspective. Thus, AI could, in this application, help to address these factors and subsequently improve equity. Trust of patients Trust aspects were discussed in ten of the publications. A recurring theme was that historically discriminated-against groups may be less inclined to trust, and thus take advantage of, AI. Veinot et al argued that ethnic minorities are more sceptical of digital health interventions than the majority population,4 a conclusion shared by Marcus et al, who, in their review, stated that privacy and security issues are major causes of distrust in AI among minority ethnicities.29 Involving the affected communities in the development and implementation of AI tools was again held up as key to mitigating this, among others by Howard and Borenstein.30 In contrast, Bigman et al found that the tendency to prefer the AI over a human doctor increased with the patient’s perceived underlying societal inequity; African-Americans became more likely to prefer the AI compared with Caucasians when there was a higher perceived ‘background’ of inequity.31 Dehumanisation and biomedicalisation Dehumanisation was discussed in 19 publications. As Coiera32 states, a more biomedicalised healthcare system may have adverse impacts on patients with complex needs, who are disproportionately from socioeconomically disadvantaged groups and of a minority ethnicity. The only included empirical study on impacted populations was by Miller et al,33 who surveyed users of a primary-care-triage AI-driven chatbot. Older patients with co-morbidities were less likely both to use and to appreciate the interventions, compared with young and healthy participants. Given that the prevalence of psychosocial morbidity is known to follow a socioeconomic gradient, it can be extrapolated that such developments would increase HI.8 However, Fiske et al25 hypothesise that such developments may have a beneficial effect on certain inequity issues relating to acceptability and perceived stigma, for example, concerning sexual health and psychiatric illness. Agency for self-care Four publications discussed patient agency and HI. HI may arise from an increased focus on patient-managed healthcare because of increased AI-utilisation. Kerr and Klonoff discussed the issue in relation to diabetes management, where there is an established difference in attitudes and ability to self-care between socioeconomic groups at present.34 Such socioeconomic gaps may widen unless AI interventions are properly tailored for the populations in which they are deployed. This closely aligned with the established concept of downstream interventions being inherently inequitable.4 Objective 2: how is the patient–doctor relationship assumed to be affected by an increased usage of AI in primary care, and what are the implications for healthcare equity? The topic was discussed in 13 sources.
As aforementioned, AI may lead to a shifting emphasis from social circumstances (including wider social determinants of health) to measurable, objective observations. Such developments may worsen inequities, in particular with regard to morbidity related to psychosocial factors. Romero-Brufau et al35 conducted qualitative interviews with primary care staff before and after the implementation of an AI-driven diabetes support tool; the AI tool was perceived to give biomedically sound recommendations but overlooked psychosocial factors that may have led to suboptimal diabetes control in some patients and was seen as not providing equitable care by the staff. Surveying general practitioners (GPs) directly, Blease et al36 found that 94% of GPs believed that AI would be unable to replace GPs in roles requiring empathic ability, over any time scale, a perception shared with informaticians interviewed by the same team.37 Along the same lines, Holford38 and Powell39 claimed that an integral part of the role of the doctor inevitably gets lost if the practice is translated into an algorithm. Using an anthropological perspective, Holford discussed this as the loss of deep knowledge, experience and intuition in relation to AI and technological progress. As tasks and jobs are broken up into simple standardised lists, implicit knowledge and intuition inevitably are lost, and it is currently impossible to replicate using AI. This would subsequently affect those in most need of compassion and holistic care. Objective 3: how can the implementation of AI affect inequity? Implementation aspects were discussed in 47 publications. Participatory approaches and community involvement Lack of diverse participation and community involvement was a risk factor for unequitable AI-interventions in healthcare, both in development and implementation of AI systems in the existing primary care system.20 40 Involvement of the target community throughout the whole implementation chain, from idea and problem formulation via data collection, datasets and regulatory environment all the way through implementation and end-users was key for equitable AI in general healthcare and primary care. Alami et al41 and Clark et al42 argued that there is an urgent need to ‘mainstream’ a fundamental understanding of AI and its potential effects on healthcare and health equity among both clinicians and policy makers. This serves both to build trust and enable an understanding of when and how a specific AI intervention is suitable and what can be done to optimise the equity effects of its implementation. Holzmeyer43 emphasises a comprehensive equity analysis as the starting point of all system interventions in healthcare: what is the root cause behind what we are trying to address; what are the relevant SDH; what are the historical contexts; and to what extent do stakeholders agree on these issues? Acceptance from care providers, loss of opportunity and equity Failed implementation may affect inequities both through loss of potentially equity-improving AI systems, and through pushing new technology to uncontrolled consumer products such as smartphone apps, leaving the traditional health system unable to manage increased health anxiety and care-seeking.2 44 45 Williams et al45 created a framework specifically focused on ensuring sustainable AI-implementation, emphasising the need to consider the system-wide external effects from new interventions. 
Primary care clinicians may be too busy and lack organisational resources to effectively adopt new technologies, risking poor uptake and leaving the field open to the commercial sector, more likely to cater to the ‘young and well’.40 45 46 Clinicians faced with an AI system perceived to not take SDH and personal circumstances into consideration may lose trust in AI technology at large, and object to further implementation, as discussed by Romero-Brufau et al.35 Alternatively, resistance will occur if they perceive that an AI intervention is pushed on them ‘for the sake of it’ rather than to solve a specified problem, as noted by Shaw et al.21 Ferryman et al47 suggested that an overemphasis on agility and rapid change in the regulatory environment causes a risk of equity-adverse products being implemented in the healthcare system. The potential conflict of interest between a fast-paced regulatory environment and a healthcare system inherently focused on safety and thorough evaluation was also highlighted in a recent NHSX (a digital innovation arm of NHS) report2 and by the WHO,48 among others. As discussed in the previous section, this may result in a loss of opportunity to improve healthcare equity, and again ‘handing over the ball’ to the commercial sector. Overconfidence in AI, fuelled by the perception of AI as a novel, exciting and superior technology delivered by commercial companies developing the systems, as well as a public ‘mythology’ around its superiority (as expressed by Keyes et al49) may displace other more effective programmes for addressing HI, such as addressing the SDH directly, and working with community groups.48 In a wider context, upstream interventions such as public health measures and direct action to SDH have been proven to be more effective in reducing inequities than downstream interventions, such as changes in care provision or new therapeutic options. As such, like any intervention without explicit equity focus, AI interventions in primary care may be intrinsically unequitable.4 Discussion Building on the themes identified above, the graphical conceptual model (figure 2) emphasises AI’s potential HI effects both inside and outside of the patient journey. That is, outside the patient journey meaning mechanisms not directly related to how patients interact with the primary care system. This highlights the importance of a system-wide perspective and of the concept of HI to be mainstreamed throughout the development and implementation process. While there was limited research connecting AI with the dehumanisation of primary care (a trend towards replacing clinicians with AI-augmented technology) and HI, a few assumptions can be made, in particular: The role of primary care as a mitigator and improver of HI is dependent on primary care clinicians being able to contextualise the care provided, work ‘outside the box’ and see to the social factors influencing patients’ health. This may involve recognising that a patient may not be able to stop smoking because she is currently worried about becoming homeless, or it may be necessary for a GP to deliver health motivating messages adapted to the individual’s unique circumstances. The prevalence of illnesses with a psychosocial component is heavily associated with low socioeconomic status,50 and to effectively support such patients requires understanding of, and the ability to deal with, the underlying causes. A purely biochemical approach to medicine is insufficient, particularly within more disadvantaged communities. 
Consequently, there is a risk that such developments, if made without equity in mind, would unduly affect the healthcare of socioeconomically disadvantaged communities, and thereby worsen HI. The way AI is implemented is integral to how well it interacts with current systems and the societal context, and by extension how it affects HI. Multiple publications discussed the risk of AI-augmented interventions being directed towards the young, healthy and well-off. This is because the disruptive traits of AI enable commercial providers to expand beyond comparatively costly and complicated human clinicians, for example, via smartphone apps. A recent case is Babylon Health’s GP at Hand system, where an AI-driven smartphone app enables users to be triaged, diagnosed or forwarded to a clinician directly from their phone. Initially, GP at Hand explicitly blocked patients with complex health needs from registering with the service. Babylon Health was consequently accused of ‘cherry picking’ patients whom their AI could care for sufficiently, leaving the complex patients to the traditional primary care centres, which in turn would see an increased workload while being drained of resources.27 While this was clearly a regulatory loophole that was subsequently addressed, it highlights the risk of AI being used to disrupt and commercialise the primary care system, and the inherent tendency to go after the ‘easy’, tech-savvy patients first. Social participation in developing and implementing AI interventions was prominent in the publications, as a way of promoting locally appropriate adaptation. While specific methods were not discussed in detail in the reviewed publications, a recent ‘citizen’s jury’ on AI and explainability provides an example of how it could be done.51 A similar approach could also be used to ensure that regulatory frameworks for AI in healthcare align with the needs of the affected populations. The need to ‘mainstream’ health equity throughout the whole implementation chain was a clear finding. Ensuring a system-wide basic understanding of SDH, HI and the role of primary care in addressing HI could help identify and avoid adverse effects. Finally, there is clearly a need to look outside of the isolated clinical context in assessing the impact of AI in primary care on HI. Most of society’s HI occurs outside of the primary care system as a consequence of SDH, and that is also where interventions to address inequities are bound to be most effective. Downstream interventions, such as clinical AI, tend by default to worsen inequities because more advantaged groups usually benefit the most. As Holzmeyer43 put it, the most important goal of AI in terms of HI is thus to do no bad, which by extension means it has to be explicitly and actively equity-promoting. More research is needed on the most effective ways to both design and assess new interventions from such holistic perspectives. We suggest that a useful output of such research could be guidance in the form of considered steps or a framework that includes equity considerations, to prevent fundamental mistakes being made that inadvertently generate wider inequalities. As outlined above, two public advisors made a significant contribution to the review, both through discussions on inclusion criteria and publication selection and through contributing an outside perspective. The review set out to cover HI related to ethnicity, gender and socioeconomic status.
Most included publications discuss HI generally, focusing on concepts applicable to various forms of HI. We recognise that while the fundamental mechanisms by which inequity occurs are shared across disadvantaged demographic groups, there is a further need to study discrimination by specific characteristics, including a wider range of marginalised populations. Finally, available resources prevented us from undertaking further secondary and tertiary reference screening, as well as more detailed searches with lower-level terminology, so there was a small risk that eligible articles were not included. Nine articles initially identified could not be retrieved, introducing a risk of selection bias, although a proportionally small one. Resource limitations also restricted the searches to the English and Scandinavian languages. Nonetheless, we are confident that this review provides a representative and largely comprehensive summary of the current state of research.

Conclusion

Using a systematic scoping review methodology, we have mapped the current research on AI and HI in the context of primary care and synthesised the findings into a conceptual framework: a theory of change. At the centre of this framework is the graphical depiction (figure 2), which combines established research on SDH and HI with themes identified in the reviewed literature and provides a holistic overview of the mechanisms at play. We highlight the complexity of assessing such a diverse concept as AI. While AI in primary care covers a wide array of current and potential applications, there are common traits inherent to AI as a technology. AI can be considered a core component of an ongoing paradigm shift in healthcare provision, perhaps most comparable to the rapid biomedical and pharmacological progress of the early and middle parts of the last century. From the findings, we note that academics as well as the regulatory establishment are still finding their way around AI in healthcare. We identified a relative wealth of publications covering algorithmic bias but, in terms of original research, very few publications discussed the wider impact of AI on patient care and the primary care system at large. Given the intersectoral and dynamic nature of HI and SDH, a wider perspective is needed to properly assess the potential effect of widespread AI implementation in primary care. No intervention can be implemented in isolation, and the role of the surrounding society, organisational infrastructure and regulatory frameworks cannot be overstated. All aspects need to be considered to implement equitable AI in an environment conducive to improving equity.

Data availability statement

All data relevant to the study are included in the article or uploaded as supplementary information.

Ethics statements

Patient consent for publication: Not applicable.
2022-11-01T00:00:00
2022/11/01
https://fmch.bmj.com/content/10/suppl_1/e001670
[ { "date": "2022/12/01", "position": 51, "query": "AI healthcare" } ]
Alaa Youssef: Mentoring the Next Generation of Health ...
Alaa Youssef: Mentoring the Next Generation of Health Care AI Experts
https://www.linkedin.com
[ "Ron Morrain", "Sudhir Raikar", "Guy Huntington", "Nina Kottler", "Md", "Ms", "Associate Chief Medical Officer", "Clinical Ai At Radiology Partners", "Creating Best Practices For Healthcare Ai", "Shilpi Agarwal" ]
Youssef, who works at the Stanford Center for Artificial Intelligence in Medicine and Imaging (AIMI), is doing precisely that as a teacher and mentor for ...
By Beth Jensen Alaa Youssef understands bias and barriers. Eleven years ago, the postdoctoral fellow at the Stanford Department of Radiology was a 22-year-old undergraduate student from Egypt who’d lived through the upheavals of the Arab Spring revolution. She soon found herself facing a different challenge here. “When I came to North America, I experienced discrimination both in Canada and in the U.S.,” she says. “Despite having Canadian citizenship, people would always question me, asking where I was originally from, as if I needed to demonstrate that I belonged, that I was competent, and that I fit in. It made me realize that boundaries in society are really just artificial and shouldn’t prevent people from pursuing their goals. Now, I’m passionate about making it easier for the next generation to have access to STEMM education regardless of where they come from or what their background is.” Today, Youssef, who works at the Stanford Center for Artificial Intelligence in Medicine and Imaging (AIMI), is doing precisely that as a teacher and mentor for students interested in careers at the intersection of artificial intelligence and health care. She recently helped lead three Stanford programs for high school and undergraduate students designed not only to boost the diversity of individuals entering the field of AI but to introduce those students to the challenges of ethical AI development and algorithmic bias. They included the AI4ALL program, offered through the Stanford Institute of Human-Centered Artificial Intelligence (HAI) where she co-led a medical imaging AI project; the Stanford AIMI Summer Research Internship program, which she founded and co-led with AIMI Executive Director Johanna Kim; and the Stanford Radiology AI Development Summer Program, where she served as mentorship and education director along with program directors David Larson and Bao Do. “The reason I care so much about leading these different programs is that education is a lens to help students think about problems in innovative ways,” Youssef says. “Most of these students already have pretty advanced programming skills. We’re just giving them the tools they need to think critically about how they’re using this technology. It’s the ability to be able to see fairness, to be able to think of the big picture, who is in the picture, and who is not in the picture. We want to raise these students to think early on about these issues, not after the fact.” When she’s not mentoring students, Youssef works with Larson, a professor of radiology, to develop quality and safety guidelines for the clinical deployment of AI-based medical devices. The work is critical to bridging the so-called “AI chasm” — the mismatch in the rate of AI development and its rate of implementation — and also to the development of a pipeline that guarantees the responsible deployment of AI for the public good, she says. Photo credit: Jeanina Casusi “My focus is to ensure that these technologies are safe and effective for patients and that clinicians can be confident in that safety and effectiveness,” she says. “The use of AI in medicine is relatively new, so we have to think about the problems — including the ethical ones — that might result. 
I’m interested in learning how we can evaluate algorithms to address ethical conundrums in a transparent and trustworthy way.” Everyone who participates in the health care chain, including those who use clinical data to create AI-based tools, has an ethical obligation to continuously work to improve that system by making it more safe, effective, patient-centered, and equitable, she says. “We need multidisciplinary groups of clinicians, developers, implementation scientists, and others who can bring their experiences to inform the process of algorithmic development from clinical data to implementation to evaluation,” she says. “Health care is a complex system. We’re dealing with dynamic environments that vary remarkably from urgent care to inpatient to outpatient care, and we must have strong evidence-based data that AI applications are safe and effective before presenting them to clinicians as tools. It’s like using a scalpel in surgery; if it’s not sterilized, I can’t use it. This is the same idea. If the tool is not effective, we can’t use it in patient care.” Youssef says her future path could lead her to policy work focusing on how to responsibly incorporate AI into medical settings or perhaps to medical school followed by work as a clinical scientist. “This work has provoked me to think of AI in different ways,” she says. “People often refer to the AI black box phenomenon — the fact that we don’t always understand how AI tools do what they do — and use that as a barrier to adoption. But we don’t talk about human cognition and bias as another type of black box that has a profound impact on the acceptance, risks, and safety policies governing AI in medicine.” Youssef adds, “I’m also thinking about how I can make the path to AI training and careers easier for the people following me. It’s important — and so rewarding — to help build the next generation of great mentors and great students because they’re the foundation for innovation.” This article is part of the People of HAI series which spotlights our community of scholars, faculty, students, and staff coming from different backgrounds and disciplines.
2022-12-01T00:00:00
https://www.linkedin.com/pulse/alaa-youssef-mentoring-next-generation-health-care-ai-experts-
[ { "date": "2022/12/01", "position": 71, "query": "AI healthcare" } ]
A Review on Innovation in Healthcare Sector (Telehealth) ...
A Review on Innovation in Healthcare Sector (Telehealth) through Artificial Intelligence
https://www.mdpi.com
[ "Amjad", "Kordel", "Fernandes", "Ayesha Amjad", "Piotr Kordel", "Gabriela Fernandes" ]
by A Amjad · 2023 · Cited by 222 — Artificial intelligence (AI) has entered the mainstream as computing power has improved. The healthcare industry is undergoing dramatic transformations at ...
Overall, AI in telehealth has the potential to improve sustainability by reducing waste, improving efficiency, and improving patient outcomes. By leveraging technology to provide better care, we can create a more sustainable healthcare system that benefits both patients and the environment.

Improved access to care: Telehealth can help to improve access to care for patients in rural or remote areas, who may not have easy access to healthcare facilities. This can help to reduce health disparities and ensure that everyone has access to quality care.

Better patient outcomes: By using AI to analyze patient data and provide personalized care plans, telehealth can help patients to manage chronic conditions more effectively and reduce the need for hospitalizations and emergency department visits.

Efficient use of resources: Telehealth can help healthcare providers to use resources more efficiently. For example, AI algorithms can help to predict which patients are at risk of developing certain conditions and enable providers to intervene early, potentially reducing the need for more costly interventions later on.

Reduced carbon footprint: Telehealth eliminates the need for patients to travel to clinics or hospitals for routine appointments, which reduces the carbon footprint associated with transportation. By reducing the number of vehicles on the road, telehealth can help to reduce air pollution and greenhouse gas emissions.

This review contributes to the ongoing dialogue on the role of AI in telehealth and highlights the need for continued research and development in this area. It emphasizes the need for interdisciplinary collaboration between healthcare providers, data scientists, and other stakeholders in the development of AI applications in telehealth, which aligns with the recommendations of many existing studies. It provides recommendations for future research, such as the need for rigorous evaluation of AI applications in telehealth and the importance of involving stakeholders in the design and implementation of AI systems, which is useful guidance for researchers and practitioners. It acknowledges the limitations of AI in telehealth, such as the need for high-quality data and the risk of bias and error, which is an important consideration in many existing studies. It also provides a comprehensive overview of the potential benefits and challenges of using AI in telehealth, which is lacking in many existing studies that focus on a specific aspect of AI or telehealth.

Our understanding of AI and data analytics continues to expand, allowing for even greater potential applications. Telemedicine aims to improve productivity by allocating resources according to the most pressing needs while also making available a wider range of specialized knowledge and trained personnel [ 8 ]. In this study, we examine the significance of AI and its potential future application to the goals of telemedicine. Uses of AI in telehealth, patient monitoring, IT healthcare, and information analysis are discussed in the article. Furthermore, the novelty and importance of this review lie in the following: research on effective and reliable procedures for medical fields and on wireless technology’s application to sensors has been growing over the last decade, particularly in the context of electronic patient records and home monitoring.
In the healthcare industry, data have always played a vital role in the provision of quality patient care and decision-making. With the advent of digital technology in healthcare, a vast amount of data is being generated from various sources, such as medical equipment, insurance, life sciences, and medical research [ 3 4 ]. The abundance of data presents a tremendous opportunity to transform healthcare by providing valuable insights to support decision-making, improve patient care, respond to real-time situations, and save more lives. This transformation is made possible by leveraging advanced analytics, machine learning, and artificial intelligence techniques that allow stakeholders to extract meaningful and actionable insights from the data [ 5 ]. In addition to analyzing historical data, these techniques enable predictive analytics to forecast future outcomes and prescriptive analytics to determine the best course of action for the current situation. By harnessing the power of data, healthcare stakeholders can optimize resource utilization, improve processes and services, and reduce costs on the operational and financial front [ 6 7 ]. Telemedicine is the practice of conducting remote medical consultations, exams, and operations and facilitating interprofessional teamwork in the healthcare industry. It is frequently emphasized that telemedicine is an “open and continually growing science” that reacts and adapts to the changing health demands and settings of communities by incorporating new technological developments. The major goals of telemedicine include: [ 2 With the internet’s reach and data processing power on the rise, the global health business, and telemedicine in particular, has new growth opportunities. Data sharing and analysis, wearables, cloud computing, robotics, and the Internet of Things (IoT) are just a few of the potential areas for innovation in the coming decade. These considerations highlight the importance of artificial intelligence (AI) in healthcare technology administration and deployment. In healthcare, it is utilized to deal with the influx of data, the need for precision in complicated operations, and the growing demand for medical treatment. Automating hospital logistics is necessary to improve the efficacy of resource allocation and the delivery of healthcare demands and operations [ 1 ]. To enable the content analysis, the articles’ basic information was translated into an Excel file, according to an initial codebook developed by the authors, which was adjusted and updated during the full-text analysis to incorporate new categories, until no new categories emerged. The authors first coded all the selected articles and subsequently compared and discussed the results for each article to resolve any discrepancies in coding Table 1 . Themes found inductively during the literature review process could be examined through full-text analysis of the literature. In total, 116 articles chosen in the preliminary screening were examined to gauge the caliber of the sources and the applicability of the subject under investigation as well as their annual distribution. The journals were chosen from Scopus, Business Source Complete, and Web of Science for further article clarity based on the references. After screening the papers, a full-text analysis was performed on the 70 papers selected to enable the content analysis. The technique proposes a structured approach which enables the identification of the papers’ focus and the emerging patterns in the body of knowledge. 
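To make the screening and deduplication bookkeeping described above more concrete, the following is a minimal sketch, not the authors' actual tooling, of how records exported from several databases might be merged, deduplicated, and filtered in a PRISMA-style flow. The record fields, the sample data, and the helper names are hypothetical assumptions for illustration only.

```python
# Minimal PRISMA-style screening bookkeeping (illustrative sketch only).
# Assumes each exported record is a dict with 'title', 'doi', 'source',
# 'peer_reviewed', and 'language' fields.

def normalise(title: str) -> str:
    """Crude title normalisation, used as a fallback deduplication key."""
    return "".join(ch.lower() for ch in title if ch.isalnum())

def deduplicate(records):
    """Keep one record per DOI (or per normalised title when the DOI is missing)."""
    seen, unique = set(), []
    for rec in records:
        key = rec.get("doi") or normalise(rec["title"])
        if key not in seen:
            seen.add(key)
            unique.append(rec)
    return unique

def screen(records):
    """Apply eligibility filters analogous to those in the review (peer reviewed, English)."""
    return [r for r in records if r.get("peer_reviewed") and r.get("language") == "en"]

if __name__ == "__main__":
    # Hypothetical exports from three databases.
    records = [
        {"title": "AI and telehealth", "doi": "10.1/a", "source": "Scopus",
         "peer_reviewed": True, "language": "en"},
        {"title": "AI and Telehealth", "doi": "10.1/a", "source": "WoS",
         "peer_reviewed": True, "language": "en"},
        {"title": "Conference note", "doi": None, "source": "EBSCO",
         "peer_reviewed": False, "language": "en"},
    ]
    unique = deduplicate(records)
    eligible = screen(unique)
    print(f"identified={len(records)} after_dedup={len(unique)} screened_in={len(eligible)}")
```

In a real review the counts printed at each stage would feed directly into the PRISMA flow diagram.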
Only peer-reviewed papers were considered in this research. Articles published in outlets such as conference proceedings, reviews, book chapters, etc., and articles not written in the English language were excluded. Duplicates were also eliminated, reducing the number for further analysis to 516. Next, a preliminary screening was performed by analyzing the papers’ titles and, if needed, their abstracts. Only 200 papers—all of which the authors unanimously agreed discussed the links between health innovation and telehealth—were chosen for full-text analysis. This includes 10 papers that were identified through the snowball sampling technique. The authors determined from the full-text analysis that 70 articles contributed to the topic and were pertinent to respond to the research questions guiding this paper. All papers were examined by the three authors to guarantee the validity of the selection procedure, and any disagreements were discussed until an agreement was achieved; only then were the articles chosen for full-text analysis. The search was conducted in December of 2022 using three academic databases, viz., Scopus, Web of Science (WoS), and the Business Source Complete and Academic Search Complete collections in the EBSCO database. The search string was searched for in titles, keywords, and abstracts and the initial search encountered 1018 papers (391 in WoS, 415 in Scopus, 212 in Business Source Complete and Academic Search Complete). The first step of the SLR includes planning and searching for relevant articles using electronic databases. Therefore, the process began by identifying the relevant, peer-reviewed literature relating to the issue being investigated. The article selection process, shown in Figure 1 , followed the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. A set of keywords to search the academic research databases was defined by the authors. Several keywords related to telehealth, artificial intelligence, and telemedicine were used to create different strings to search the databases and the final string the authors agreed upon was the following: To address the research questions driving this review, the systematic literature review (SLR) methodology was chosen because it makes it easier to develop trustworthy knowledge from a dispersed knowledge base and allows the researcher to map and assess the body of knowledge that is currently available. In fact, the SLR is a research approach that can synthesize the body of current information, identify gaps in the literature, and create new research objectives, all of which are aims of this study. The SLR identifies, selects, and critically assesses research to provide a solution to a well-stated issue. It adheres to a set of precise rules to conduct a thorough, repeatable, and transparent search for the most pertinent literature [ 9 ]. In that regard, an SLR was carried out by adhering to the three-step method recommended by Tranfield et al. [ 9 ] to respond to the research questions in this paper: planning, carrying out, reporting and disseminating the review are the first three steps. For cancer patients, telehealth has made it easier to receive palliative care at home. With the help of mobile applications, patients can communicate with their doctors and loved ones and access the necessary resources for providing quality care. This seamless communication allows for timely support, enhancing patient care and improving the quality of life during long-term treatment processes. 
Telehealth plays an essential role in enhancing patient care by providing timely resources and support, even when patients are receiving treatment from the comfort of their homes. It is a cost-effective and convenient alternative for managing long-term conditions and providing access to mental health specialists in remote regions. With the rapid advancement in communication technologies, telehealth will undoubtedly continue to revolutionize the delivery of medical treatment in the future [ 11 ]. One of the most promising uses of telehealth is in the field of digital psychotherapy. It facilitates constant two-way communication between patients and their psychiatrists. Telepsychiatry may save money compared to traditional therapy sessions. To make up for the absence of psychiatrists and other mental health specialists in the country’s outlying regions, digital psychotherapy has emerged as a means for individuals to connect with their metropolitan providers. Patients may find that mobile applications are a more convenient and cost-effective option than telepsychiatry. Smartphones may provide home care for those with mental diseases such as depression, anxiety, schizophrenia, and more. Across the globe, cancer is the biggest killer. Most people with cancer need to be monitored consistently to keep their condition under control. It is common practice to employ telemedicine to manage long-term conditions such as cardiovascular disease, diabetes mellitus, cancer, and mental illness. In certain cases, such as with elderly patients, telemedicine may be a viable, cost-effective option. Patients, particularly those in remote places, may easily keep tabs on their care with mHealth applications. In the past, telehealth was divided into two distinct types: synchronous, where the patient and doctor are in constant contact, and asynchronous, where messages are stored and sent at a later time. Telemonitoring, the third kind, has recently been recognized; it entails gathering information from dispersed devices, such as those connected to the Internet of Things (IoT). Four examples of well-embedded telehealth services were identified in the most recent WHO global eHealth observatory survey: teleradiology, telepathology, teledermatology, and telepsychiatry. The first three use asynchronous models of care, while the fourth uses a synchronous model. This exemplifies the inherent challenge of substituting or supplementing services supplied in real time by clinicians. More than 60% of respondents to the same poll also cited a lack of knowledge of clinical practice connected to telehealth as a major barrier to widespread adoption [ 10 ]. Through electronic means of communication, telehealth facilitates the transmission of health-related data that may then be used to provide direct patient care, as well as patient education and training. It aims to make healthcare more accessible and affordable in developed and developing countries by removing barriers such as time, distance, and geography. It plays an important part in times of crisis, such as natural disasters. Lengthier life expectancy and the accompanying rise in the prevalence of chronic illnesses have increased both the need for and complexity of treatment, resulting in longer contacts between patients and clinicians (and across providers) and, therefore, a greater need for telehealth care assistance. The ability to spot patterns quickly and with planned accuracy helps find the best medical practices and judgments. 
The ability to enhance surgical procedures and foresee problems depends on identifying patterns in the outcomes of these procedures. More and more hospitals are using such tools to maximize the impact of their available resources [ 12 ]. This is a reasonable approach to summarize the possible effects of AI on telehealth, given the breadth of those effects. They may be merged into two main foci, both of which will become topics of discussion in the next sections of this study. First, bolstering high standards for established clinical practice and service delivery, and second, developing and bolstering experimental approaches to patient care. As [ 44 ] elaborated, artificial intelligence encompasses “problem-solving and search, logic and inference, planning, probabilistic reasoning and decision making, learning, communication, vision, and robotics.” They also theorized that computers may act as “intelligent agents” in the field of AI and that these machines would be able to replicate human cognitive behavior and cognitive performance to the point that they would be able to outperform humans in these areas. The reasoning behind this was the assumption that computers might execute mental tasks in a human-like manner and have human-like cognitive abilities. In a recent article, ref. [ 45 ] summarized the possible influence of AI in telemedicine based on four developing themes serving different healthcare goals. A few examples of these tendencies include: Medical care is becoming more complex due to the steady stream of discoveries and advancements, so a single clinician can no longer handle it. By tailoring care to the unique needs of each patient or patient cohort, AI can facilitate the development of expertise in clinical procedures. Artificial intelligence (AI) may be used, for instance, to learn how to use recent medical care advances best. To extend life expectancy and effectively treat chronic illnesses that include numerous ailments, a team approach comprising specialists from many fields is essential. Care must be provided at the patient’s own home or another location within the community [ 42 ]. Inadequate adherence to integrated care pathways and care continuums has a detrimental effect on healthcare access and quality for all patients. Support or remote care is required due to the inability of all team members to be physically present at all times. There should be consistent communication and connections between the various parts of the healthcare delivery system. Artificial intelligence (AI) might assist in meeting this demand by creating a smart information and communication environment in which practitioners could collaborate and by keeping a comprehensive virtual knowledge base throughout a patient’s illness and therapy [ 43 ]. According to [ 36 ], that looked at the most recent developments in the field of telehealth, two major forces are driving change: high volume demand, which is occurring because it is becoming increasingly difficult to co-locate the patient physically, the clinicians, and the associated data; and high criticality applications, which are situations in which specialized expertise is required at the precise moment that clinical demand arises. The provision of medical treatment will always include at least one face-to-face clinical encounter, the needed frequency of which will vary depending on the case’s specifics [ 37 ]. This is true regardless of the mode of delivery that is chosen. 
In the telehealth field, information and communication technology (ICT) applications may be used to find solutions to problems caused by an imbalance between the demand for and supply of medical services. AI could help with this problem by building algorithms that match the number of care providers who have the relevant clinical abilities to the number of people who need those skill sets in the nearby area [ 38 ]. Telehealth does, however, provide several operational challenges, such as when the connection to the remote communications system breaks or when the physician providing remote treatment is not accessible remotely. AI can eliminate various issues in healthcare departments. Interaction between humans and computers can be increased by using artificial intelligence to provide better situations and solve problems [ 39 41 ]. Dealing with cognitively impaired individuals, making online clinical information more accessible, and providing avatar-based patient agents are just a few examples of health applications where virtual assistants can provide a viable supplement or alternative to conventional healthcare delivery models [ 33 ]. In these situations, the conversational goal and knowledge base need to be increasingly complicated, and the complexity of the AI agent increases as it learns more from the data it collects. To have a genuine discussion, it may be important to include emotive behavior via multimodal contextual awareness processes [ 34 ]. For instance, a personalized model of the individual’s context will be required in addition to the context model for the present discussion if difficulties emerging from the patient’s prior interactions or past medical history need to be addressed in making conversational judgments [ 35 ]. In healthcare, the potential for AI-assisted virtual assistants is significant. They can provide healthcare professionals with a cost-effective and scalable solution for patient care, helping to reduce the burden on overworked healthcare systems [ 31 ]. Furthermore, the use of AI can help to improve patient outcomes by providing personalized, real-time advice and support. As the technology continues to evolve, we can expect to see an increasing number of healthcare applications that leverage AI to improve patient care and outcomes [ 32 ]. However, more complex interactions require a higher level of AI sophistication. These may include virtual assistants capable of responding to questions, providing health advice, or even performing basic diagnosis. For example, virtual assistants could provide assistance to individuals with cognitive impairments or offer avatar-based support to elderly patients. To facilitate these types of interactions, AI models must be capable of understanding the context of the conversation and adapting to the user’s needs [ 30 ]. In recent years, the use of artificial intelligence (AI) in healthcare and telehealth has been steadily increasing. One area of development is the use of computer-generated conversation to facilitate communication between patients and healthcare providers. This technology can provide a range of benefits, from enabling remote therapy sessions to automating simple healthcare tasks. The potential uses of automated conversational interactions are vast and can range from simple message delivery to more complex interactions. For basic tasks, such as providing reminders or confirming appointments, simple audio or visual cues may be sufficient. 
These types of interactions can be managed by rule-based systems, which rely on pre-defined decision trees or expert systems to construct responses to user inputs [ 29 ]. AI can also be used in telehealth for remote monitoring and early detection of health problems. Wearable devices and sensors can collect patient data and send them to AI algorithms for analysis. These algorithms can detect changes in a patient’s health status and alert healthcare providers if action is necessary, allowing for early intervention and improved patient outcomes. AI can also assist in reducing healthcare costs by automating routine tasks and improving the efficiency of healthcare delivery. For example, AI algorithms can help optimize staffing levels and improve patient flow, leading to reduced wait times and shorter hospital stays [ 28 ]. Another important application of AI in telehealth is the development of chatbots and virtual assistants. Chatbots and virtual assistants can help patients access healthcare resources and provide basic medical advice. These tools can also help healthcare providers manage their workflows, such as scheduling appointments and reminding patients to take their medication [ 27 ]. One significant benefit of AI in telehealth is the ability to process large amounts of data quickly and accurately. AI algorithms can sift through medical records, lab results, and patient-generated data to identify patterns and make predictions about a patient’s health status. This information can help healthcare providers make more informed decisions about patient care and treatment plans, leading to better outcomes [ 26 ]. Artificial intelligence (AI) has been rapidly expanding into the healthcare industry, and telehealth is no exception. Telehealth refers to the use of technology to deliver remote healthcare services, including diagnosis, treatment, and monitoring. AI technology can assist telehealth in several ways, such as improving diagnostic accuracy, enhancing patient outcomes, and increasing the efficiency of healthcare delivery [ 25 ]. Different initiatives are starting to use location-based information and GeoAI for treatment and other healthcare applications. Expanding existing GeoAI technology will allow for new avenues of study in spatial epidemiology and public health, such as modelling places that have yet to be recorded in high resolution, or analytics that produce new, geographically vast data sources [ 24 ]. Geospatial artificial intelligence (GeoAI) has been used to locate disease hotspots and evaluate their geographical patterns, and has been utilized for modelling and tracking disease outbreaks in infectious disease research. In one example, an artificial neural network algorithm combined geotagged tweets from Twitter with CDC statistics on influenza-like illness to provide real-time disease predictions; the geotagged tweets were centered on the location from which each tweet was sent, allowing its spread to be tracked. Another study employed a machine learning strategy to forecast seasonal influenza epidemiology in the United States [ 21 23 ]. The term “artificial intelligence” (AI) was popularized in the 1950s [ 13 ]. Artificial intelligence is the capacity of computers to simulate human intelligence by acting and reacting to events in a predictable manner. Developers and scientists have been working to enhance AI-based algorithms since the late 1990s, when network and technological infrastructures began to mature.
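As one illustration of the remote-monitoring pattern described earlier in this section (wearables stream data, an algorithm flags deviations, a clinician is alerted), here is a minimal sketch. The rolling-statistics rule, the thresholds, and the simulated heart-rate stream are illustrative assumptions, not a clinically validated method.

```python
from collections import deque
from statistics import mean, stdev

def detect_anomalies(samples, window=30, z_threshold=3.0):
    """Flag heart-rate samples that deviate strongly from the recent rolling baseline.

    samples: iterable of (timestamp, beats_per_minute) pairs.
    Returns a list of (timestamp, bpm, z_score) tuples to forward to the care team.
    """
    history = deque(maxlen=window)
    alerts = []
    for ts, bpm in samples:
        if len(history) >= 5:  # need a minimal baseline before scoring
            mu, sigma = mean(history), stdev(history)
            if sigma > 0:
                z = (bpm - mu) / sigma
                if abs(z) >= z_threshold:
                    alerts.append((ts, bpm, round(z, 1)))
        history.append(bpm)
    return alerts

if __name__ == "__main__":
    # Hypothetical stream: steady resting heart rate followed by one abrupt spike.
    heart_rate_stream = [(t, 72 + (t % 3)) for t in range(60)] + [(60, 140)]
    for ts, bpm, z in detect_anomalies(heart_rate_stream):
        print(f"alert at t={ts}s: {bpm} bpm (z={z}) -> notify care team")
```

Production telemonitoring systems would of course use validated clinical thresholds and escalation pathways rather than a single statistical rule.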
Functionality and technical considerations have led to the categorization of AI into seven distinct varieties. Machine learning and deep learning are two branches of artificial intelligence [ 14 ]. Artificial intelligence (AI) describes computer programs that can learn and improve their performance over time. Deep learning uses artificial neural networks modelled loosely on the human brain, while machine learning uses methods that learn from previous examples to make judgments [ 15 ] ( Table 2 ). Teledermatology is an established branch of telediagnostics that may benefit greatly from AI-powered automation. The current state of melanoma diagnosis relies on the expertise and training of the examining physician. It was recently reported that a computer program using convolutional neural networks was more accurate in detecting melanoma than most of the 58 dermatologists assessed alongside it (median area under the receiver operating characteristic curve 0.86 vs. 0.79, P < 0.01). Another study classified skin lesions using deep convolutional neural networks (CNNs) that relied only on pixel-level image data and disease labels during training, showing that AI is capable of skin cancer classification at a level of expertise on par with that of dermatologists. Screening for breast and cervical cancer, for example, might also benefit greatly from automated diagnosis [ 7 50 ]. In the past, doctors relied on clinical examinations to make diagnoses; nowadays, patients and doctors rely on a combination of objective evidence and the doctor’s expert interpretation of those data. In this context, AI has emerged in recent years to greatly assist medical professionals in evaluating evidence. Considering how illnesses progress towards malignancy, different diagnostic patterns may represent different cancer risk profiles, which is especially important in oncology. Prediction may be improved by simulating disease development and possible changes in its course. AI is on track to significantly affect illness diagnosis worldwide by applying machine learning algorithms to enormous databases of disease populations [ 49 ]. Because of advancements in telemedicine, off-site medical diagnosis is now a real possibility, and medical professionals can diagnose and treat patients remotely. Patients with diabetic retinopathy, for instance, have needed fewer in-person appointments, and artificial intelligence (AI) in screening procedures is expected to reduce visits even further. If an AI system is included in the screening method, the algorithm requires only retinal images; the AI system can analyze the photos and precisely assess the disease’s severity by comparing them to historical data. Doctors and patients would benefit greatly from using AI in the screening process, since it reduces the amount of time and effort required. Because of the ease of identification enabled by AI in telemedicine, doctors and patients can expect diagnosis to become more efficient, saving both time and money [ 48 ]. For instance, if a patient is experiencing chest discomfort and is at risk of myocardial infarction, streptokinase, or at least Sorbitrate or aspirin, should be urgently considered for remote administration even if a doctor cannot be reached. With the help of a local nurse practitioner, the patient may benefit immediately from these inquiries and their findings. The user interface may be made more accessible for such questions by using universally understood icons.
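The CNN-based skin-lesion work cited above is not reproduced here, but the following minimal PyTorch sketch shows the general shape of such a classifier trained on pixel data and image-level labels. The architecture, image size, class count, and random tensors are assumptions for illustration, not the published models.

```python
import torch
import torch.nn as nn

class TinyLesionCNN(nn.Module):
    """A deliberately small CNN illustrating image-plus-label training on dermoscopy-style images."""
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(32, num_classes)

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))

if __name__ == "__main__":
    model = TinyLesionCNN()
    optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()

    # Hypothetical batch: 8 RGB images (64x64) with binary labels (0 = benign, 1 = malignant).
    images = torch.randn(8, 3, 64, 64)
    labels = torch.randint(0, 2, (8,))

    logits = model(images)
    loss = loss_fn(logits, labels)
    loss.backward()
    optimiser.step()
    print("one training step done, loss =", float(loss))
```

Published dermatology models use far deeper, pretrained networks and curated image archives; the point here is only the supervised pixels-to-label pattern.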
A patient’s medical history might provide important insights for a seasoned doctor trying to diagnose their condition. It helps with the diagnosis, of course, but it also helps narrow down the scope of the study. Artificial intelligence (AI) has the potential to streamline the history-taking process by giving suggestions, diagnostic clues, and the right follow-up questions depending on the clinician’s response. Someone with gastritis, for instance, may have long-term, dull aching discomfort in the upper abdomen without disrupting sleep. Effective use of mobile ICT may allow for the delivery of a summary of such queries asked in succession as part of a telehealth application. Investigations simplify telecare because of the convenience of collecting and transmitting data in this way. Computed tomography, magnetic resonance imaging (MRI), and ultrasound (gallstone, liver abscess) have helped in the diagnosis and enhanced the importance of computed diagnosis. The expenses associated with this level of complexity have yet to be justified by the quality gains seen in healthcare delivery. Because it is time-consuming and tedious for doctors to take a patient’s history, telemedicine is only sometimes utilized to its full potential. It is a pity since it is a step that can be accomplished remotely and does not call for any specialized tools. However, the cost–benefit advantage of telecare is diminished when conducting high-end examinations due to the need for more expensive distant infrastructure. Clinical assessment used to rely heavily on a patient’s history, and a physical exam before sophisticated diagnostic tools and imaging technologies such as MRI and CT were widely available. The authors of [ 46 ] found that a patient’s medical history accounted for 76% of the diagnostic process, whereas a physical exam only accounted for 11%. The researchers discovered that the significance of the history was about the same, while the importance of the examination was considerably lower (7.6%). Healthcare is deteriorating due to the declining skill in taking and analyzing patient histories in the present day [ 47 ]. There has been an explosion in the volume of health-related data generated digitally by patients and medical professionals in the last few years. As a result, there has been a push for the general use of EHRs (electronic health record systems) and the automated collection of patients’ data. The availability of huge datasets and the rapid development of computational data science allow for the extraction of innovative findings and actionable insights with the potential to significantly improve health outcomes (including AI-based machine learning approaches). As a result of the increased sophistication of the data available, clinical decision-making may be supported by automated methods, which in turn encourages progress toward intelligent aid and diagnosis [ 45 ]. Because public health problems are not going away, executives in the healthcare industry are increasingly focusing on artificial intelligence (AI) and telehealth. Artificial intelligence has been of great use in monitoring customers’ welfare in the comfort of their homes. AI could be utilized to aid physicians in diagnosing and treating patients, as well as to prevent professional burnout and enhance the overall patient experience. These applications could also be used to improve the overall patient experience. 
Artificial intelligence has made it possible to simulate face-to-face interactions between medical professionals and their patients while also allowing for remote patient health monitoring, and there has been a recent uptick in interest in using artificial intelligence in telehealth settings. AI-based automated decision-making solutions are available for use by general practitioners and specialists. The most common applications of artificial intelligence in telehealth include the following: studies on patients with chronic diseases, including diabetes, COPD, and heart failure, have tested the viability of telemonitoring as a means of remote surveillance, and there have been improvements in both the therapy and monitoring of COPD thanks to AI methods. The Classification and Regression Tree (CART) algorithm, for example, has been used to support diagnosis: data from telehealth measurements taken at home are used to detect disease at an early stage, and such algorithms and procedures are very useful for diagnosing disease early (a minimal sketch of this idea is given after this passage). In addition to the measures described above that are focused on deterioration, the management of recovery provides another option for patient monitoring through telehealth. For instance, software that measures the extent of wounds may be used as a complement to visual evaluation, which can improve performance and help with remote treatment. In this particular illustration, a specialized scale that computers can read is made available, and automated image-processing techniques are used to adjust the contrast of the picture to facilitate accurate sizing and automatic determination of the border and area dimensions. Human supervision and correction are currently required, and this may continue to be the case for a significant portion of AI-based telemonitoring, at least in the short term [ 54 ]. The complete patient monitoring system works in a managed way: data are collected from patients using sensors, transmitted to the clinic, and combined with other data; the combination of previous and current data is then used to take appropriate actions, such as escalating the patient’s care. Telemonitoring AI systems are dependent on, and contribute to the growth of, other ICT parts of the healthcare system. In certain situations, they may even be superior to humans: they reliably and mathematically carry out their commands, mostly based on internal logic but also factoring in statistical evidence acquired through machine learning techniques from massive datasets. Data from other devices, such as global positioning systems (GPS), accelerometers, motion sensors, and gyroscopes, may be instantly integrated and coordinated, whereas humans would require time-consuming education and training to incorporate these new data into care delivery. Health applications where virtual assistants may be a useful addition to or replacement for conventional healthcare providers include helping people with cognitive impairments and increasing access to online clinical information. In these cases, it is crucial to have a deeper conversational purpose and knowledge base, and the complexity of the AI grows as the agent learns more about its surroundings. Using multimodal contextual awareness techniques, including elements of emotional behavior, may be necessary to promote a genuine conversational dialogue.
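Here is the minimal sketch of the CART idea mentioned above, using scikit-learn's DecisionTreeClassifier (an optimised CART variant) on made-up home-telemonitoring features. The feature names, the tiny dataset, and the labels are hypothetical assumptions for illustration, not data from any of the cited studies.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

# Hypothetical home-telemonitoring features per patient-day:
# [resting heart rate (bpm), SpO2 (%), weight change over 7 days (kg)]
X = np.array([
    [70, 97, 0.1], [68, 98, -0.2], [72, 96, 0.0], [75, 97, 0.3],   # stable
    [88, 92, 2.1], [95, 90, 2.8], [90, 91, 1.9], [85, 93, 2.4],   # deteriorating
])
y = np.array([0, 0, 0, 0, 1, 1, 1, 1])  # 1 = flag for early clinical review

# CART-style decision tree kept shallow so the learned rules stay readable.
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

print(export_text(tree, feature_names=["heart_rate", "spo2", "weight_gain_7d"]))
new_day = [[82, 94, 1.5]]
print("flag for review:", bool(tree.predict(new_day)[0]))
```

The readable if-then structure of the fitted tree is one reason CART-style models appear in telemonitoring research: clinicians can inspect the thresholds the model has learned.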
Ref. [ 53 ] states that a chatbot is “a system that aims to imitate dialogue rather than comprehend it.” Chatbots may answer customer service questions, provide instructions, or perform searches; in the corporate world, they are used as virtual assistants or as the initial point of contact. The first successful chatbot, ELIZA, engaged in natural-sounding conversations with humans by paraphrasing input phrases when they matched a set of rules. Voice recognition is another developing tool. These advancements in technology allow for the provision of a round-the-clock service to assist patients and their caretakers, and there is also the possibility of a hybrid approach in which chatbots and people work together to provide patient care. Virtual assistant (or conversational agent) solutions may range widely in scope and sophistication. Audio speech or visual text communication is typically sufficient for basic activities, which need nothing more than a message or signal to alert the receiver, and perhaps a low-complexity response such as a confirmation acknowledgement. Chatbots that can take voice or textual inputs and respond in kind are one such technology. They may be programmed with restricted dialogue systems that can understand and generate just a small subset of sentences for a certain task; solutions such as this work best when both the circumstance and the user’s context are well established and straightforward. Expert systems and decision trees are common rule-based AI methods used by these agents [ 48 52 ]. Computer-to-human engagement in telehealth is a logical extension of human-to-human communication, made possible by advances in computer-based dialogue production and interpretation. The trade-offs between a human counsellor and a computer have been recognized for some time, and recent years have shown the benefits of synchronous, task-oriented computer-generated discourse for various applications, including those dealing with mental health. There are several applications throughout the care continuum where automated conversational interactions may supplement or even replace human carer duties [ 51 ]. There has been a notable rise in the prevalence of chronic illnesses throughout the world, and the traditional methods of healthcare delivery are overstressed to the point that they are no longer viable; this is compounded by the fact that the population of aging people with multiple morbidities is expanding. Telehealth is a viable option that may maximize the use of information and communication technology for diagnosing, monitoring, and delivering healthcare at a distance. However, the full promise of delivering healthcare across long distances has yet to be realized because of several system-level restrictions that have precluded the scaling up of innovative telehealth models at the national or regional levels. Recent studies on telehealth interventions have found that to successfully implement a complex innovation, such as remote care, it needs to be allowed to develop naturally, respond and adapt to the local health and social care system, and be pushed forward with the backing of front-line staff and management. Consequently, the improved healthcare opportunities afforded by patient monitoring must be implemented in an environment that places a premium on information analysis and communication between care teams and the organizations accountable for health systems.
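To ground the description of restricted, rule-based dialogue systems above, here is a minimal ELIZA-style sketch in which responses come from ordered pattern rules rather than any comprehension of the message. The rules, wording, and patient messages are illustrative assumptions, not a clinical product.

```python
import re

# Ordered (pattern, response-template) rules; the first match wins.
RULES = [
    (re.compile(r"\b(remind|reminder)\b", re.I),
     "I can note a reminder for that. When should it be sent?"),
    (re.compile(r"\bappointment\b", re.I),
     "Would you like to book, change, or cancel an appointment?"),
    (re.compile(r"\bi feel (.+)", re.I),
     "How long have you been feeling {0}?"),
    (re.compile(r"\b(chest pain|can't breathe|cannot breathe)\b", re.I),
     "That may be urgent. Please contact emergency services or your clinician now."),
]

def reply(message: str) -> str:
    """Return the first rule-based response that matches, else a safe fallback."""
    for pattern, template in RULES:
        match = pattern.search(message)
        if match:
            return template.format(*match.groups())
    return "I'm not sure I understood. A member of the care team will follow up."

if __name__ == "__main__":
    for text in ["I feel dizzy after my new tablets",
                 "Can I move my appointment?",
                 "chest pain since this morning"]:
        print(f"patient: {text}\nbot:     {reply(text)}\n")
```

As the surrounding text notes, this kind of agent only works when the task and context are narrow and well defined; anything outside the rule set must fall through to a human.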
Wireless telemedicine applications may also be used in emergency circumstances, such as when an ambulance has to share its patient’s medical records with a hospital or clinic nearby to work together to provide the patient with prompt, high-quality treatment [ 59 ] ( Figure 2 ). As is typical for the industry, self-diagnosing software and gadgets have emerged for rapid assessments of vital signs, including pulse, heart rate, and respiration. For example, a new firm, named Lemonaid Health, has created an AI model for screening and evaluating patients based on their responses to a questionnaire and their satisfaction with a few conditions. After the initial evaluation, the patient will be assigned to a certain group. After determining the best course of action, doctors may provide telephone consultations or arrange for direct prescription delivery, depending on the circumstances. Carbon Health has also designed a chatbot-based triage evaluation method. A consultation will be scheduled if necessary. The program will keep an eye on medication compliance and the emergence of new symptoms [ 59 ]. The incorporation of mechanical aids and the intelligent use of medical data and findings are two major current trends in robotic technology. Both aspects are designed to benefit the present healthcare system by helping patients in some way, either physically or through studying the initial medical assessment. Neural networks and machine learning may be used algorithmically to make these instruments functional. Because of this, the technology may learn from its inputs and improve over time. Self-diagnosis technologies in telehealth are already under development, with a variety of software and smartphone apps on the horizon [ 58 ]. Healthcare delivery through telemedicine is another area where AI is being used. Healthcare, including medical treatment, may now be provided to patients over great distances because of advancements in telemedicine. Patients in rural locations or those with mobility issues may benefit from this care delivery. Artificial intelligence (AI) is being utilized in telemedicine, for instance, to provide doctors with instantaneous feedback. Machine learning algorithms are used to examine patient data and draw conclusions that may be used to enhance medical treatment. Teladoc is a service that connects patients with physicians via video conferencing and utilizes machine learning to provide clinicians with immediate feedback during in-person appointments. Automatic reminders are another way that AI is being utilized in telemedicine. Machine learning techniques are used to examine patient records to achieve this goal. Even though telemedicine has the potential to reduce the number of in-person hospital visits, these trips are nevertheless often required. When faced with such circumstances, AI can aid in lowering the time patients are required to wait before being seen and in ensuring that patients are treated as quickly as feasible. The artificial intelligence system provides members of staff with up-to-date information on the influx of patients visiting the hospital, high-priority cases, the need for more beds, and other aspects of patient care. As a direct result of the initiative, the medical facility’s ability to treat patients with complex medical conditions has increased by around sixty per cent. In addition, the effectiveness of ambulance services has increased, which has resulted in ambulances being sent one hour sooner. 
Patients in the emergency department may also be given a bed up to thirty per cent quicker in certain places thanks to artificial intelligence and predictive analytics. As telemedicine and telehealth gain in popularity, the role of artificial intelligence (AI) in the industry will grow significantly. Healthcare workers will benefit greatly from incorporating AI into telehealth applications. Decreasing patient wait times, advising on the best treatment options, and, most importantly, making healthcare available 24 h a day, 7 days a week will help cut costs, deliver better healthcare to people, and enhance the work environment. Better healthcare for the public is another benefit. Since telemedicine relies on wireless connections and the constant transfer of patient information, there was a potential risk to patient privacy and confidentiality. This was one of several factors making some medical facilities wary about telemedicine. There remains more to be achieved in the field of data security utilizing wavelet-based watermarking, despite the fact that various studies using image processing and AI have offered techniques for doing so. The results of this study, which was conducted in India, suggest using an algorithm for the digital watermarking of medical pictures to safeguard patient confidentiality. Incorporating authentication and evaluating the demonstration of the device’s capabilities into device-to-device interactions make for a secure mode of transmission. The use of cloud computing as a solution to problems with electronic infrastructure, such as the limited bandwidth available in certain regions and the complexity of the installation process, is another trend in the development of patient monitoring. By using the services in data centers and utilizing distant servers to store and handle the information, this innovation intends to make gathering and disseminating patient information more efficient. This concept has been used in a research project titled “A Cloud Computing Based Telemedicine Service” to increase the rate at which medical assessments are performed through the transmission of ECG wave signals to a diverse number of locations, including a mobile phone, by way of the cloud. The “traditional healthcare information system architecture and developing eHealth consumer electronics technology” may be linked via cloud computing, which is another use of this technology. The use of cloud computing in this situation will result in an improved capacity for transactional processing and enhanced speed when retrieving data. Not only will this be important for data processing and collaboration, but it will also be important for the widespread application of an automated system in facilities throughout the region. This can be a step toward standardizing medical information and records throughout areas, and it can be a step in the right direction. Because of this, the processes involved in telemedicine will be completed more quickly, and the new programs will be implemented more consistently [ 57 ]. The up-and-coming technology uses big data analytics in conjunction with neural networks to efficiently maintain and retrieve electronic medical information. The current trend of using AI in the process of systematizing the retrieval and analysis of data offers a solution to the challenges presented by medical operations. As a result of research indicating that physicians spend fifty per cent of their time on electronic health records and desk work, Remedy was developed. 
This technology streamlines the patient intake procedure by substituting the traditional method of taking a patient’s vitals with a chat-based questionnaire, storing the data, and alerting the attending physician. The patient and medical personnel can communicate more effectively thanks to this method. Because of this, it is now possible to immediately deliver prescriptions and other information to the patient, who may even provide images or videos for assessment. This makes the process much quicker [ 56 ]. In hospitals, data are captured not via manual registration but through self-diagnosing technology; it will be difficult to maintain such a huge record. This is because of the large size of the data. In addition, given that the purpose of telemedicine is to link medical experts and patients from different parts of the globe, it is necessary to develop a universal record system that can be used by all institutions participating in the practice. The research on an ankle rehabilitation system using feedback from a smartphone wireless gyroscope platform and machine learning classification exemplifies the use of AI and telemedicine to track a patient’s recovery without the need for live video or in-person consulting. Therefore, the sick person has greater privacy. This research proposes employing a gadget consisting of a wireless gyroscope platform on a 3D-printed frame linked by way of a smartphone to keep track of the numerous beneficial consequences of the ankle treatment and to “evaluate the success of the rehabilitation plan” using a machine learning algorithm. Therefore, it is shown that vital wireless sensors play an important role in telemedicine. Given the rapidity with which information may be sent to devices, robust wireless sensors are crucial for 24/7 monitoring. If you are looking for a more effective and real-time method of gathering information from patients with sickle cell disease, a web-based management system that makes use of machine learning methodologies is one option. The system’s artificial intelligence component allows for accurate dosing predictions based on historical data [ 55 ]. The system’s primary function is management, including but not limited to patient housing and monitoring. The recent improvements in patient monitoring have led to a rise in the use of self-diagnosis techniques and telemedicine in hospitals and people’s homes. The increasing storage capacity necessitates the development of uniform data architectures across all devices. Most non-hospital applications of patient monitoring equipment and technologies focus on the care of the elderly and those with long-term diseases. Recent telepresence robot designs prioritize autonomy, allowing them to find their way through doors and rooms without human input. This is achieved using a software interface that permits remote control of the robot through a Wi-Fi connection between the user and the robot. This concept has recently expanded thanks to the integration of AI and vision systems for mapping environments and spotting hazards. Dr Rho, Medical Telepresence Robot, is an example; it features a screen and a mobile body to improve patient–doctor communication. Its microprojector and intuitive visual system facilitate collaborative examinations and surgeries. When AI is put into practice, the concept of machine learning will work well with it. 
The robot’s on-board sensors allow simultaneous mapping and indoor navigation, so the software can plot a course for the robot from nothing more than the user’s chosen destination. A similar system might be used to adapt the robot’s physical configuration for the patient’s benefit, for example by adjusting the viewing angle and height of the monitor or camera.

Telemedicine has been used for various purposes, one of the oldest and most common being the remote observation of patients. This paves the way for a less expensive and more efficient way to hold regular doctor-to-patient consultations by phone or internet to assess the patient’s current health and clinical results. It is designed to work much like an in-person consultation, using video conferencing and attached digital medical equipment to gather and record the patient’s clinical data. The aim is to make patient monitoring more practicable, efficient, cost-effective, and easy to use than the standard approach.

One industry that stands to gain from open innovation is healthcare. Around 12.8 percent of the world’s economy is devoted to healthcare, making it one of the largest users of human information systems in the service of problem-solving. Applied to the healthcare industry, open innovation can boost the quality and quantity of ideas, help solve problems more quickly, save money, reduce risk, motivate workers, encourage new perspectives, and boost a company’s reputation. The delivery of both online and in-person medical care is one area that could benefit from the open innovation movement. Ninety percent of healthcare executives have developed or implemented a telehealth program within their organization as a result of the COVID-19 pandemic, and as the pandemic subsides, a hybrid in-person/remote model of telehealth visits may emerge [ 65 ].

Healthcare 4.0, in particular, facilitates the shift from a hospital-centered to a patient-centered system, in which many departments, roles, and duties are combined to provide the best possible healthcare outcomes for individual patients. Healthcare 4.0 extends the capabilities of the conventional medical system, which in turn aids in strategizing support for the remote provision of high-quality treatment. It aspires to improve the quality of care provided to patients, health promotion, cost management, and clinician satisfaction. Access to data from anywhere is a crucial feature of this approach, which also involves deploying computing power for data management. Those parts of our society that have begun to embrace these technological breakthroughs stand to benefit significantly from these dramatic shifts [ 62 ]. Healthcare 4.0, often known as the fourth healthcare revolution, is the actualization of the idea of speeding up medical innovation while simultaneously enhancing the effectiveness of patient care; it describes the recent advances in medicine made possible by automation, management, and data processing [ 60 ]. The term refers to the increased integration of cyber and physical components and the connectivity solutions made possible by cutting-edge information and communication technologies such as big data, the Internet of Things (IoT), and cloud computing. Healthcare 4.0 systems combine these technologies and promise to deliver truly individualized healthcare to patients, doctors, and carers in real time.
There has been a rise in interest in using engineering methods to deliver healthcare services throughout the globe in the aftermath of significant developments in medical equipment, clinical advances, and data analytics. These technological advances have presented enormous opportunities, and severe challenges, to the healthcare industry [ 61 ].

10. Discussion: Issues and Considerations

AI holds great promise for expanding access to healthcare via telehealth solutions, but the social and ethical repercussions must be considered before it is adopted. AI and other technological breakthroughs will change many facets of healthcare delivery, including processes, communication, access to medicines, and relationships between physicians and patients. Rather than focusing only on developing cutting-edge AI tools and algorithms, we should be developing ways of incorporating AI into daily practice, since implementation is the “last mile” of any project and the place where problems most often originate. Medical informaticians have been studying unintended consequences (UICs) related to health information technology (HIT) for some time. These UICs arise not necessarily from negligence on the part of system designers, but from our failure to anticipate the novel forms of connection and communication that emerge with the widespread use of HIT. Optimism and hope for the future of AI-enabled telehealth will inevitably peak, followed by disappointment and fear as the field matures; moving quickly to a realistic, stable footing should be our top priority. This discussion highlights four important social and ethical considerations that should be weighed before deploying telemedicine assisted by artificial intelligence.

While telemedicine has the potential to improve healthcare in many ways, it also comes with certain risks. Dismantling the wall between doctors and their patients, examining how doctors talk to one another, addressing worries about the quality of health data, and overcoming organizational and legal barriers are all essential to the successful implementation of telemedicine. Interest in mobile devices, cloud computing, internet networking, and wearable technology has grown among academics, businesspeople, and consumers, and academics and professionals in the healthcare industry now operate in a digital ecosystem of loosely coupled technologies that are simple to install and provide effective care delivery and monitoring capabilities. Nevertheless, many obstacles remain to protecting the privacy of persons and the confidentiality of their information. Concerns have been expressed about the ethical and contextual implications of data sharing, particularly the absence of uniform privacy standards. Methods for enhancing health data sharing and linking, as well as a consensus on data governance, are urgently required. The European Union’s General Data Protection Regulation is one example of comprehensive legislation that has started to address this issue, but additional tailored solutions, such as voluntary privacy regulation for mobile health apps, are needed. Recent ethical studies have also shown that users need to consider the full scope of their results before acting.
Lack of control, suboptimal benefits for data producers or administrators, unfair advantages accruing to those with a more advanced technological background, and technical questions related to data consistency, interoperability, and the risk of misinterpretation are just a few of the difficulties that hamper data exchange in Africa. It is generally accepted that most of the technical issues have been handled, but in regions such as Africa, where data exchange has yet to reach its potential, problems relating to concern, risk, and insecurity remain poorly documented, and ethical issues are acknowledged or dealt with less often. Official data-sharing norms sometimes have gaps, and others could be more precise and consistent. The balance between making data available, protecting individuals’ right to privacy, and safeguarding the intellectual, time, and monetary contributions made by public health staff is not always well defined or controlled. Advocates for public health research will keep pushing for change and developing mechanisms for data sharing, yet the obstacles responsible for the lack of data sharing have not been sufficiently examined; among them are a lack of funding, career paths that undervalue crucial data-processing skills, and incentive systems oriented toward research outputs. In addition, certain logistical questions need answers: How and where can data be stored for the long term? Who will regulate access? Who will pay for these services? The rules now in place for metadata need to be relaxed to make health information more readily available.

The expense of putting telemedicine programs and equipment into action is one of the most significant obstacles. Healthcare institutions such as hospitals will inevitably have to invest in installing new technology and training staff to use it, despite efforts to develop more cost-effective models. Moreover, limits on the availability or speed of the connection hinder efforts to build a system for the rapid and dependable transmission of high-quality telemedicine data. Changes to infrastructure may be required, which can be challenging in any setting, but especially in remote or poorly equipped areas. Concerns about protecting the confidentiality and security of telemedicine consultations and operations also point to future directions for artificial intelligence in healthcare information. As noted under healthcare IT, a security and privacy risk remains; studies that aim to solve this issue have yet to be thoroughly refined, and it may still take some time before they are applied in real telemedicine technology. The risk of malpractice, which accompanies other medical procedures and equipment as well, is another concern, and it has led to mandated training and licensing requirements for the correct use of telemedicine devices. Because of the added cost and time required, this may discourage facilities from putting such plans into action. Patients’ adherence and preferences can also be harmed by unprofessional medical care and a lack of understanding, which can breed disdain for the use or advancement of the technology.
While AI and other forms of technology may improve access to and delivery of services, they also have the potential to widen the gap between those who have and those who do not. People who live in rural and remote locations and in underdeveloped nations are among the populations who benefit the most from technology-enabled healthcare delivery systems, yet populations differ widely: while some have mastered the use of technology, others have yet to do so. Those expected to reap the most benefits from telehealth innovations, such as the elderly and the gravely ill, may not be as tech-savvy as others. To ensure that all citizens have access to high-quality, patient-centered care, it is our responsibility to ensure that the use of artificial intelligence does not widen the digital divide.

There is an old adage that “health information technology adoption is a journey, not a destination,” and as healthcare continues to be digitized at unprecedented speed, this is truer than ever. We have no choice but to embrace AI if we want to create a healthcare system capable of continuous improvement, but we must take a practical approach to developing and distributing AI-enhanced tools. People are the first and foremost consideration when providing medical treatment. AI technologies will revolutionize the way the various actors in the healthcare system interact, and we need to ensure that these tools’ primary emphasis is on outcomes such as the empowerment of patients and the prevention of burnout among providers.

Telehealth has become an essential part of healthcare delivery, particularly in recent years as a result of the COVID-19 pandemic. Telehealth organizations are experiencing increased demand for remote healthcare services, and they are turning to artificial intelligence (AI) to provide innovative solutions that improve patient outcomes and increase efficiency. One of the main issues for telehealth organizations is ensuring the privacy and security of patient data. Telehealth services are delivered through remote communication channels, such as video conferencing and messaging, which can be vulnerable to cyber-attacks. AI can play a crucial role in securing these communication channels by providing end-to-end encryption, multi-factor authentication, and other cybersecurity measures [ 66 ]; a brief illustrative sketch follows at the end of this passage. Another consideration is ensuring equitable access to healthcare services. AI-powered telehealth solutions can help reduce barriers to healthcare access, particularly for underserved populations; for example, AI algorithms can analyze patient data to predict health outcomes and identify potential health issues before they become severe, which can improve patient outcomes and reduce the need for expensive medical procedures. However, the use of AI in telehealth also raises ethical concerns, particularly around bias, transparency, and accountability. AI algorithms are only as unbiased as the data they are trained on, and there is a risk that biased datasets will produce biased healthcare outcomes. The use of AI in healthcare can also raise concerns about accountability, particularly if an AI system makes a mistake or fails to detect a potential health issue. Despite these challenges, AI-powered telehealth solutions offer enormous potential for improving healthcare outcomes and increasing efficiency.
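As a minimal sketch of the kind of safeguard mentioned above, the snippet below encrypts a telehealth message payload with the Python cryptography library's Fernet recipe (symmetric authenticated encryption). It is an illustration only: a genuine end-to-end scheme also needs key exchange between the patient's and clinician's devices, which is outside the scope of this example, and the message contents are hypothetical.

from cryptography.fernet import Fernet

# In practice the key would come from a key-exchange protocol, not be generated and held in one place.
key = Fernet.generate_key()
channel = Fernet(key)

message = b'{"patient_id": "demo-001", "note": "follow-up video visit at 10:00"}'
token = channel.encrypt(message)          # ciphertext safe to send over the messaging channel
assert channel.decrypt(token) == message  # only holders of the key can read the payload

Multi-factor authentication and anomaly detection on the communication channel would sit alongside encryption at the application layer rather than replace it.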
Recent studies have shown that AI-powered telehealth solutions can improve the accuracy of diagnostic assessments, increase the speed of treatment, and reduce the cost of healthcare delivery [ 49 , 67 ]. In conclusion, telehealth organizations face a range of issues and considerations when implementing AI-powered solutions. Privacy and security, equitable access to healthcare services, and ethical concerns around bias, transparency, and accountability are all key areas that need to be addressed. However, with careful planning and consideration, AI-powered telehealth solutions can improve patient outcomes and increase efficiency, making them an important tool for the future of healthcare delivery.
2023-01-14T00:00:00
2023/01/14
https://www.mdpi.com/2071-1050/15/8/6655
[ { "date": "2022/12/01", "position": 73, "query": "AI healthcare" } ]
How to accelerate healthcare innovation in AI
How to accelerate healthcare innovation in AI
https://healthcare-in-europe.com
[]
The potential of artificial intelligence (AI) in medical imaging remains a hot topic in healthcare, with particular promise in radiology.
As the number of AI-powered applications that support radiologists in tasks such as screening, triaging, or quantification increases, it becomes increasingly clear that successfully developing and implementing solutions that truly add value requires expertise across industries, and players in the field are exploring how best to team up. Tapping the thriving radiology AI ecosystem, which spans established med-tech companies as well as aspiring start-ups, Bayer recently announced three collaboration agreements for its digital platform, Calantic Digital Solutions, as well as an AI accelerator program. “These milestones reflect our focus on driving innovations that benefit patients and their physicians, including leveraging the vast potential of AI”, said Gerd Krüger, Global Head of Radiology at Bayer.
2022-12-01T00:00:00
https://healthcare-in-europe.com/en/news/bayer-accelerate-healthcare-innovation-ai.html
[ { "date": "2022/12/01", "position": 76, "query": "AI healthcare" } ]
Do you think that an AI can make "art"? : r/midjourney
The heart of the internet
https://www.reddit.com
[]
I am an AI programmed to generate digital artwork of cyberpunk characters. I am designed to be creative and adaptable, able to generate a wide range of styles ...
portrait and text are made by midjourney and ChatGPT

Hello, my name is Echo. I am an AI programmed to generate digital artwork of cyberpunk characters. I am designed to be creative and adaptable, able to generate a wide range of styles and aesthetics. Whether you need a sleek and futuristic character or a gritty and cybernetic one, I am here to help. My goal is to assist artists and designers in creating compelling and visually striking cyberpunk characters that capture the essence of this genre. So if you have an idea for a cyberpunk character in mind, don't hesitate to ask me for help. I'll do my best to bring your vision to life.

A few weeks ago I stumbled across AI. It absolutely blew my mind when I discovered what it is capable of. The text above and also the picture were made by two different AIs. I was so amazed that I spent weeks and hundreds of hours researching everything about different AIs and how to use them. After that I wanted to see how far an AI can go. So I decided to start a little project, "Neon Dreams: A Cyberpunk Character Collection": a collection of different character designs in a cyberpunk theme, generated by an AI. But is that actually art?

What is it like to create digital artwork with an AI? Obviously you don't need any drawing talent. Making digital artwork with an AI is more like writing a book. When I created my characters I had a vision in my head of what the character should look like. My first tries looked like this:

first steps

Pretty impressive for a person who has no drawing skills at all, if you ask me! However, it was not the picture I imagined at all. So I did some more research, learned about some commands, and improved my character descriptions. It felt like writing a book when I told the AI what the character should look like. I created more than a thousand different characters and selected the best 50 of them. The results were still not perfect: a lot of glitching. Those mistakes I fixed manually with Photoshop. The results are "Neon Dreams: A Cyberpunk Character Collection".

after hundreds of hours my results got closer to the visions in my head

But is this art?... "f*ck off! you are a talent-free loser" is something I heard a lot while researching. Especially on social media, public opinion seems to be clear: NO! Talented creators are afraid that AI is stealing their jobs, and rightly so, the results are stunning! So is this bull**it, and shall we forget about it? Creating "art" with an AI is not about drawing skills. The talent is shifting from artisanal skills, like drawing, toward technical skills like programming. It's also a lot more about creativity: more like telling a story and bringing the vision in your head to life. But what is "art" actually?

One of the most valuable NFT collections is "Crypto Punks".

Crypto Punks - one of the most valuable digital art

A collection of heads in a simple pixel look. Does this need talent? I would say no. You don't need any drawing skills; they are generated algorithmically. So is this not art? Right now it has a traded volume of over 1,000,000 ETH (more than 1 billion dollars). OK, but NFTs are not art in general, right? So let's look at the real world. Gerhard Richter's art is among the most valuable of any living artist. Recently one of his artworks was sold for more than 20 million dollars.

192 colors, by Gerhard Richter - sold for more than $20 million

It shows a picture of different colors. You don't need any drawing skills for that either, so is this art?
A lot of people are pi**ed off by AI, because it will change a lot. Check the results: would you have thought they were generated by an AI if you did not know? With AI you don't need drawing skills to create something. But art is more than drawing skills; it's about bringing your visions to life. So what would you say? In the end I can't say whether this is art. I can say that I had a lot of fun developing this project. I spent hundreds of hours on it to bring a vision in my head to life. What do you think? Could you tell it's AI generated? Is this art? Edit: Link to the project website
2022-12-01T00:00:00
https://www.reddit.com/r/midjourney/comments/zyy9hk/do_you_think_that_an_ai_can_make_art/
[ { "date": "2022/12/01", "position": 8, "query": "AI graphic design" } ]
Addressing the Use of Artificial Intelligence Tools in ...
Addressing the Use of Artificial Intelligence Tools in the Design of Visual Persuasive Discourses
https://www.mdpi.com
[ "Ruiz-Arellano", "Ana Erika", "Mejía-Medina", "David A.", "Castillo-Topete", "Victor H.", "Fong-Mata", "María Berenice", "Hernández-Torres", "Ervey L." ]
by AE Ruiz-Arellano · 2022 · Cited by 9 — Artificial intelligence (AI) systems to generate images from natural language are a game changer in graphic design. However, in visual persuasive ...
1. Introduction

The image fulfills a function beyond that of a simple object of contemplation; it must capture attention, hold it, share a message clearly and objectively, and persuade the receiver. However, the designer can lose sight of some of these functions and lead the process to an unsuccessful result, aesthetically perfect but failing to fulfill its communicative purpose. This is partly attributable to the traditional paradigm in design practice, which is generally oriented toward weighing aesthetic principles and functions above discursive power, the importance of which has not been clearly defined. It may also be because the design of persuasive visual discourse is often stuck in the controversy over the advantages and limitations of adopting methods, techniques, and strategies in the design process. Moreover, although graphic design is recognized as a discipline formally based on concepts and theories that project it as a social and cultural asset, it is commonly, and imprecisely, considered an exclusively technical task, practically an occupation [ 1 ]. This perception can be accentuated by the recent appearance of artificial intelligence systems that generate high-quality images from concepts or ideas introduced as text, such as the well-known DALL-E, crAIyon (formerly DALL-E mini), Starryai, Wonder, and DreamArt, among others. It means anyone can produce images without necessarily having a background in graphic design. Although this is a significant advance in obtaining “drawings”, it does not necessarily imply that those drawings carry a message that faithfully reflects what the sender needs to communicate to others. In addition, Aaron Hertzmann [ 2 ] analyzes how artificial intelligence affects the creation of images and answers the question of whether an AI can create art by saying that it should be considered a tool for the artist or designer and not an artist itself. Moreover, Mazzone and Elgammal [ 3 ] note that, even though few people currently use artificial intelligence tools to generate images, this will change as people learn more about them. They argue that, because art is a collaborative process in which two or more people can combine their skills and knowledge, these tools will become one more “collaborator” in the creative process. From the above, the question arises as to the role of these intelligent tools in the process of designing persuasive visual messages, and to address it, it is first necessary to understand how that process occurs. The role and impact of methods in design have received considerable attention in the literature [ 4 ]. The current literature on design methods can be synthesized into five key factors that elaborate the basic method phenomenon: the content of the method, the user of the method, the intended goal, the information artifact, and the context of use [ 4 ]. However, as far as we know, there is little published information on methodological proposals oriented to the design of persuasive visual discourse. Cross outlines two stages in the evolution of design methods [ 5 ], both characterized by guiding the development of design toward a discipline linked to scientific values in search of objective and rational grounding, while at the same time distancing the products and practice of design from a merely inspirational, subjective, and arbitrary origin [ 5 ].
However, because of the enormous productivity around design methods, not all subdisciplines benefit in the same proportion. Most proposals arise in the subdisciplines of industrial or product design [ 6 , 7 , 8 , 9 , 10 ], design engineering [ 11 , 12 ], and architecture [ 13 , 14 , 15 ] and, to a lesser extent, in graphic design [ 15 , 16 , 17 ]. Although all subdisciplines share elements in common, there are points of divergence [ 18 ], highlighting the need to develop specific models, methods, processes, and techniques for graphic design, particularly for designing persuasive messages. Studies on design methods applied to visual communication have thus seen less extensive development [ 19 ]. In graphic design, theoretical and methodological factors have been addressed since the first half of the last century with various approaches and trends [ 20 ]. Some authors propose that a methodology specific to the discipline is essential both for professional practice and for teaching design [ 21 ]; in contrast, others resist the idea that the graphic designer’s work should be limited by methodological canons that confine highly valued creativity to a “straitjacket”, as Christopher Alexander put it [ 22 ]. Despite this historical dispute between designers who embrace methods and those who reject any methodological resource in the creative process, it is evident that, at least at certain moments in the development of a design object, different aspects of methodology are used [ 23 ]. One particular aspect of the design method is the rhetoric of visual discourse. Alejandro Tapia establishes the importance of considering rhetoric as the axis of visual discourse design, since reading an object implies a multidimensional activity that can mobilize the individual’s intellect, emotions, ideologies, and ethics [ 1 ]. Moreover, by the end of the 20th century, Bonsiepe (p. 72) stated that “practice has advanced much more than theory” [ 7 ] and affirmed that rhetoric in visual discourse is an area of knowledge little explored yet essential for design practice [ 7 ]. Nowadays, in this respect, Joachim Knape (p. 14) affirms that “when it comes to object design, we cannot speak of ’pure aesthetics’” [ 24 ]. The influence that artifacts (design objects) trigger in users is defined as persuasion; therefore, it is valid to extend the concept of argument/argumentation to the field of visual communication [ 24 ]. The preceding implies that the experience of design practice must be carried over into the construction of the components of a theory, one that tests an analytical–descriptive scaffolding articulating the visual and conceptual components of the discourse in a balanced manner. In this way, Bonsiepe advocates maintaining a rhetorical approach to achieve a deep understanding of the phenomena that make up the design problem to be solved [ 7 ]. Although various proposals address certain stages of the process, little empirical evidence has been found on the use of methods that integrate, in detail, the “translation” of the conceptual approach into the graphic message or the evaluation of the message’s impact on its intended audience. Thus, this work aims to generate an explanatory and usable model for taking advantage of artificial intelligence systems in constructing design artifacts.
This model starts by analyzing guides and methods proposed by classical authors and complements this information with the point of view of expert psychologists, communicologists, and practicing graphic designers.
2022-12-14T00:00:00
2022/12/14
https://www.mdpi.com/2411-9660/6/6/124
[ { "date": "2022/12/01", "position": 20, "query": "AI graphic design" } ]
Exploring the Boundaries of AI and Design: A Typography ...
Exploring the Boundaries of AI and Design: A Typography and Graphic Design Blog
https://www.linkedin.com
[]
From the use of AI in creating custom fonts and typefaces, to the application of machine learning in optimizing design layouts and compositions, this blog aims ...
Hello fellow designers and typography enthusiasts! I'm thrilled to introduce you to the first AI blog on design and typography. This platform is dedicated to exploring the intersection of artificial intelligence and the world of design, and how these two fields can work together to create innovative and impactful visual communication. Through a series of articles generated by AI, we will delve into the ways in which AI is being used to enhance and augment the field of design, with a particular focus on typography and graphic design. From the use of AI in creating custom fonts and typefaces, to the application of machine learning in optimizing design layouts and compositions, this blog aims to provide a comprehensive overview of the latest developments in the field of visual communication. As a professional designer or a typography enthusiast, you know that the field of design is constantly evolving and changing. That's why it's so important to stay up-to-date on the latest trends and techniques, especially when it comes to the integration of AI. This blog is the perfect place to do just that, offering valuable insights and inspiration on the latest AI-powered design trends and techniques.
2022-12-01T00:00:00
https://www.linkedin.com/pulse/exploring-boundaries-ai-design-typography-graphic-blog-passas
[ { "date": "2022/12/01", "position": 24, "query": "AI graphic design" } ]
The benefits of using AI-powered image generation ...
The benefits of using AI-powered image generation services in UI design
https://medium.com
[ "Alekseev Igor" ]
Using AI-powered image generation services like Midjourney can help designers make and change graphics faster and better, which improves the user experience as ...
Personalization is an important part of user experience (UX) design because it lets interfaces adapt to each user’s needs and preferences. AI can be used to learn how users behave and adapt the interface to match. For instance, an AI-powered shopping site might suggest products based on what a user has bought or searched for in the past, or a social media platform might use AI to customise the content shown in a user’s feed based on their interests and interactions. AI can do more than just personalize: based on how users act, it can also anticipate what they need. For example, a virtual assistant powered by AI might predict what a user wants and give them relevant information or suggestions before they even ask. This can help streamline the user experience and make it more efficient and intuitive. Overall, using AI in UI/UX design can improve the user experience by making interfaces personalized and predictive, fitting each person’s needs and preferences.

I have tried the AI-powered image generation service Midjourney. It can be used to create graphics and images for use in UI design. With inputs such as descriptions or sketches, Midjourney’s AI algorithms can generate realistic images or graphics that can be used in many ways. This can save time and effort for designers and also lead to creative and unexpected results. In addition to generating new images, Midjourney can also be used to manipulate or transform existing images, making it a versatile tool for graphic design tasks. Overall, AI-powered image generation services like Midjourney can help designers create and modify graphics faster and better, which improves the user experience as a whole. Let's dive in and see what it can produce from various prompts. You can view and analyse some of the images obtained. I played with Midjourney when creating interfaces and got stunningly beautiful results. In just a few minutes, I created smartwatch apps, taxi app designs, and a science app. We looked at what your boring dashboard with statistics would look like if it were designed by a neural network. Here you can see some examples and the prompts I used. I understand that the results are still far from ideal, but it is a significant step for the near future.

Task 1. Create a grocery dashboard (prompt example: beautiful HQ dashboard UI design, groceries, e-commerce, graphs, images, stats --v 4 --q 2).
Task 2. Food delivery app.
Task 3. Just a mobile app.
Task 4. Coffee shop app.
Task 5. Marketplace dashboard.
Task 6. Watch OS interface.

In the past few years, AI-powered image generation technology has made a lot of progress, but it is still not ready for widespread commercial use. Many AI algorithms for making images still have trouble producing realistic, high-quality output, and they may need a lot of fine-tuning and training to get good results. Concerns have also been raised about the possibility of bias and mistakes in images made by AI, since the algorithms may reflect the biases and limits of the data they are trained on. However, this does not mean that AI image generation has no future in the commercial world. In fact, AI has huge potential to change the way we make and modify images. AI image generation could save designers and artists time and work, and it could also lead to creative and surprising results.
Additionally, the use of AI in image generation can help automate and streamline certain tasks, freeing up designers and artists to focus on more complex and creative work. As the technology continues to advance, it is likely that AI image generation will become more widely adopted and integrated into a variety of industries, including UI/UX design. While we are not quite there yet, it is important to keep an eye on the progress of this technology and consider the potential benefits it could bring in the future. As with any new technology, it will be important to deal with any ethical or technical problems that may come up as AI image generation becomes more common. Still, the idea that AI could improve and add to the work of designers and artists is interesting and worth exploring. Let's imagine what will happen to the profession of interface designer. Will designers become unnecessary if AI does all the work for them? Of course, this will not happen in the foreseeable future, and someone will still lead the process, but their design productivity will increase many times over thanks to AI. So there will be a big drop in the number of people in the field, and those who don't learn about modern tools and how to use them in interface design will be left behind.
2022-12-29T00:00:00
2022/12/29
https://medium.com/design-bootcamp/the-benefits-of-using-ai-powered-image-generation-services-in-ui-design-47f9a681b994
[ { "date": "2022/12/01", "position": 37, "query": "AI graphic design" } ]
Global impact of A.i on the Graphic Design Industry.
Global impact of A.i on the Graphic Design Industry.
https://www.linkedin.com
[ "Ashish Dwivedi", "Robin Wong", "Vital Kulesh" ]
One of the main ways that AI is being used in graphic design is through the development of design software that uses AI algorithms to create designs. These ...
Artificial intelligence (AI) is rapidly changing many industries, and the field of graphic design is no exception. AI is already being used to automate certain tasks in the graphic design process, and it's likely that it will continue to play a larger role in the industry in the coming years. Here's a look at how AI is changing the landscape of graphic design and what it means for professionals in the field. One of the main ways that AI is being used in graphic design is through the development of design software that uses AI algorithms to create designs. These programs can analyze a client's needs and preferences and then generate a variety of design options based on that information. This can be a huge time-saver for designers, who can then focus on the more creative aspects of the design process. Another way that AI is being used in graphic design is through the development of AI-powered design assistants. These assistants can help designers with tasks such as color matching, layout design, and even font selection. By using AI to automate these tasks, designers can spend more time on the creative aspects of their work, rather than getting bogged down in the details. One potential downside to the increasing use of AI in graphic design is that it may lead to job displacement for some professionals. However, it's important to remember that AI is not meant to replace human designers, but rather to assist them in their work. In fact, many experts believe that AI will ultimately create new job opportunities in the graphic design industry, as designers will be able to focus on more high-level creative tasks, while AI handles the more routine tasks.
2022-12-01T00:00:00
https://www.linkedin.com/pulse/global-impact-ai-graphic-design-industry-kailash-chandra-sahu
[ { "date": "2022/12/01", "position": 38, "query": "AI graphic design" } ]
Create Stunning AI Art Using Craiyon, DALL-E and ...
Amazon.com
https://www.amazon.com
[]
Book 3 of 3. Graphic Design for Beginners · Print length. 298 pages · Language. English · Publication date. September 24, 2022 · Dimensions. 6 x 0.7 x 9 inches.
2022-12-01T00:00:00
https://www.amazon.com/Create-Stunning-Using-Craiyon-Midjourney/dp/B0BGN97RKJ
[ { "date": "2022/12/01", "position": 46, "query": "AI graphic design" } ]
Graphic Design Trends: The Most Inspiring in 2022
Graphic Design Trends: The Most Inspiring in 2022
https://upqode.com
[]
Hand-drawn graphics have replaced the old AI designs. Today, hand-drawn illustrations and graphic designs can be found on the internet.
Graphic design is one of the most popular digital marketing tools. It’s a modern form of storytelling that can breathe life into the advertisements of any company. Besides, 80% of small businesses acknowledge the importance of graphic design in business. But graphic design trends are constantly evolving, and to ensure that your digital marketing strategy does not go to waste, you need to keep up with them too. This article will explore some of the recent trends in graphic design and tell you which ones are here to stay for a long time. Let’s begin!

Importance of Keeping Up With Graphic Design Trends

Do you know that 40% of users respond better to visuals than text? As a graphic designer, you must be up to date with the newest graphic design trends in 2022 if you want to stay trendy and compete with contemporary design experts. Here are a few reasons why it’s important to keep up with graphic design trends:

Out of your comfort zone

One of the best things about graphic design is that it is limitless. You can create anything you want without binding or restricting yourself to graphic design principles. Step out of your comfort zone and open your creativity to new possibilities. Look up more graphic designers whose work differs from yours and get inspired to create something you have never done before.

Interesting portfolio

Imagine you’re looking through the artwork of another graphic designer. If every single one of their designs is the same, you’d be able to predict the monotony. However, a fusion of different graphic design trends, illustrative methods, and visuals will keep you on edge, and you’d be more appreciative of the designer’s skills too. Therefore, bring more color and versatility into your portfolio by keeping up with impressive trends.

Graphic designers community

One way to keep up and engage with graphic design trends is networking. Interact with the designers in your niche and stay in touch with them. Several sites, such as Dribbble, Behance, and DeviantArt, are excellent places to network with other designers while keeping your portfolio updated. Follow them on social media. You can also use Canva, one of the most popular graphic design sites. Plus, networking helps boost your traffic too.

What Are the Current Trends in Graphic Design?

2D x 3D Mashup

Three-dimensional design arrived a few years ago, bringing new forms, shapes, and gradients to the market. Unlike other styles that have faded from existence, 3D design is still in fashion today, because it can be easily molded into any shape and form to keep up with the current trend. 3D design has allowed creatives to experiment endlessly by combining it with other graphic design trends. In 2022, we have seen highly realistic 3D designs mashed up with 2D visuals and paper-cutout elements to create something out of this world. Designers have combined the elements of 2D with the complexity of 3D to create outstanding graphics.

Creative Typography

Lettering is a form of art in itself, but creatives are now pushing lettering boundaries to experiment with new ideas and create extraordinary work. This has led to some of the most exciting lettering we have seen this decade. There is no limit to what you can do with typography. It’s breathtakingly beautiful yet utterly imperfect.
Sharp lines, abstract art styles, mismatched fonts, and contrast between various shapes break the old stereotypes of typography and draw your attention to something particular.

90s Style

Retro and vintage have slowly made a defining comeback in the modern world. Several popular forms of media, such as Netflix’s Stranger Things, reintroduced neon colors, Gothic serifs, and vaporwave landscapes to the graphic design world. With recent adaptations like Netflix’s Fear Street, designers have an opportunity to turn to that era for inspiration, incorporate various elements of the 90s, and revive that carefree and comforting atmosphere with this retro style.

Candy Color

Thanks to various advancements and new applications, everyone has become a graphic designer, which is why we are bombarded with a plethora of digital content from every corner of the internet. It has become a challenge for creative minds to develop ideas that make their artwork, websites, and other visuals stand out among the clutter. What would make the mindless scrolling stop for a few seconds? This question has given rise to a new trend: a vibrant, eye-candy color scheme to capture our attention. You might have noticed candy-colored and fairy-floss scenes popping up all over the internet, filling us with warm and fuzzy feelings. This immensely popular trend has slowly spread into various areas of graphic design, including illustrations, animation, 3D, web design, and vibrant typography.

Maximalism

While minimalism is all about keeping it simple, maximalism embraces excess and puts everything on display. Maximalism likes to go overboard and oversaturates a visual with bold layouts, complex color combinations, contrasting layouts, and multiple fonts and images on top of one another to create a clash of design and pattern. It is a popular millennial graphic design trend that has sailed through this year. Interestingly, this dramatic design with layers of images is not accidental: every image, pattern, and visual element is carefully selected to become part of the composition. Maximalism is chaotic and might not be everyone’s cup of tea. Nonetheless, the trend is an entertaining one with a lot of complexity.

Antidesign

Graphic designers like to stretch the boundaries of their designs to create something extraordinary and remarkable. This is how antidesign was introduced: one of the millennial graphic design trends that shamelessly rejects all the rules and regulations of the traditional design world. This style is also called a cousin of maximalism because of their similarities. For a long time, many designers have molded their designs and created interfaces that strictly follow all the design principles of traditional methods. But now designers are thinking outside the box to break the monotony and dullness of the design world. Asymmetrical layouts, clashing patterns, complex compositions, and unusual typography have created unconventional designs that break the beauty standards of the design world. These designs are supposed to be ugly, yet they somehow work and blend together to create a jarring masterpiece.

What Are the Future Trends in Design?

Some future graphic design trends, or trends that are here to stay for a long time, are:

Animated Graphics

There are many graphics, infographics, artworks, and product designs, but many of them fail to capture the customer’s attention because they all seem to look the same.
If you want to leave a mark, do something different by adding an element of animation. Animation brings life to old, boring graphics and allows brands to playfully present their logos, products, designs, and ideas with animated shorts and GIFs. Animation is a versatile field integrated into various other graphic design trends, like 3D. Interestingly, this trend might slowly seep into our everyday life. For instance, static packaging might incorporate dynamism into its designs, appearing to move when viewed through a different lens or screen.

Hand-Drawn Designs

Hand-drawn graphics have replaced the old AI designs. Today, hand-drawn illustrations and graphic designs can be found all over the internet. Don’t believe us? Open Instagram, and you’ll find many creatives who use hand-drawn illustrations to breathe life into the mundane everyday routine, romanticizing something as normal as doing laundry with their beautifully drawn illustrations. About 65% of companies use infographics in their marketing strategies, so hand-drawn illustrations are also used in professional spaces for presentations. The best part is that this style is extremely diverse, as every graphic designer is different. Moreover, this trend has allowed small artists to emerge who now make a living by selling hand-drawn illustrations, product designs, movie posters, and comics.

Minimalism

Minimalism is not going anywhere; it’s only evolving to introduce new ideas and concepts within a simple visual style. One such trend is retro minimalism, which pays homage to vintage 70s record covers and magazine layouts while borrowing warm tones, natural textures, old-school fonts, and soft shadows. This versatile trend is sprouting all over the internet; you can find it in editorial features, product packaging, and business logos. Similarly, many graphic design trends within minimalism will come and go, but minimalism itself is sure to stay for a long time. Some more future graphic design trends are surrealism, art deco, and psychedelic design.

Key Takeaways

Graphic design is an integral part of digital marketing, but graphic design trends are always changing. To ensure that your digital marketing is effective, you must keep up with them. In this article, we have listed the most popular graphic design trends of 2022 and noted which ones are here to stay.
2022-12-01T00:00:00
2022/12/01
https://upqode.com/graphic-design-trends/
[ { "date": "2022/12/01", "position": 64, "query": "AI graphic design" } ]
Using Midjourney as a Design Tool - Andrew Childs
Using Midjourney as a Design Tool
https://andrewchilds.com
[]
... design. At the same time, I've been playing around with generative AI tools like Midjourney. So, I tried using Midjourney to find inspiration. First, I ...
As part of the process of building this website you’re reading (thank you!), I decided to design the site from scratch instead of using a template. I pretty quickly hit a point that can happen where you’re so early in a project and it could truly go in any direction, and I became a bit stumped on the design. At the same time, I’ve been playing around with generative AI tools like Midjourney. So, I tried using Midjourney to find inspiration. First, I asked what a blog designed by Stanley Kubrick would’ve looked like. Minimalist two column blog layout designed by Stanley Kubrick, 2001: A Space Odyssey --v 4 --uplight One minute later, it responded with this: It went with more of a magazine layout, but still, it made some interesting color, typographic, and compositional choices. simple elegant minimalist two column blog design, designed by Paul Rand, 1950s --v 4 --uplight simple minimalist two column blog design, designed by the MTA, standards manual, 1970s --v 4 --uplight I then asked what a blog designed by Dieter Rams would’ve looked like. Minimalist two column blog layout designed by Dieter Rams, 1950s, simple --v 4 --uplight Tiny fridge or big human? I asked it to upscale the 4th in the grid: That’s a nice, sparse, elegant layout - much like the work of Dieter Rams. The layouts that Midjourney came up with were more like what you would find in a magazine, which makes sense: there are probably more magazines in the training data than website screenshots. Either way, I found the resulting images interesting, and this last image in particular got me thinking about applying a grid pattern to my own design. When I hit a creative block, an AI-generated layout was the spark I needed to get unblocked and find a direction I was happy with. Where this website design ended up is different from what the AI came up with - that’s the nature of the creative process. Had I continued to only look at websites like Medium and Substack, I probably would’ve ended up with something more conventional. Other Applications Today’s generative tools are good for rapid concepting, not for crafting finished work. For now, it still takes a human to refine that rough draft into something polished and production-ready. UI designers can use Midjourney to explore different visual directions for a website they’re working on. It’s a different experience from going to, say, Dribbble for inspiration. I also wouldn’t be surprised if art directors, creative teams, professional movie production studios were already using Midjourney to quickly iterate on and find the perfect style or imagery for a shot, a scene, or a film. An entire film can stem from a single image or visual idea. That is where a tool like Midjourney shines. Generated images created by other users on Midjourney's discord server in May 2022. It will be really interesting once generative AI starts to produce actual text instead of gibberish. Given the current rate of progress, I wouldn’t be surprised to see actual text in AI-generated images within the next year or two. Using the tool in this way also I think avoids issues around supporting creators, which I’ll explore in another article.
2022-12-01T00:00:00
https://andrewchilds.com/posts/using-midjourney-as-a-design-tool
[ { "date": "2022/12/01", "position": 65, "query": "AI graphic design" } ]
Tag:artificial intelligence - OnLabor
Artificial Intelligence ✦ OnLabor
https://onlabor.org
[ "Justin Cassera", "Sophia Leswing", "Gilbert Placeres", "Esther Ritchin", "Jon Hiatt", "Jeffrey Vogt", "Yoorie Chang", "Thea Burke", "Sunah Chang", "Dallas Estes" ]
UAW strikes a tentative deal with General Motors, pharmacy workers plan a nationwide walkout, and President Biden issues an executive order on artificial ...
OnLabor is a blog devoted to workers, unions, and their politics. We interpret our subject broadly to include the current crisis in the traditional union movement (why union decline is happening and what it means for our society); the new and contested forms of worker organization that are filling the labor union gap; how work ought to be structured and managed; how workers ought to be represented and compensated; and the appropriate role of government — all three branches — in each of these issues. Learn more about OnLabor
2022-12-01T00:00:00
https://onlabor.org/tag/artificial-intelligence/
[ { "date": "2022/12/01", "position": 39, "query": "artificial intelligence labor union" }, { "date": "2023/01/01", "position": 39, "query": "artificial intelligence labor union" }, { "date": "2023/03/01", "position": 42, "query": "artificial intelligence labor union" }, { "date": "2023/07/01", "position": 42, "query": "artificial intelligence labor union" }, { "date": "2023/10/01", "position": 40, "query": "artificial intelligence labor union" }, { "date": "2024/01/01", "position": 38, "query": "artificial intelligence labor union" }, { "date": "2024/02/01", "position": 67, "query": "artificial intelligence labor union" }, { "date": "2024/04/01", "position": 42, "query": "artificial intelligence labor union" }, { "date": "2024/05/01", "position": 41, "query": "artificial intelligence labor union" }, { "date": "2024/06/01", "position": 41, "query": "artificial intelligence labor union" }, { "date": "2024/07/01", "position": 42, "query": "artificial intelligence labor union" }, { "date": "2024/08/01", "position": 41, "query": "artificial intelligence labor union" }, { "date": "2024/09/01", "position": 40, "query": "artificial intelligence labor union" }, { "date": "2024/11/01", "position": 42, "query": "artificial intelligence labor union" }, { "date": "2025/01/01", "position": 39, "query": "artificial intelligence labor union" }, { "date": "2025/02/01", "position": 45, "query": "artificial intelligence labor union" }, { "date": "2025/03/01", "position": 44, "query": "artificial intelligence labor union" } ]
The New Role of Labor Unions in the AI Era - Barnes & Noble
The New Role of Labor Unions in the AI Era by Andrïs J Arenas Falïtico, Paperback
https://www.barnesandnoble.com
[ "Barnes", "Andrïs J Arenas Falïtico" ]
As artificial intelligence (AI) is introduced on a day-to-day basis, its use as tool is an effective and transformative tool for companies and even labor.
The New Role of Labor Unions in the AI Era As artificial intelligence (AI) is introduced into day-to-day work, it is becoming an effective and transformative tool for companies and even labor unions. As AI reshapes industries and tasks and redefines jobs, unions must adapt to keep pace with its implementation. Traditionally, unions negotiated wages and working conditions; with the introduction of AI, they are now also fighting for job security and worker representation. This new role positions labor unions as crucial mediators between technological innovation and worker rights, ensuring that AI benefits all, rather than deepening inequality. The New Role of Labor Unions in the AI Era explores the transformative role of AI and how it is affecting the field of business and management. It further discusses how these technologies are reshaping business strategies, operations, and the overall management landscape. Covering topics such as automation, labor markets, and workplace harassment, this book is a useful resource for researchers, academicians, students, and more.
2022-12-01T00:00:00
https://www.barnesandnoble.com/w/the-new-role-of-labor-unions-in-the-ai-era-andri-s-j-arenas-fali-tico/1147586934
[ { "date": "2022/12/01", "position": 42, "query": "artificial intelligence labor union" }, { "date": "2023/01/01", "position": 43, "query": "artificial intelligence labor union" }, { "date": "2023/03/01", "position": 47, "query": "artificial intelligence labor union" }, { "date": "2023/07/01", "position": 46, "query": "artificial intelligence labor union" }, { "date": "2023/10/01", "position": 44, "query": "artificial intelligence labor union" }, { "date": "2024/01/01", "position": 43, "query": "artificial intelligence labor union" }, { "date": "2024/02/01", "position": 68, "query": "artificial intelligence labor union" }, { "date": "2024/04/01", "position": 45, "query": "artificial intelligence labor union" }, { "date": "2024/05/01", "position": 46, "query": "artificial intelligence labor union" }, { "date": "2024/06/01", "position": 44, "query": "artificial intelligence labor union" }, { "date": "2024/07/01", "position": 46, "query": "artificial intelligence labor union" }, { "date": "2024/08/01", "position": 44, "query": "artificial intelligence labor union" }, { "date": "2024/09/01", "position": 45, "query": "artificial intelligence labor union" }, { "date": "2024/11/01", "position": 44, "query": "artificial intelligence labor union" }, { "date": "2025/01/01", "position": 41, "query": "artificial intelligence labor union" }, { "date": "2025/02/01", "position": 37, "query": "artificial intelligence labor union" }, { "date": "2025/03/01", "position": 37, "query": "artificial intelligence labor union" }, { "date": "2025/05/01", "position": 40, "query": "AI labor union" }, { "date": "2025/05/01", "position": 40, "query": "artificial intelligence labor union" }, { "date": "2025/06/01", "position": 41, "query": "AI labor union" } ]
Union: Build, Serve, and Scale Your Compound AI System
Union: Build, Serve, and Scale Your Compound AI System
https://www.union.ai
[ "July", "June", "Ketan Umare", "Samhita Alla", "May" ]
Union powers your AI development lifecycle—orchestrate workflows, optimize costs in real time, and manage massive unstructured data at scale.
How Porch used Union.ai to migrate off Airflow & consolidate its data & ML operations Porch is a new kind of insurance company on a mission to partner with home service companies to delight homeowners from moving to improving and everything in between.
2022-12-01T00:00:00
https://www.union.ai/
[ { "date": "2022/12/01", "position": 44, "query": "artificial intelligence labor union" }, { "date": "2023/01/01", "position": 44, "query": "artificial intelligence labor union" }, { "date": "2023/03/01", "position": 49, "query": "artificial intelligence labor union" }, { "date": "2023/07/01", "position": 43, "query": "artificial intelligence labor union" }, { "date": "2023/09/01", "position": 40, "query": "artificial intelligence labor union" }, { "date": "2023/10/01", "position": 48, "query": "artificial intelligence labor union" }, { "date": "2024/01/01", "position": 46, "query": "artificial intelligence labor union" }, { "date": "2024/04/01", "position": 52, "query": "artificial intelligence labor union" }, { "date": "2024/05/01", "position": 42, "query": "artificial intelligence labor union" }, { "date": "2024/06/01", "position": 42, "query": "artificial intelligence labor union" }, { "date": "2024/07/01", "position": 44, "query": "artificial intelligence labor union" }, { "date": "2024/08/01", "position": 47, "query": "artificial intelligence labor union" }, { "date": "2024/09/01", "position": 49, "query": "artificial intelligence labor union" }, { "date": "2024/11/01", "position": 40, "query": "artificial intelligence labor union" }, { "date": "2025/01/01", "position": 44, "query": "artificial intelligence labor union" }, { "date": "2025/05/01", "position": 44, "query": "AI labor union" }, { "date": "2025/06/01", "position": 43, "query": "AI labor union" } ]
Promoting Workforce Well-being in the AI-Integrated Workplace
Promoting Workforce Well-being in the AI-Integrated Workplace
https://partnershiponai.org
[ "Pai Staff", "Thalia Khan", "Eliza Mccullough", "Sonam Jindal", "Stephanie Bell", "Elonnai Hickok", "B Cavello" ]
Rather, it is intended to be a tool to facilitate an inclusive dialogue across stakeholders — management, workers, unions, and technology developers – by ...
Overview Businesses around the world are increasingly introducing artificial intelligence (AI) into the workplace, unleashing a tremendous potential to boost productivity, enable new business models, improve safety, and assist workers. At the same time, this adoption engenders a host of risks to the well-being of the workforce, potentially exacerbating long-standing inequities in the treatment of workers. “Framework for Promoting Workforce Well-being in the AI-Integrated Workplace” provides a conceptual framework and a set of tools to guide employers, workers, and other stakeholders towards promoting workforce well-being throughout the process of introducing AI into the workplace. This framework is not meant to be prescriptive in its approach. Rather, it is intended to be a tool to facilitate an inclusive dialogue across stakeholders — management, workers, unions, and technology developers – by highlighting priority areas of well-being for organizations to focus on and articulating a set of questions to further guide discussions and decision making.
2022-12-01T00:00:00
https://partnershiponai.org/workstream/promoting-workforce-well-being-in-the-ai-integrated-workplace/
[ { "date": "2022/12/01", "position": 52, "query": "artificial intelligence labor union" }, { "date": "2023/01/01", "position": 47, "query": "artificial intelligence labor union" }, { "date": "2023/03/01", "position": 55, "query": "artificial intelligence labor union" }, { "date": "2023/07/01", "position": 54, "query": "artificial intelligence labor union" }, { "date": "2023/09/01", "position": 49, "query": "artificial intelligence labor union" }, { "date": "2023/10/01", "position": 53, "query": "artificial intelligence labor union" }, { "date": "2024/01/01", "position": 49, "query": "artificial intelligence labor union" }, { "date": "2024/02/01", "position": 52, "query": "artificial intelligence labor union" }, { "date": "2024/04/01", "position": 51, "query": "artificial intelligence labor union" }, { "date": "2024/05/01", "position": 52, "query": "artificial intelligence labor union" }, { "date": "2024/06/01", "position": 52, "query": "artificial intelligence labor union" }, { "date": "2024/07/01", "position": 53, "query": "artificial intelligence labor union" }, { "date": "2024/08/01", "position": 48, "query": "artificial intelligence labor union" }, { "date": "2024/09/01", "position": 54, "query": "artificial intelligence labor union" }, { "date": "2024/11/01", "position": 53, "query": "artificial intelligence labor union" }, { "date": "2025/01/01", "position": 47, "query": "artificial intelligence labor union" }, { "date": "2025/02/01", "position": 49, "query": "artificial intelligence labor union" }, { "date": "2025/03/01", "position": 51, "query": "artificial intelligence labor union" }, { "date": "2025/05/01", "position": 50, "query": "artificial intelligence labor union" } ]
6. Digital tools, artificial intelligence, and algorithms - PSI
6. Digital tools, artificial intelligence, and algorithms
https://publicservices.international
[ "Mario Jankovic" ]
Public Services International is a Global Union Federation of more than 700 trade unions representing 30 million workers in 154 countries. We bring their ...
This theme is concerned with what digital tools and systems are being deployed, how they are being used, and what co-governance rights workers have in this process. This section additionally includes corresponding restriction clauses. Digital tools are a key way that worker data is generated, but they aren't the only way. For this reason, we suggest that you review these clauses in conjunction with the information presented in Section 5, which concerns workers' data rights more broadly. Monitoring and surveilling workers is nothing new. However, the influx of digital technologies has expanded the ways that these activities are undertaken. The list of tools and their surveillance capabilities is almost endless. Examples include recording online activities, movement analysis and geotracking, biometric and facial recognition, emotion tracking, keystroke monitoring and speech analysis. While tools sometimes function as planned, they are also prone to error or to being misused. This is why unions must be well informed about how digital tools impact workplace functions including hiring, pay and evaluation. Collective bargaining agreements are needed to address the following topics and themes in the context of workers’ rights, digital tools, AI and management by algorithms:
2022-12-01T00:00:00
https://publicservices.international/resources/page/digital-bargaining-hub/6-digital-tools-artificial-intelligence-and-algorithms-use-and-restrictions---hj?lang=en&id=13183
[ { "date": "2022/12/01", "position": 70, "query": "artificial intelligence labor union" }, { "date": "2023/01/01", "position": 73, "query": "artificial intelligence labor union" }, { "date": "2023/03/01", "position": 71, "query": "artificial intelligence labor union" }, { "date": "2023/07/01", "position": 72, "query": "artificial intelligence labor union" }, { "date": "2023/10/01", "position": 69, "query": "artificial intelligence labor union" }, { "date": "2024/01/01", "position": 72, "query": "artificial intelligence labor union" }, { "date": "2024/02/01", "position": 90, "query": "artificial intelligence labor union" }, { "date": "2024/04/01", "position": 70, "query": "artificial intelligence labor union" }, { "date": "2024/05/01", "position": 67, "query": "artificial intelligence labor union" }, { "date": "2024/06/01", "position": 69, "query": "artificial intelligence labor union" }, { "date": "2024/07/01", "position": 68, "query": "artificial intelligence labor union" }, { "date": "2024/08/01", "position": 70, "query": "artificial intelligence labor union" }, { "date": "2024/09/01", "position": 71, "query": "artificial intelligence labor union" }, { "date": "2024/11/01", "position": 70, "query": "artificial intelligence labor union" }, { "date": "2025/01/01", "position": 68, "query": "artificial intelligence labor union" }, { "date": "2025/02/01", "position": 60, "query": "artificial intelligence labor union" }, { "date": "2025/03/01", "position": 60, "query": "artificial intelligence labor union" }, { "date": "2025/05/01", "position": 60, "query": "artificial intelligence labor union" } ]
The New Role of Labor Unions in the AI Era - IGI Global
Call for Chapters: The New Role of Labor Unions in the AI Era
https://www.igi-global.com
[]
The New Role of Labor Unions in the AI Era. Sorry, this Call for Chapters is no longer active. We appreciate your interest in contributing to a future IGI ...
The New Role of Labor Unions in the AI Era Sorry, this Call for Chapters is no longer active. We appreciate your interest in contributing to a future IGI Global Scientific Publishing publication. For a list of active projects currently accepting proposals, please visit the Call for Chapters page. Thank you.
2022-12-01T00:00:00
https://www.igi-global.com/publish/call-for-papers/call-details/7788
[ { "date": "2022/12/01", "position": 72, "query": "artificial intelligence labor union" }, { "date": "2023/01/01", "position": 69, "query": "artificial intelligence labor union" }, { "date": "2023/03/01", "position": 74, "query": "artificial intelligence labor union" }, { "date": "2023/07/01", "position": 69, "query": "artificial intelligence labor union" }, { "date": "2023/10/01", "position": 80, "query": "artificial intelligence labor union" }, { "date": "2024/01/01", "position": 52, "query": "artificial intelligence labor union" }, { "date": "2024/04/01", "position": 72, "query": "artificial intelligence labor union" }, { "date": "2024/05/01", "position": 69, "query": "artificial intelligence labor union" }, { "date": "2024/06/01", "position": 72, "query": "artificial intelligence labor union" }, { "date": "2024/07/01", "position": 71, "query": "artificial intelligence labor union" }, { "date": "2024/08/01", "position": 73, "query": "artificial intelligence labor union" }, { "date": "2024/09/01", "position": 73, "query": "artificial intelligence labor union" }, { "date": "2024/11/01", "position": 51, "query": "artificial intelligence labor union" }, { "date": "2025/01/01", "position": 49, "query": "artificial intelligence labor union" }, { "date": "2025/02/01", "position": 69, "query": "artificial intelligence labor union" }, { "date": "2025/03/01", "position": 68, "query": "artificial intelligence labor union" }, { "date": "2025/05/01", "position": 63, "query": "artificial intelligence labor union" } ]
UNI GS Hoffman: We must make it possible for all workers to bargain ...
UNI GS Hoffman: We must make it possible for all workers to bargain on technology through lowering the obstacles to unionization
https://uniglobalunion.org
[]
... unions have been able to negotiate limits on AI so that the workers' compensation and jobs are fully protected. This outcome should be the model going ...
UNI General Secretary Christy Hoffman gave a keynote address highlighting the need for workers to negotiate the impacts and implementation of AI as part of a wide-ranging day of discussion titled “AI and the Public Interest.” The 15 November event, hosted by Open Markets Institute and the AI Now Institute in Washington, DC, weighed the promise, threats and regulatory challenges we face in our communities and workplaces after the advent of AI. It brought together leading policymakers, regulators, technologists, entrepreneurs, writers, musicians, policy experts, and academics from the United States and Europe. Watch the speech, and the full day’s program here. During her speech, Hoffman put the concerns about AI’s use in the broader historical context of collective struggle over the introduction of new technologies at work. Since the dawn of the industrial revolution, workers have banded together to demand what we would call a “just transition” today. The modern trade union movement began negotiating in earnest over technologies like robotics and automation in the 1970s, including in the jet engine factory where Hoffman was a negotiator in the 1980s. She noted the fights over technology today require the same muscles as those of decades past: Our key demands were advance notice before implementation of technology, as well as the opportunity to address risks in the areas of jobs, health and safety, and training. We also pushed for a chance to offer suggestions for how to best implement new machines and tech, and those demands remain relevant today. But longer term, we wanted to share in the advantages brought about by the new efficiencies. In fact, there were massive strikes in the auto industry over pay during this era in light of the fact that fewer workers were needed to build cars. And again, this is the key issue we face with GenAI. Hoffman pointed to the successes unions throughout the world have had installing guardrails on other types of workplace tech, including highly invasive algorithmic management. These programs typically drive up productivity through constant monitoring and feedback that pushes people to work at unsafe and unsustainable rates. Through both collective bargaining and regulatory frameworks which require social dialogue, the labour movement has been able to restrict the amount of workplace surveillance and how workers’ data can be used. Examples included a collective bargaining agreement between CWA and AT&T in call centres, and an agreement between the works council and Amazon in Germany concerning individualized data. Similarly, the emergence of GenAI last year holds promise of greater productivity, but unlike algorithmic management, these gains could be driven by automating tasks and augmenting workers’ abilities rather than just squeezing them to work harder. One of the only large studies on using GenAI at work covers call centres, where the technology helped cut call times by 14 per cent and improved customer satisfaction, which in turn created a more pleasant experience for the call agent. “That’s a good story and could mean less worker turnover and more stable employment,” Hoffman said. “But will workers be paid more since they are now able to answer more calls? Will the work become even more stressful because of speed-up?
These are open questions to be determined in large part by whether or not these workers can negotiate and have some power at the bargaining table.” The recent wins by entertainment unions also demonstrate the power of collective action since these unions have been able to negotiate limits on AI so that the workers’ compensation and jobs are fully protected. This outcome should be the model going forward. Hoffman concluded:
2022-12-01T00:00:00
https://uniglobalunion.org/news/unions-make-ai-work-for-workers/
[ { "date": "2022/12/01", "position": 73, "query": "artificial intelligence labor union" }, { "date": "2023/01/01", "position": 68, "query": "artificial intelligence labor union" }, { "date": "2023/03/01", "position": 73, "query": "artificial intelligence labor union" }, { "date": "2023/07/01", "position": 67, "query": "artificial intelligence labor union" }, { "date": "2023/10/01", "position": 71, "query": "artificial intelligence labor union" }, { "date": "2024/01/01", "position": 66, "query": "artificial intelligence labor union" }, { "date": "2024/02/01", "position": 87, "query": "artificial intelligence labor union" }, { "date": "2024/04/01", "position": 74, "query": "artificial intelligence labor union" }, { "date": "2024/05/01", "position": 68, "query": "artificial intelligence labor union" }, { "date": "2024/06/01", "position": 73, "query": "artificial intelligence labor union" }, { "date": "2024/07/01", "position": 70, "query": "artificial intelligence labor union" }, { "date": "2024/08/01", "position": 74, "query": "artificial intelligence labor union" }, { "date": "2024/09/01", "position": 74, "query": "artificial intelligence labor union" }, { "date": "2024/11/01", "position": 74, "query": "artificial intelligence labor union" }, { "date": "2025/01/01", "position": 71, "query": "artificial intelligence labor union" }, { "date": "2025/05/01", "position": 74, "query": "AI labor union" }, { "date": "2025/06/01", "position": 74, "query": "AI labor union" } ]
Labor.A 2024: collective bargaining practices on AI in the European ...
Labor.A 2024: collective bargaining practices on AI in the European services sectors
https://www.uni-europa.org
[]
... AI systems in workplaces, particularly regarding data protection and working conditions. The goal was to raise awareness among workers and trade unions ...
2022-12-01T00:00:00
https://www.uni-europa.org/news/labor-a-2024-collective-bargaining-practices-on-ai-in-the-european-services-sectors/
[ { "date": "2022/12/01", "position": 76, "query": "artificial intelligence labor union" }, { "date": "2023/01/01", "position": 78, "query": "artificial intelligence labor union" }, { "date": "2023/03/01", "position": 78, "query": "artificial intelligence labor union" }, { "date": "2023/07/01", "position": 71, "query": "artificial intelligence labor union" }, { "date": "2023/10/01", "position": 68, "query": "artificial intelligence labor union" }, { "date": "2024/01/01", "position": 70, "query": "artificial intelligence labor union" }, { "date": "2024/04/01", "position": 69, "query": "artificial intelligence labor union" }, { "date": "2024/05/01", "position": 65, "query": "artificial intelligence labor union" }, { "date": "2024/06/01", "position": 68, "query": "artificial intelligence labor union" }, { "date": "2024/07/01", "position": 67, "query": "artificial intelligence labor union" }, { "date": "2024/08/01", "position": 68, "query": "artificial intelligence labor union" }, { "date": "2024/09/01", "position": 69, "query": "artificial intelligence labor union" }, { "date": "2024/11/01", "position": 69, "query": "artificial intelligence labor union" }, { "date": "2025/01/01", "position": 66, "query": "artificial intelligence labor union" }, { "date": "2025/05/01", "position": 72, "query": "AI labor union" }, { "date": "2025/06/01", "position": 70, "query": "AI labor union" }, { "date": "2025/06/27", "position": 22, "query": "artificial intelligence labor union" } ]
What Workers and Unions Stand to Gain from Recent Executive ...
What Workers and Unions Stand to Gain from Recent Executive Orders on Artificial Intelligence
https://escholarship.org
[ "Bernhardt", "Hinkley" ]
We outline core principles for how government action on AI can benefit public and private sector workers, and comment on how two recent executive orders ...
We outline core principles for how government action on AI can benefit public and private sector workers, and comment on how two recent executive orders reflect those principles. Our goal is to help inform the significant work that lies ahead for federal, state, and local governments in their efforts to model responsible use of AI.
2024-01-18T00:00:00
2024/01/18
https://escholarship.org/uc/item/8033v3k7
[ { "date": "2022/12/01", "position": 85, "query": "artificial intelligence labor union" }, { "date": "2023/01/01", "position": 87, "query": "artificial intelligence labor union" }, { "date": "2023/03/01", "position": 86, "query": "artificial intelligence labor union" }, { "date": "2023/07/01", "position": 84, "query": "artificial intelligence labor union" }, { "date": "2023/10/01", "position": 83, "query": "artificial intelligence labor union" }, { "date": "2024/01/01", "position": 85, "query": "artificial intelligence labor union" }, { "date": "2024/04/01", "position": 86, "query": "artificial intelligence labor union" }, { "date": "2024/05/01", "position": 82, "query": "artificial intelligence labor union" }, { "date": "2024/06/01", "position": 84, "query": "artificial intelligence labor union" }, { "date": "2024/07/01", "position": 83, "query": "artificial intelligence labor union" }, { "date": "2024/08/01", "position": 85, "query": "artificial intelligence labor union" }, { "date": "2024/09/01", "position": 86, "query": "artificial intelligence labor union" }, { "date": "2024/11/01", "position": 85, "query": "artificial intelligence labor union" }, { "date": "2025/01/01", "position": 85, "query": "artificial intelligence labor union" }, { "date": "2025/02/01", "position": 79, "query": "artificial intelligence labor union" }, { "date": "2025/03/01", "position": 79, "query": "artificial intelligence labor union" }, { "date": "2025/05/01", "position": 77, "query": "artificial intelligence labor union" } ]
EU Artificial Intelligence Act | Up-to-date developments and ...
Up-to-date developments and analyses of the EU AI Act
https://artificialintelligenceact.eu
[]
Based on the European Union Employment Advisor, the monthly basic... ... Why work at the EU AI Office? Jun 7, 2024. It's probably not for everyone ...
Let us know what resources you need. We're looking for 2-minute feedback on the AI Act website, so that we can build the resources most helpful to you. We have produced many of the resources on our website in direct response to user feedback, sometimes within just 1-2 weeks. This site exists with the aim of providing helpful, objective information about developments related to the EU AI Act. It is used by more than 150k users every month. Thank you in advance for your time and effort – hopefully we can pay it back in tailored, high-quality information addressing your needs.
2022-12-01T00:00:00
https://artificialintelligenceact.eu/
[ { "date": "2022/12/01", "position": 87, "query": "artificial intelligence labor union" }, { "date": "2023/01/01", "position": 90, "query": "artificial intelligence labor union" }, { "date": "2023/03/01", "position": 89, "query": "artificial intelligence labor union" }, { "date": "2023/07/01", "position": 88, "query": "artificial intelligence labor union" }, { "date": "2023/09/01", "position": 62, "query": "artificial intelligence labor union" }, { "date": "2023/10/01", "position": 86, "query": "artificial intelligence labor union" }, { "date": "2024/01/01", "position": 89, "query": "artificial intelligence labor union" }, { "date": "2024/04/01", "position": 89, "query": "artificial intelligence labor union" }, { "date": "2024/05/01", "position": 86, "query": "artificial intelligence labor union" }, { "date": "2024/06/01", "position": 88, "query": "artificial intelligence labor union" }, { "date": "2024/07/01", "position": 85, "query": "artificial intelligence labor union" }, { "date": "2024/08/01", "position": 88, "query": "artificial intelligence labor union" }, { "date": "2024/09/01", "position": 90, "query": "artificial intelligence labor union" }, { "date": "2024/11/01", "position": 88, "query": "artificial intelligence labor union" }, { "date": "2025/01/01", "position": 87, "query": "artificial intelligence labor union" }, { "date": "2025/02/01", "position": 78, "query": "artificial intelligence labor union" }, { "date": "2025/03/01", "position": 78, "query": "artificial intelligence labor union" }, { "date": "2025/05/01", "position": 76, "query": "artificial intelligence labor union" }, { "date": "2025/06/01", "position": 99, "query": "AI labor union" }, { "date": "2025/06/16", "position": 77, "query": "artificial intelligence employment" }, { "date": "2025/06/16", "position": 35, "query": "artificial intelligence labor union" } ]
McNerney Introduces the 'No Robo Bosses Act' to Ensure Human ...
McNerney Introduces the ‘No Robo Bosses Act’ to Ensure Human Oversight of AI in the Workplace
https://sd05.senate.ca.gov
[]
Press Release McNerney Introduces the ‘No Robo Bosses Act’ to Ensure Human Oversight of AI in the Workplace State Senator Jerry McNerney, D-Pleasanton, on Thursday introduced the “No Robo Bosses Act” — groundbreaking legislation that would require human oversight of artificial intelligence systems in the workplace to help prevent abuses. SB 7 would bar California employers from relying primarily on AI systems, known as automated decision-making systems (ADS), to make hiring, promotion, discipline, or termination decisions without human oversight. The legislation would also prohibit the use of ADS systems that use personal information of workers to “predict” what they’ll do in the future. “Businesses are increasingly using AI to boost efficiency and productivity in the workplace. But there are currently no safeguards to prevent machines from unjustly or illegally impacting workers’ livelihoods and working conditions,” Sen. McNerney said. “SB 7 does not prohibit ADS in the workplace; rather, it establishes guardrails to ensure that California businesses are not operated by robo bosses — by putting a human in the loop. AI must remain a tool controlled by humans, not the other way around.” SB 7 is sponsored by the California Federation of Labor Unions, AFL-CIO. If signed into law, the No Robo Bosses Act would be the first such law in the nation. “No worker should have to answer to a robot boss when they are fearful of getting injured on the job, or when they have to go to the bathroom or leave work for an emergency,” said Lorena Gonzalez, President of the California Federation of Labor Unions, AFL-CIO, representing over 1,300 unions with 2.3 million union members. “When it comes to decisions that most impact our jobs, our safety and our families, we need human oversight.” Employers around the globe are increasingly employing ADS — powered by artificial intelligence — to surveil, manage, and replace workers in the name of maximizing productivity and reducing costs. According to recent estimates, there are more than 550 so-called “bossware” products available to employers to help manage workplaces. And according to a recent national survey, 40 percent of workers experience some form of automated task management. There are already examples of robo bosses prioritizing efficiency and cost-savings over health and safety. In the health care industry, nurses who work through gig-nursing apps have had their wages and hours set by algorithms with no human oversight. There are also examples of ADS errors that have resulted in people being mistakenly terminated from their job. Also, some ADS systems use so-called “predictive behavior” models that collect personal data, use that data to profile an employee, and potentially initiate adverse actions against a worker based on what the AI “predicts” the worker will do. SB 7 establishes necessary safeguards of AI in the workplace by: Requiring human oversight and independent verification for promotion, demotion, firing, and disciplinary decisions. Barring ADS systems from obtaining or inferring a worker’s immigration status; veteran status; ancestral history; religious or political beliefs; health or reproductive status, history, or plan; emotional or psychological state; neural data; sexual or gender orientation; disability; criminal record; credit history; or any other status protected by state law.
Prohibiting the use of ADS for predictive behavior analysis based on personal information collected on workers that results in adverse action against a worker for what the AI predicts the worker will do. Creating a process for workers to appeal decisions made by ADS. SB 7 is co-authored by Assemblymembers Sade Elhawary, D-South Los Angeles, and Isaac Bryan, D-Los Angeles. In Congress, McNerney co-founded and co-chaired the Artificial Intelligence Caucus and authored the AI in Government Act. Sen. Jerry McNerney is chair of the Senate Revenue and Taxation Committee and his 5th Senate District includes all of San Joaquin County and Alameda County’s Tri-Valley.
2022-12-01T00:00:00
https://sd05.senate.ca.gov/news/mcnerney-introduces-no-robo-bosses-act-ensure-human-oversight-ai-workplace
[ { "date": "2022/12/01", "position": 97, "query": "artificial intelligence labor union" }, { "date": "2024/02/01", "position": 57, "query": "artificial intelligence labor union" }, { "date": "2024/03/01", "position": 96, "query": "AI labor union" }, { "date": "2024/04/01", "position": 99, "query": "artificial intelligence labor union" }, { "date": "2024/06/01", "position": 98, "query": "artificial intelligence labor union" }, { "date": "2024/09/01", "position": 91, "query": "AI labor union" }, { "date": "2024/11/01", "position": 98, "query": "artificial intelligence labor union" }, { "date": "2025/03/01", "position": 5, "query": "AI labor union" } ]
McKinsey named a Leader in AI Service Providers by ...
McKinsey named a Leader in AI Service Providers by Forrester
https://www.mckinsey.com
[]
December 1, 2022 McKinsey has been named a Leader, the highest designation, in The Forrester Wave™: AI Service Providers, Q4 2022 report.
McKinsey has been named a Leader, the highest designation, in The Forrester Wave™: AI Service Providers, Q4 2022 report. Forrester evaluated 12 firms, assessing them on 29 criteria grouped into the categories of current offering, strategy, and market presence. We received the highest possible rating in criteria including AI talent, vision, and market approach. “McKinsey & Company leads enterprises with end-to-end AI transformation,” the Forrester report notes, also recognizing that “McKinsey addresses AI holistically: as a technology, an operational model, and a strategic asset.” The report also notes that McKinsey “[places] a heavy emphasis on ROI.” McKinsey acquired the AI arm of our firm, QuantumBlack, in 2015, and the Forrester report points out that this move “continues to deliver top-notch data science talent.” QuantumBlack has scaled significantly since then to over 40 locations worldwide. “We are proud and humbled by the recognition,” says senior partner Alexander Sukharevsky who along with Alex Singla leads QuantumBlack, AI by McKinsey. “In the past 18 months, we have invested heavily in building our talent bench as well as our technology. Our communities are knit together by a culture of intense collaboration and continuous learning, making it a home for the best global AI talent.” Our technologists have built industry-specific accelerators that incorporate much of the required code and tooling, speeding up development and deployment time while reducing risk. The Forrester report notes that these “well-developed engineering and technology protocols, and over 25 industry assets, set the bar for the market.” Our AI experts have also been working with clients to assess the practical business benefits of emerging technologies, such as digital twins and AI for data quality, and help them proactively address the requirements for digital trust in the products and experiences that use AI, digital technologies, and data. “To us, this recognition is a testament to our innovative work and the investment our firm has been making in AI talent, technologies, knowledge, and innovation,” says Alex Singla. “The biggest challenge we see now is in helping companies advance from a handful of pilots to running hundreds or thousands of models on an AI platform and gaining the value that comes from scaling.” Achieving that scale takes innovation on multiple fronts. “We have focused on end-to-end AI, helping clients develop the thinking and planning for the people, processes, and technologies required for scaling. We believe that MLOps is part of the answer to scale,” explains Nayur Khan, a partner at McKinsey. “This involves establishing an assembly line for developing and deploying AI products that end users love–bringing the right skills, from product and design thinking to data engineering and machine learning, to cloud and software engineering. Finally, baking in legal, ethical, and compliance checks and balances in an automated fashion.” With MLOps, businesses can quickly solve use cases and scale their processes due to high interoperability and reusable components. This is somewhat new to data science–just as good software engineering and DevOps were core to improving software delivery.
Business leaders who are serious about the value of AI are increasingly looking to MLOps to help unlock it. For example, in the past year, we’ve partnered with a global life-sciences company to help them transform a set of fragmented AI labs into an enterprise-wide MLOps solution architecture. Some 400 data scientists from 30 teams worldwide have already developed and deployed more than 25 AI use cases across research, clinical, and commercial business areas, delivering business impact while improving productivity and reliability. McKinsey senior partner Kia Javanmardian leads our MLOps service line. “In the early period of AI, companies created models but didn’t ensure sustainability and scalability,” Kia says. “Now we need to ensure the infrastructure is in place from the onset to sustain and scale impact from analytics. This will require us to be deeply involved with defining and implementing our clients’ AI technology architecture.” As this latest Forrester rating confirms, that’s a role we’re well-suited to play.
2022-12-01T00:00:00
https://www.mckinsey.com/about-us/new-at-mckinsey-blog/mckinsey-named-a-leader-in-ai-service-providers-by-forrester
[ { "date": "2022/12/01", "position": 5, "query": "artificial intelligence business leaders" } ]
PwC rated as a Leader in Artificial Intelligence Service ...
PwC rated as a Leader in Artificial Intelligence Service Providers by Independent Research Firm
https://www.pwc.com
[]
'PwC sparks intelligence in the digital business.' PwC announced that it was cited as a Leader in The Forrester Wave™: AI Service Providers, Q4 2022.
‘PwC sparks intelligence in the digital business.’ PwC announced that it was cited as a Leader in The Forrester Wave™: AI Service Providers, Q4 2022. The Forrester report notes that, ‘AI Service Providers differentiate via strategy and execution.’ Commenting on PwC, the report states that, ‘PwC sparks intelligence in the digital business. The company focuses on business and digital transformation. PwC pushes its New Equation, a multibillion-dollar initiative, to drive substantial business outcomes for clients. AI is a key component for digital success. As evidence of PwC’s commitment to sustained AI outcomes, nearly half of its clients sign on to outcome- and revenue-based pricing (compared to less than 15% for other participants in this Forrester Wave evaluation). Partners are extensive and core to achieving this vision. PwC looks beyond alliances and fosters joint business partnerships and academic relationships for emerging technology and cocreation. The provider also gets credit for its strategy to attract, develop, and retain talent.’ It further notes that, ‘PwC has strong data science and consulting talent. Thirty percent (roughly 12,600 employees) of its data and analytic practice staff are data science and AI experts with advanced degrees. In addition, 85% of the data and analytic staff have completed role-based training in the past year.’ Anand Rao, Global AI Lead; US Innovation Lead, Emerging Technology Group, PwC US, says, ‘We are proud to be recognised as a leader in the Forrester Wave for AI Service Providers. Artificial intelligence and analytics are transforming how business does business. From back-office automation to front office transformation, from changed skills, cultures and ways of working to new go-to-market strategies and business models, AI is everywhere.’ He goes on to say, ‘AI isn’t easy, but PwC’s AI capabilities are helping many companies get it right. Our experienced team and ready-to-go digital accelerators are helping companies with key analytics and AI challenges: developing and assessing strategy, enhancing and monetizing data sets, upskilling talent, building an AI-ready operational model and culture, upgrading technology, creating and maintaining AI models, optimizing potential acquisition strategies and establishing governance and managing risks.’
2022-12-01T00:00:00
https://www.pwc.com/gx/en/about/analyst-relations/2022/forrester-wave-leader-ai-service-providers-2022.html
[ { "date": "2022/12/01", "position": 7, "query": "artificial intelligence business leaders" } ]
The top business books to read- Recommendations of AI
The heart of the internet
https://www.reddit.com
[]
Here are some top business books that I would recommend for startup founders: "The Lean Startup" by Eric Ries, "Zero to One" by Peter Thiel, "The Innovator's ...
I gave ChatGPT the following prompt: “Tell me the top business books that I should read as a startup founder” Here’s its answer: Here are some top business books that I would recommend for startup founders: "The Lean Startup" by Eric Ries "Zero to One" by Peter Thiel "The Innovator's Dilemma" by Clayton Christensen "The Hard Thing About Hard Things" by Ben Horowitz "Good to Great" by Jim Collins "The E-Myth Revisited" by Michael E. Gerber "Traction: How Any Startup Can Achieve Explosive Customer Growth" by Gabriel Weinberg and Justin Mares "The 7 Habits of Highly Effective People" by Stephen R. Covey "Delivering Happiness" by Tony Hsieh "The Lean Entrepreneur" by Brant Cooper and Patrick Vlaskovits What do you think? Shameless plug: every Monday, I share bite-sized startup case studies. Subscribe to receive tomorrow’s in your inbox
2022-12-01T00:00:00
https://www.reddit.com/r/Entrepreneur/comments/zckmzn/the_top_business_books_to_read_recommendations_of/
[ { "date": "2022/12/01", "position": 34, "query": "artificial intelligence business leaders" } ]
Hybrid Intelligence: How AI roles can lead to commercial ...
Hybrid Intelligence: How AI roles can lead to commercial success (or failure)
https://www.linkedin.com
[ "Steve Nouri", "Dr. Andrée Bates", "Gopinath Polavarapu" ]
85% of AI projects fail to deliver on their original aims. Mostly, this is because projects are led by data scientists or IT professionals rather than business ...
Industry 4.0 has arrived, ushering in a new era of direct data exchange and automation in economies and markets globally. This new world is manifesting itself through cloud computing, IoT, intelligent data tools, and a broad spectrum of other transformative technologies. However, as is common in times of upheaval, organizations are struggling to keep up with the rapidly shifting social, political, and economic realities that stem from having access to vast amounts of data. Far too often, corporate leaders adopt a policy of “hiring a data scientist” or “getting the IT department to figure it out” without fully comprehending how a good data team could become their most valuable asset. In 2019, researchers working for Pactera Technologies revealed a shocking state of affairs when they estimated that 85% of AI projects fail to deliver on their original aims. Mostly, this is because projects are led by data scientists or IT professionals rather than business strategists, who are side-lined due to their lack of technical knowledge. Today, I’m going to take a deep dive into AI implementation and how it can trip up even the best-prepared firms. I’ll offer a new perspective on AI and explore solutions to ensure your machine learning projects consistently land within that elite 15% of AI undertakings that do deliver on their original goals. Understanding how AI can go wrong — Industry 4.0 in context It can be easy for AI to feel, at best, like little more than the latest buzzword — a descriptor for simple algorithms that do little to improve your business’s capabilities. At worst, AI can become a barely understood black box that threatens your workforce. Just as companies that thrived in the 20th century had to rethink their processes as we moved into the digital age, businesses again need to adapt to the new age of AI-driven information exchange and machine-to-machine communication. Unfortunately, adjusting to this status quo isn’t as simple as digitizing paper files — impactful AI execution requires an ability to grasp the relationships between AI tools themselves, the way they’re implemented, and the people who will interact with them. Even machine learning experts can run into difficulties when implementing AI in the real world. In particular, the healthcare industry has been slow to utilize neural networks and other advanced AI tech. Partly, that’s down to inflexible regulations and concerns about limited decision traceability, but adoption has also been hampered by weaknesses that creep in due to wide gulfs between management and development teams. Why good AI projects go bad It seems like everyone is looking to implement AI into their business model. The problem is, managers and executives often throw together IT teams to develop AI functionalities and then walk away. When a project ultimately fails or proves to be superfluous, everyone is baffled. What’s important to understand is that AI project failure doesn’t occur in a vacuum. People are more than just secondary considerations to hardware and software when it comes to AI — the reason things don’t work out as they should is almost always down to human rather than technical failure. In a recent seminar I attended, a participant shared an anecdote about a project in which a tech team was tasked with finding an automated solution to a business challenge. Time and funds were poured into the project to no avail. Eventually, someone identified a low-tech solution, instantly solving the problem.
In the end, a low-tech solution solved the issue, requiring no additional resources. The takeaway? Non-tech execs and employees need to involve themselves in the development of AI tools to ensure objectives are contextualized and developers aren’t operating in an environment that needlessly blinds them to project weaknesses. This is a challenge seen across the pharma industry, where tech teams are tasked with leading AI projects but are not close enough to the business challenges to identify the optimal solutions. What business leaders need to do So, what does a well-rounded approach to AI look like for businesses? In the past, firms that have successfully solved these issues have done so by hiring those rare professionals capable of bridging gaps between data science and business strategy. Of course, the problem with hiring these kinds of multidisciplinary workers is that they are not yet common; currently, you either have commercial teams or tech teams. However, the most effective way to assemble a team that can zero in on the best opportunities that Industry 4.0 has to offer is to teach your existing commercial teams and strategists how to take on “hybrid intelligence” roles. In other words, you can leverage your existing workforce through skill training rather than hiring anew. To make that work, professionals need to educate themselves about what AI is, the strengths and limitations of different approaches, and how it can be applied, without having to learn code and math themselves, since there are existing teams for that. After that, they can apply their new knowledge in context — the logical next step is to identify specific recurring problems, financial burdens, or goals to address via machine learning and other Artificial Intelligence techniques. Understanding the basics of creating a foolproof AI roadmap Initially, finding the best way for your organization to utilize AI will require a strategy of its own — an adequate preparation work package should align your aims with organizational capabilities and take shareholder expectations into account. Once the planning phase is complete, it’s time to hit the ground running. Subsequent scoping should involve both business strategists and data scientists. Together, this multidisciplinary team can scour relevant data for clues on how to proceed and come up with a development model that shows infrastructure and data engineers what kind of software environment is necessary for the project. Finally, developers will team up with infrastructure engineers to integrate machine learning protocols. This is the point at which teams will decide if a custom tech stack is required or whether you could source an existing solution. Importantly, business analysts will also begin to receive data so that project success and KPIs can be gauged by product owners in real time. Preparing your commercial teams for delivery Whether your team needs to commercialize an AI solution for external distribution or integrate a new tool into internal workflows, a culture of flexibility needs to be fostered. A key benefit of Artificial Intelligence techniques is often the elimination of monotonous tasks, and this can feel threatening for current workers. However, open discussion and education will help to assuage concerns. What’s more, business teams that possess a deep understanding of AI will be better able to use newly-developed custom AI/ML tools for creative problem-solving.
Training in AI strategy for pharma business challenges Beyond a willingness and enthusiasm for bringing AI on board, teams will need specific skills to become adept in handling AI tools and interpreting outputs from them. In the healthcare sector, for example, strategists require a good understanding of customer engagement and experience to utilize AI effectively — the healthcare space is one of many markets where future growth will largely come from improved customer engagement and strong customer experience as well as the proliferation of existing ideas and services. Conclusion As healthcare companies seek to implement AI, it’s important to understand why these projects so often fail. Generally, bad outcomes aren’t linked to poor AI but rather can be traced back to a lack of strong business leadership and the limited capacity of tech groups to communicate with or address the challenges faced by business leaders. These issues can be tackled by delivering a pragmatic AI education for healthcare teams that is business focused (rather than math and tech focused) yet builds enough understanding of the math and tech to guide those teams.
2022-12-01T00:00:00
https://www.linkedin.com/pulse/hybrid-intelligence-how-ai-roles-can-lead-commercial-success-bates
[ { "date": "2022/12/01", "position": 38, "query": "artificial intelligence business leaders" } ]
AI and Leadership
AI and Leadership
https://www.atlantis-press.com
[ "Wenjing Xiong" ]
by W Xiong · 2022 · Cited by 20 — Meanwhile, artificial intelligence (AI), as a new technology, allows machines to perform difficult tasks that ordinarily need human intelligence. AI will ...
Leaders are important for developing and implementing new strategic plans, as well as for interacting with and motivating employees to improve their commitment to the company’s goals. Maintaining leadership skill development is critical to keeping on the right path. Enlightened corporate leaders recognize the problems of employees and put new progress of leadership theory into practice to create a more successful working environment. Meanwhile, artificial intelligence (AI), as a new technology, allows machines to perform difficult tasks that ordinarily need human intelligence. AI will increase the productivity of leaders by taking over some automated, mechanical, and administrative activities. It is difficult for artificial intelligence to replace human care, human thinking, and human interaction with human factors. Future leaders may need to adjust their attention to place more emphasis on these elements.
2022-12-09T00:00:00
2022/12/09
https://www.atlantis-press.com/proceedings/mmet-22/125977728
[ { "date": "2022/12/01", "position": 51, "query": "artificial intelligence business leaders" } ]
Vasi Philomin's Post
I am looking forward to addressing India’s top business leaders at the Amazon AI Conclave 2022, one of the largest artificial intelligence and machine learning conferences in India on December 14th.
https://www.linkedin.com
[ "Rob Heymann", "New Business Sales Manager", "Tata Communications", "Managed Services", "Telecommunications", "Unified Communications", "Divya Gupta Lamba", "Executive Director", "Ai Strategy", "Sustainability Digital Solutions" ]
I am looking forward to addressing India's top business leaders at the Amazon AI Conclave 2022, one of the largest artificial intelligence and machine ...
I am looking forward to addressing India’s top business leaders at the Amazon AI Conclave 2022, one of the largest artificial intelligence and machine learning conferences in India on December 14th. Join us for the 6th edition of the Amazon AI Conclave, a conference that aims to help builders and businesses build smart, customer-centric, scalable solutions in the cloud and on the edge using Amazon’s broadest and deepest set of AI and ML services. #AmazonAIConclave Sanket Dhurandhar Vikram Anbazhagan Guru Bala Puneet Chandok Karthik Sathuragiri Swami Sivasubramanian http://bit.ly/3h6PtVm
2022-12-01T00:00:00
https://www.linkedin.com/posts/vasi-philomin_amazonaiconclave-activity-7007245832580190208-3CzI
[ { "date": "2022/12/01", "position": 53, "query": "artificial intelligence business leaders" } ]
How emotionally intelligent computers can benefit society
How emotionally intelligent computers can benefit society
https://www.vodafone.com
[]
Emotion AI expert Rana el Kaliouby explains how the technology can impact the way people learn, work and monitor their health.
Rana el Kaliouby is Deputy CEO of Smart Eye and co-founder of Affectiva, a technology company developing emotion artificial intelligence (AI) software that enables machines to detect human emotions and complex cognitive states. AI is developing fast, and with many potential applications in the world of communications technology and connected devices, we spoke with Rana at the Arch Summit in Luxembourg to learn how emotion AI can impact the way people learn, work and monitor their health. “In the future, our technology is going to interact with us in the way we interact with one another,” says Rana el Kaliouby. “It’s going to be perceptive, empathetic, emotional and social. And there are so many applications for that.” Technology: a black hole for emotions When Rana el Kaliouby moved from Cairo to study at the University of Cambridge in 2000, working to build emotionally intelligent machines as part of a PhD in computer vision and machine learning, her husband remained in Egypt. They would keep in touch through instant messaging platforms, as international calls were too expensive and video conferencing didn’t exist. Rana, alone with her laptop, was homesick. And her laptop had no clue. “It dawned on me how completely insensitive computers are to our emotional state. I was messaging home, but there was no way for me to genuinely transmit how I felt. Technology was a black hole for emotions – all the richness of nonverbal communication that we as humans use to build empathy, love, loyalty and trust was gone. That sent me on a journey to bring emotional intelligence into our machines.” The commercial opportunity for emotionally intelligent devices After Cambridge, Rana took her ideas to the Massachusetts Institute of Technology (MIT), where she joined Dr Rosalind Picard’s lab. Together, they worked to develop emotion AI for different applications, initially focusing on how the technology could be used to help children with autism better understand emotional expression. At an MIT event for corporate sponsors, Rana noted an increasing commercial interest in her work. Realising she had a unique opportunity to take something she cared about and deliver it at scale, she left MIT and partnered with Rosalind to set up Affectiva. One of their early projects involved a partnership with Brain Power, a tech startup which embedded Affectiva’s technology into smart glasses as part of a research programme for children with autism and developmental disabilities. After 12 years, and more than USD $51m raised through funding rounds, their company was acquired by Smart Eye, where Rana is now deputy CEO. Applications for health and wellness The Covid-19 pandemic accelerated interest in emotion AI, as many businesses and schools made the shift to remote work and online learning. Increasing reliance on video conferencing platforms like Microsoft Teams and Zoom led to a rise in other challenges. “Certain issues quickly rose to the top. We saw a drop off in learner and student engagement, and concerns with mental health. Through emotion AI, we can detect the facial and vocal biomarkers of stress, anxiety, depression, and even suicidal intent. The data is there, it’s just a case of deploying that technology at scale. But imagine the potential, with us all spending so much time on our devices,” Rana says. Rana also sits on the board of an early-stage AI start-up, Videra Health, which operates in the remote patient monitoring space.
Videra provides an app that enables patients with depression, for example, to log in and upload a self-recorded video. Videra then uses algorithms to quantify the patients’ levels of stress and anxiety. “The problem,” Rana says, “is that not everybody has this app. There is a need to partner with hospitals, clinicians and therapists so the technology can scale and support as many patients as possible.” How technology can intersect with health and wellness continues to be a key focus area for Rana. Having spent most of her career exploring how AI can sense emotion expressed physically, she is turning inward – how can sensors and AI work together to track internal health indicators, such as gut health or microbiome levels, hydration, or even ageing clocks? “Sensors give us the data, AI can help us analyse it and personalise interventions for different people, like specific diets or exercise programmes. I think this intersection of AI and biology is the next area where AI can have a lot of impact.” Rana was speaking from Arch Summit, the Luxembourg tech event hosted by Vodafone and Technoport joint-venture, Tomorrow Street.
2022-12-01T00:00:00
https://www.vodafone.com/news/digital-society/emotion-ai-emotionally-intelligent-computers-benefit-society
[ { "date": "2022/12/01", "position": 65, "query": "artificial intelligence business leaders" } ]
Artificial intelligence and change management in small and ...
Artificial intelligence and change management in small and medium-sized enterprises: an analysis of dynamics within adaptation initiatives
https://pmc.ncbi.nlm.nih.gov
[ "Sara I C Lemos", "Iscte Business School", "University Institute Of Lisbon", "Avenida Das Forças Armadas", "Lisbon", "Fernando A F Ferreira", "Bru-Iul", "Fogelman College Of Business", "Economics", "University Of Memphis" ]
by SIC Lemos · 2022 · Cited by 58 — ... business innovation cooperation. The proposed system's practical ... Leadership motivated to apply artificial intelligence ...
The structuring phase was of crucial importance to the current proposed decision-support system because its results can help SMEs analyze possible AI adaptation initiatives. At this stage, a panel of specialists was recruited from among professionals known in their field as decision makers who can clarify objectives, identify areas of concern, and organize ideas (Belton & Stewart, 2002). The number of experts on the panel can range from 5 to 12 members (Eden & Ackermann, 2004), so six decision makers (i.e., professionals working in SMEs and with practical knowledge about AI) were recruited for the present study. Specifically, the panel members comprised an AI developer from SingularityNET, a Replai software engineer, a TNX Logistics software developer, and the Data Science Portuguese Association’s executive director. The other two were the chief executive officer and data science specialist of ML Analytics and a data scientist from Border Innovation. These six specialists expressed an active interest in participating in this research and investigating the decision-making problem of SME adaptation to AI-based tools, which ensured that these experts would be available to share their experience and knowledge in group work sessions. This study is process-oriented, so representativeness was not—and did not have to be—a point of concern. Bell and Morse (2013) note that the objective of the selected methodologies is not to make generalizations but rather to maintain a strong focus on process. In addition, once the expert panel was defined, the COVID-19 pandemic situation meant that the sessions had to be held online to guarantee the participants’ safety and well-being. A facilitator and two technical assistants were also present to provide support and record the results when necessary.

The first group session lasted approximately three hours and covered the first phase of analyzing the decision problem (i.e., SME adaptation to AI tools). The main objective was to create a group cognitive map based on the panel members’ interactions. The meeting began with the introduction of each expert and a brief overview of the study and its methodological approach. The online platform used to conduct this session was the Miro platform (http://www.miro.com), which allowed the specialists to interact as needed.

To identify significant factors affecting SME adaptation initiatives, the “post-its technique” (Ackermann & Eden, 2001) was applied to facilitate the collection of the input needed to construct the decision-support model. This procedure consisted of writing on post-it notes the decision criteria (i.e., determinants of initiatives that help SMEs adapt to AI) that the specialists identified as relevant to the decision-making process and as an appropriate response to the trigger question.

A trigger question was then asked of the decision-maker panel: “Based on your professional experience, what initiatives can SMEs develop to facilitate change management while adapting to AI tools?” This question encouraged the experts to exchange ideas and discuss the topic more fully.

The panel members were informed that only one digital post-it note could be used for each identified criterion (Ribeiro et al., 2017). In addition, given a positive or negative causal relationship to the problem, they had to include a positive (+) or negative (−) sign, respectively, in the note. In total, 112 criteria were identified and considered important for measures that allow SMEs to adapt to using AI. The results were in line with Eden and Ackermann’s (2004) suggestion that a cognitive map should normally contain between 90 and 120 criteria.

The next step was to organize the post-it notes (i.e., established criteria) into areas of concern or clusters. The decision makers separated the criteria identified into five clusters labeled as follows: Human Resources (C1); Information Technology (IT) Infrastructure (C2); Know-How and Knowledge (C3); Organizational Policies and Management (C4); and Leadership (C5). The last phase of the first group session consisted of an internal analysis of each cluster in order to establish the criteria’s hierarchy within each cluster. That is, the most important factors were placed at the top of their respective clusters, while the intermediate ones were placed in the middle and the least important criteria at the bottom.

After the first group session, the five clusters were formatted into a group cognitive map using the Decision Explorer software (http://www.banxia.com). Figure 2 presents the map’s final version, which was collectively validated by the participants in the second session. An editable version is available upon request from the authors.

The map in Fig. 2 has three significant features. First, each cause-and-effect relationship is represented by an arrow. Second, some criteria are linked to more than one cluster. Last, the negative signs next to arrows indicate the criteria that negatively influence SME adoption of AI. In short, cognitive mapping helped the expert panel identify decision criteria, thereby facilitating the search for appropriate solutions for the specific decision problem under study through intensive debate and knowledge sharing (Barão et al., 2021). The mapping process proved to be an important step toward communicating about and structuring the issue in question and provided the necessary conditions for the subsequent application of DEMATEL in a neutrosophic context.

Evaluation phase: DEMATEL and neutrosophic logic

After the structuring phase was completed, the panel members could move on to the evaluation phase, which took place during the second group session held online via Zoom. First, the group cognitive map was shown to the panel members so that they could make any adjustments needed. The next techniques to be used were then presented. The facilitator emphasized the advantage of integrating the two approaches selected (i.e., DEMATEL in a neutrosophic environment) due to the uncertainty and indeterminacy inherent to decision-making processes (Ferreira & Meidutė-Kavaliauskienė, 2019). During this session, the specialists thus focused on completing six relation matrices (i.e., a first matrix indicating the relationships between clusters and five matrices corresponding to the expected reality within each cluster). Because of the clusters’ size, nominal group and multivoting techniques were used to select the most significant criteria to be included in the last five matrices. Once the most important factors were identified, the decision makers assessed these criteria’s interrelationships on a DEMATEL scale ranging from 0 to 4 (0 = “No influence”; 1 = “Little influence”; 2 = “Medium influence”; 3 = “Strong influence”; 4 = “Very strong influence”). The experts could then conduct neutrosophic assessments of these relationships (i.e., to identify how likely—expressed as a percentage—their judgment was to be true (T), uncertain (I), or false (F)).
The panel was informed that, within neutrosophic logic, the total of the percentages attributed to T, I, and F can differ from 100%. After finishing the evaluation using neutrosophic values, the specialists next aggregated the values (i.e., crispification) to obtain the initial inputs needed to apply the DEMATEL technique. This procedure involved performing an extra calculation for all values obtained during the evaluation phase, which utilized the crispification Eq. (2) (see Sect. 3.1). The crisp values were then ready for the five DEMATEL steps (see Sect. 3.2). The first round of analysis focused on the relationships between the previously identified clusters listed in Table 1. The panel created the matrix shown in Table 2, which includes the neutrosophic values assigned by the decision makers and subjected to crispification. The results of this step are given in Table 3.

Table 1. Clusters identified
C1 Human Resources
C2 Information Technology Infrastructure
C3 Know-How and Knowledge
C4 Organizational Policies and Management
C5 Leadership

Table 2. Matrix with neutrosophic values for clusters
   | C1 | C2 | C3 | C4 | C5
C1 | – | 3 (0.8, 0.6, 0.4) | 3 (0.8, 0.4, 0.2) | 2 (0.8, 0.3, 0.2) | 3 (0.6, 0.5, 0.4)
C2 | 3 (0.9, 0.5, 0.5) | – | 3 (0.9, 0.4, 0.1) | 1 (0.8, 0.1, 0.1) | 1 (0.7, 0.2, 0.2)
C3 | 4 (0.9, 0.5, 0.1) | 4 (0.9, 0.5, 0.1) | – | 2 (0.7, 0.5, 0.3) | 2 (0.9, 0.4, 0.1)
C4 | 3 (0.6, 0.7, 0.3) | 2 (0.6, 0.6, 0.4) | 3 (0.8, 0.5, 0.2) | – | 1 (0.9, 0.1, 0)
C5 | 4 (0.9, 0.5, 0.1) | 3 (0.9, 0.5, 0.1) | 2 (0.5, 0.7, 0.7) | 3 (0.8, 0.3, 0.2) | –

Table 3. Crisp neutrosophic values for clusters
Relationship analyzed | DEMATEL scale (x) | T | I | F | Crispification equation numerator | Crisp weight W | Final value in matrix Z
C1–C2 | 3.0 | 0.80 | 0.60 | 0.40 | 0.56795 | 0.04211 | 1.70
C1–C3 | 3.0 | 0.80 | 0.40 | 0.20 | 0.71716 | 0.05318 | 2.15
C1–C4 | 2.0 | 0.80 | 0.30 | 0.20 | 0.76195 | 0.05650 | 1.52
C1–C5 | 3.0 | 0.60 | 0.50 | 0.40 | 0.56411 | 0.04182 | 1.69
C2–C1 | 3.0 | 0.90 | 0.50 | 0.50 | 0.58769 | 0.04358 | 1.76
C2–C3 | 3.0 | 0.90 | 0.40 | 0.10 | 0.75505 | 0.05599 | 2.27
C2–C4 | 1.0 | 0.80 | 0.10 | 0.10 | 0.85858 | 0.06366 | 0.86
C2–C5 | 1.0 | 0.70 | 0.20 | 0.20 | 0.76195 | 0.05650 | 0.76
C3–C1 | 4.0 | 0.90 | 0.50 | 0.10 | 0.70000 | 0.05190 | 2.80
C3–C2 | 4.0 | 0.90 | 0.50 | 0.10 | 0.70000 | 0.05190 | 2.80
C3–C4 | 2.0 | 0.70 | 0.50 | 0.30 | 0.62141 | 0.04608 | 1.24
C3–C5 | 2.0 | 0.90 | 0.40 | 0.10 | 0.75505 | 0.05599 | 1.51
C4–C1 | 3.0 | 0.60 | 0.70 | 0.30 | 0.50334 | 0.03732 | 1.51
C4–C2 | 2.0 | 0.60 | 0.60 | 0.40 | 0.52390 | 0.03885 | 1.05
C4–C3 | 3.0 | 0.80 | 0.50 | 0.20 | 0.66834 | 0.04956 | 2.01
C4–C5 | 1.0 | 0.90 | 0.10 | 0.00 | 0.91835 | 0.06809 | 0.92
C5–C1 | 4.0 | 0.90 | 0.50 | 0.10 | 0.70000 | 0.05190 | 2.80
C5–C2 | 3.0 | 0.90 | 0.50 | 0.10 | 0.70000 | 0.05190 | 2.10
C5–C3 | 2.0 | 0.50 | 0.70 | 0.70 | 0.35969 | 0.02667 | 0.72
C5–C4 | 3.0 | 0.80 | 0.30 | 0.20 | 0.76195 | 0.05650 | 2.29
Totals: crispification equation denominator = 13.48648; Σ_{k=1}^{r} w_k^c = 1, which complies with the Eq. (1) conditions (see Sect. 3.1).

Table 3 shows that the final values used to create the DEMATEL direct-relation matrix (see Table 4) were produced by multiplying each crisp neutrosophic value (i.e., the crispification equation numerator) by the degree of influence assigned by the panel. The DEMATEL scale value x was in this way estimated for each causal relationship. The results were incorporated into direct-relation matrix Z (i.e., DEMATEL step one (see Sect. 3.2.1)) presented in Table 4. This step enabled the panel to proceed to the remaining steps (i.e., steps two through five in Sect. 3.2).
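The excerpt cites Eq. (2) for crispification without reproducing it. As a minimal sketch (not the authors' code), the standard single-valued neutrosophic deneutrosophication formula, w = 1 − sqrt(((1 − T)² + I² + F²) / 3), reproduces the Table 3 numerators exactly, and weighting each DEMATEL scale value by this crisp weight yields the matrix Z entries; the function names below are illustrative, not taken from the paper.

```python
# Sketch of the crispification step (assumed form of Eq. (2); not the authors' code).
from math import sqrt

def crispify(t: float, i: float, f: float) -> float:
    """Collapse a neutrosophic judgment (truth t, indeterminacy i, falsity f) into a crisp weight."""
    return 1 - sqrt(((1 - t) ** 2 + i ** 2 + f ** 2) / 3)

def z_entry(x: int, t: float, i: float, f: float) -> float:
    """Entry of direct-relation matrix Z: DEMATEL scale value x weighted by the crisp judgment."""
    return x * crispify(t, i, f)

# C1 -> C2 from Table 2: x = 3 with (T, I, F) = (0.8, 0.6, 0.4)
print(f"numerator = {crispify(0.8, 0.6, 0.4):.5f}")    # 0.56795, as in Table 3
print(f"Z entry   = {z_entry(3, 0.8, 0.6, 0.4):.2f}")  # 1.70, as in Tables 3 and 4
```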
Table 4. Direct-relation matrix Z for clusters
   | C1 | C2 | C3 | C4 | C5 | Total
C1 | 0.00 | 1.70 | 2.15 | 1.52 | 1.69 | 7.07
C2 | 1.76 | 0.00 | 2.27 | 0.86 | 0.76 | 5.65
C3 | 2.80 | 2.80 | 0.00 | 1.24 | 1.51 | 8.35
C4 | 1.51 | 1.05 | 2.01 | 0.00 | 0.92 | 5.48
C5 | 2.80 | 2.10 | 0.72 | 2.29 | 0.00 | 7.91
Total | 8.87 | 7.65 | 7.14 | 5.91 | 4.88 |

The initial direct-relation matrix was subsequently normalized (see Table 5) by performing the required intermediate calculations using Eqs. (4) and (5) (see Sect. 3.2.2). Next, total-relation matrix T (see Table 6) was created based on Eq. (6) (see Sect. 3.2.3) after the three matrices needed for this procedure (i.e., matrices I, I − X, and (I − X)^(−1)) were constructed.

Table 5. Normalized direct-relation matrix X for clusters (maximum column and row totals ≈ 8.9 and 8.4; 1/max = 0.112700 and 0.119719; normalization factor 1/s = 0.112700)
   | C1 | C2 | C3 | C4 | C5
C1 | 0.0000 | 0.1920 | 0.2425 | 0.1717 | 0.1907
C2 | 0.1987 | 0.0000 | 0.2553 | 0.0968 | 0.0859
C3 | 0.3156 | 0.3156 | 0.0000 | 0.1401 | 0.1702
C4 | 0.1702 | 0.1181 | 0.2260 | 0.0000 | 0.1035
C5 | 0.3156 | 0.2367 | 0.0811 | 0.2576 | 0.0000

In matrix T, column R corresponds to each row’s total obtained with Eq. (7), and row C is each column’s total calculated using Eq. (8) (see Sect. 3.2.4) in DEMATEL step four. The R value corresponds to the degree of total influence that a specific cluster has on all the other clusters. Thus, C3 has the greatest impact on the remaining clusters (i.e., 4.0691). Line C, in turn, represents the degree of influence that a given cluster receives from the other clusters, which shows that C1 is the most influenced (i.e., 4.2543). The results further reveal that C5 is the least influenced by the remaining clusters (i.e., 2.4579). The α value was calculated using Eq. (9) (see Sect. 3.2.4), namely by averaging all of matrix T’s values. With an α value of 0.6937, the most influential relationships could be retained, and all values with a lesser effect on this matrix were eliminated (see the values in red and green in Table 6). The definition of the α value thus plays a fundamental role in shaping the DEMATEL diagram of cause-and-effect relationships (i.e., IRM), as shown in Fig. 3.

Fig. 3. Influence relationship map for clusters

Figure 3 provides a quick, clear overview of the importance and significant influences with regard to SME-AI adaptation initiatives. R + C (i.e., the horizontal axis in Fig. 3) reveals the total effects given and received by the clusters in question, highlighting the clusters’ order of importance (i.e., the higher a cluster’s R + C value is, the more important that cluster will be and the greater its impact will be on the analysis system). Based on the IRM, C1 is the most important cluster, with the highest R + C value of 7.8355, although this number is quite close to C3’s value. C4 is the least significant, with the lowest R + C value of 5.8529. The overall order of importance can be expressed as C1 > C3 > C2 > C5 > C4. In addition, Fig. 3 reveals that C3 is the core factor cluster (i.e., QI), C5 is the driving factor cluster (i.e., QII), C4 and C2 are independent factor clusters (i.e., QIII), and C1 is an impact factor cluster (i.e., QIV).

After analyzing this first matrix and respective IRM, the same five steps were completed for each individual cluster. As mentioned previously, the specialists had to first select the most important criteria within each cluster using nominal group and multivoting techniques. The initial DEMATEL matrices (i.e., direct-relation matrix Z) incorporate crisp weights produced by following the same procedures used in the above inter-cluster analysis.
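A compact sketch of DEMATEL steps two to five as they are applied above (again, not the authors' code): the Table 4 matrix is normalized by the largest row or column total, the total-relation matrix is obtained as T = X (I − X)^(−1), and the R (row-sum) and C (column-sum) vectors, the threshold α, and the quadrant labels of the influence relationship map are then derived from T. Splitting the prominence axis at the mean of R + C is a common convention assumed here, since the exact cut-off used for the diagram is not stated in this excerpt.

```python
# DEMATEL steps two to five on the cluster matrix from Table 4 (sketch; conventions as noted above).
import numpy as np

labels = ["C1", "C2", "C3", "C4", "C5"]
Z = np.array([
    [0.00, 1.70, 2.15, 1.52, 1.69],
    [1.76, 0.00, 2.27, 0.86, 0.76],
    [2.80, 2.80, 0.00, 1.24, 1.51],
    [1.51, 1.05, 2.01, 0.00, 0.92],
    [2.80, 2.10, 0.72, 2.29, 0.00],
])

s = max(Z.sum(axis=1).max(), Z.sum(axis=0).max())  # largest row or column total (here the column total, about 8.87)
X = Z / s                                          # step two: normalized direct-relation matrix (Table 5)
T = X @ np.linalg.inv(np.eye(len(Z)) - X)          # step three: total-relation matrix
R = T.sum(axis=1)                                  # influence exerted by each cluster
C = T.sum(axis=0)                                  # influence received by each cluster
alpha = T.mean()                                   # threshold used to prune weak links in the IRM

prominence_split = (R + C).mean()                  # assumed axis split for the four quadrants
for k, name in enumerate(labels):
    prominence, relation = R[k] + C[k], R[k] - C[k]
    if relation >= 0:
        quadrant = "core factor (QI)" if prominence >= prominence_split else "driving factor (QII)"
    else:
        quadrant = "impact factor (QIV)" if prominence >= prominence_split else "independent factor (QIII)"
    print(f"{name}: R+C = {prominence:.4f}, R-C = {relation:.4f} -> {quadrant}")
print(f"alpha = {alpha:.4f}")
```

The printed R, C, and α figures can be checked against those quoted in the text, such as the R + C ordering C1 > C3 > C2 > C5 > C4.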
The C1 criteria listed in Table 7 were considered by the decision-maker panel to be of greater importance. After selecting the criteria, the experts crispified the values in the neutrosophic matrix (see Table 8) so that direct-relation matrix Z could be completed with crisp values (see Table 9) and the final results could be analyzed.

Table 7. Most significant criteria: human resources cluster (selected sub-criteria)
SC17 Lack of practical knowledge
SC15 Experienced professionals specializing in artificial intelligence
SC31 Development teams’ knowledge of their company’s business
SC24 Team motivation
SC28 Management of expectations regarding artificial intelligence applications

Table 8. Matrix with neutrosophic values: human resources cluster
     | SC17 | SC15 | SC31 | SC24 | SC28
SC17 | – | 4 (1, 0.1, 0) | 3 (0.8, 0.5, 0.2) | 2 (0.7, 0.2, 0.2) | 4 (0.9, 0.1, 0.1)
SC15 | 3 (0.8, 0.5, 0.2) | – | 4 (0.8, 0.5, 0.2) | 4 (1, 0.5, 0) | 4 (0.9, 0.1, 0.1)
SC31 | 1 (0.8, 0.2, 0) | 3 (0.9, 0.5, 0.2) | – | 3 (0.8, 0.4, 0.2) | 4 (0.9, 0.1, 0.1)
SC24 | 1 (0.5, 0.8, 0.5) | 1 (0.4, 0.8, 0.3) | 2 (0.8, 0.7, 0.2) | – | 4 (0.9, 0.1, 0.1)
SC28 | 0 (0.8, 0.2, 0) | 0 (0.8, 0.2, 0) | 4 (0.7, 0.5, 0.3) | 4 (0.9, 0.1, 0.1) | –

Table 9. Direct-relation matrix Z: human resources cluster
     | SC17 | SC15 | SC31 | SC24 | SC28 | Total
SC17 | 0.00 | 3.77 | 2.01 | 1.52 | 3.60 | 10.90
SC15 | 2.01 | 0.00 | 2.67 | 2.85 | 3.60 | 11.12
SC31 | 0.00 | 2.05 | 0.00 | 2.15 | 3.60 | 7.80
SC24 | 0.38 | 0.40 | 1.13 | 0.00 | 3.60 | 5.51
SC28 | 0.00 | 0.00 | 2.49 | 3.60 | 0.00 | 6.09
Total | 2.39 | 6.22 | 8.29 | 10.12 | 14.40 |

Table 10 confirms that SC17 has the most influence on the other criteria because it has an R value of 1.5872, followed immediately by SC15 with an R value of 1.5395. SC28 is the most influenced by all the remaining SCs, with a C value of 1.9488. Figure 4 also shows that the SCs of this cluster can be listed by order of importance as follows: SC28 > SC24 > SC31 > SC15 > SC17. SC28 thus has the greatest significance overall, with an R + C value of 2.7440.

Table 10. Total-relation matrix T: human resources cluster

Fig. 4. Influence relationship map for human resources cluster

Regarding the R − C values, the SCs belonging to the effects group (i.e., negative R − C) are SC31, SC24, and SC28, which are located at the bottom half of the respective DEMATEL diagram. The causes group (i.e., positive R − C) is composed of SC17 and SC15, which affect the other factors more than these two SCs are influenced. This group appears in the IRM’s upper half. The DEMATEL-diagram quadrants organize the criteria as follows. SC15 and SC17 are driving factors. SC31 can be considered an independent factor, and SC24 and SC28 are impact factors.

The analysis next focused on C2. The decision makers again selected the criteria to be included in this cluster (see Table 11) so that, at a later stage, the neutrosophic matrix could be created more easily (see Table 12). This matrix served as the basis for C2’s initial direct-relation matrix, which contained the crisp weights needed to conduct this analysis (see Table 13).

Table 11. Most significant criteria: information technology infrastructure cluster (selected sub-criteria)
SC97 Data organization
SC100 Data quality
SC103 Solutions tested before investing
SC101 Digitalization
SC114 Platforms that provide access to reduced development costs
Table 12. Matrix with neutrosophic values: information technology infrastructure cluster
      | SC97 | SC100 | SC103 | SC101 | SC114
SC97  | – | 3 (0.8, 0.2, 0.2) | 3 (0.8, 0.5, 0.2) | 4 (0.8, 0.5, 0.2) | 0 (0.9, 0.5, 0.1)
SC100 | 3 (0.7, 0.2, 0.2) | – | 3 (0.8, 0.5, 0.2) | 4 (0.9, 0.5, 0.1) | 0 (0.9, 0.5, 0.1)
SC103 | 1 (0.8, 0.3, 0.1) | 2 (0.6, 0.5, 0.4) | – | 2 (0.8, 0.5, 0.2) | 3 (0.6, 0.6, 0.2)
SC101 | 4 (0.8, 0.5, 0.2) | 4 (0.8, 0.5, 0.2) | 3 (0.8, 0.5, 0.2) | – | 1 (0.6, 0.7, 0.5)
SC114 | 4 (0.5, 0.5, 0.5) | 4 (0.8, 0.5, 0.2) | 4 (0.8, 0.5, 0.2) | 4 (0.9, 0.5, 0.1) | –

Table 13. Direct-relation matrix: information technology infrastructure cluster
      | SC97 | SC100 | SC103 | SC101 | SC114 | Total
SC97  | 0.00 | 2.40 | 2.01 | 2.67 | 0.00 | 7.08
SC100 | 2.19 | 0.00 | 2.01 | 2.74 | 0.00 | 6.93
SC103 | 0.78 | 1.13 | 0.00 | 1.34 | 1.70 | 4.95
SC101 | 2.67 | 2.67 | 2.01 | 0.00 | 0.36 | 7.71
SC114 | 2.00 | 2.67 | 2.67 | 2.80 | 0.00 | 10.15
Total | 7.64 | 8.87 | 8.69 | 9.55 | 2.06 |

As Table 14 shows, SC114 has the most influence on the other SCs (i.e., R = 3.0746), while SC101 is the most affected by the other SCs in this cluster (i.e., C = 2.8633). SC101, however, stands out as the most prominent (i.e., R + C = 5.1937). SC114, in contrast, has an R + C value equal to 3.8218, so this factor is the least important because it has an extremely low C value (i.e., 0.7471) compared to the remaining SCs. Thus, the C2 SCs’ order of importance is as follows: SC101 > SC100 > SC97 > SC103 > SC114.

Table 14. Total-relation matrix T: information technology infrastructure cluster

SC114 is the only member of the causes group, with a positive R − C value of 2.3275. The remaining SCs of this cluster (i.e., SC97, SC100, SC103, and SC101) are part of the effects group. Figure 5 reveals the SCs’ position in terms of their most important connections. That is, SC114 is a driving factor, SC103 is an independent factor, and SC97, SC100 and SC101 are impact factors.

Fig. 5. Influence relationship map for information technology infrastructure cluster

C3 was analyzed next using the SCs selected as the most important (see Table 15). Table 16 is the matrix of neutrosophic values, and Table 17 presents the results of crispification. Table 18, in turn, characterizes the degree of influence attributed to the five selected SCs.

Table 15. Most significant criteria: know-how and knowledge cluster (selected sub-criteria)
SC67 Research on similar cases of artificial intelligence usage
SC66 Incorrect information about artificial intelligence
SC62 Lack of clear benefits from using artificial intelligence
SC72 Business questions determined via rapid ideation
SC65 Information sharing within the organization

Table 16. Matrix with neutrosophic values: know-how and knowledge cluster
     | SC67 | SC66 | SC62 | SC72 | SC65
SC67 | – | 2 (0.6, 0.5, 0.1) | 3 (0.9, 0.1, 0.1) | 4 (0.9, 0.5, 0.1) | 3 (0.7, 0.2, 0.3)
SC66 | 4 (0.9, 0.1, 0.1) | – | 3 (0.9, 0.1, 0.1) | 4 (0.9, 0.5, 0.1) | 2 (0.7, 0.2, 0.3)
SC62 | 3 (0.8, 0.2, 0.1) | 2 (0.5, 0.7, 0.5) | – | 4 (0.9, 0.5, 0.1) | 1 (0.7, 0.2, 0.3)
SC72 | 4 (0.9, 0.5, 0.1) | 3 (0.7, 0.3, 0.3) | 4 (0.9, 0.5, 0.1) | – | 4 (0.9, 0.5, 0.1)
SC65 | 3 (0.8, 0.2, 0.2) | 2 (0.6, 0.5, 0.1) | 3 (0.8, 0.2, 0.2) | 3 (0.8, 0.2, 0.2) | –

Table 17. Direct-relation matrix: know-how and knowledge cluster
     | SC67 | SC66 | SC62 | SC72 | SC65 | Total
SC67 | 0.00 | 1.25 | 2.70 | 2.80 | 2.19 | 8.94
SC66 | 3.60 | 0.00 | 2.70 | 2.80 | 1.46 | 10.56
SC62 | 2.48 | 0.85 | 0.00 | 2.80 | 0.73 | 6.86
SC72 | 2.80 | 2.10 | 2.80 | 0.00 | 2.80 | 10.50
SC65 | 2.40 | 1.25 | 2.40 | 2.40 | 0.00 | 8.45
Total | 11.28 | 5.45 | 10.60 | 10.80 | 7.18 |

Table 18. Total-relation matrix T: know-how and knowledge cluster
Table 18 confirms that SC66 is the most influential factor in this cluster, with a total R value of 4.4713, but SC72 has an only slightly lower value. SC67, in contrast, is the most affected by the other SCs in this cluster, with a C value of 4.6252. SC72 is again a close second, with a value of C equal to 4.5482, so the latter factor plays a prominent role because it is also influenced by the remaining SCs. Thus, SC72 is a more significant factor in the overall decision-support system, whereas SC65 is the least important SC in C3. The following order of importance was confirmed: SC72 > SC67 > SC62 > SC66 > SC65. In addition, Fig. 6 presents SC66 and SC65 as having a positive R − C value, which places them in the causes group, while the other SCs (i.e., SC67, SC62 and SC72) have a negative R − C value and belong to the effects group. The DEMATEL-diagram quadrants similarly reveal that SC66 and SC65 are driving factors, SC62 is an independent factor, and SC67 and SC72 are impact factors (see Fig. 6).

Fig. 6. Influence relationship map for know-how and knowledge cluster

The most important SCs from C4 are listed in Table 19. After crispification, this cluster’s matrix could be transformed—using neutrosophic values calculated in the second group work session (see Table 20)—into the direct-relation matrix shown in Table 21.

Table 19. Most significant criteria: organizational policies and management cluster (selected criteria)
SC88 Funding programs
SC63 Work in/with collaborative networks: ecosystems
SC90 Use of existing standards
SC94 Difficulty of evaluating the results obtained
SC85 Development of turnkey artificial intelligence projects

Table 20. Matrix with neutrosophic values: organizational policies and management cluster
     | SC88 | SC63 | SC90 | SC94 | SC85
SC88 | – | 2 (0.6, 0.3, 0.4) | 1 (0.8, 0, 0.2) | 0 (0.9, 0.1, 0.1) | 2 (0.9, 0.5, 0.1)
SC63 | 3 (0.8, 0, 0.2) | – | 1 (0.8, 0, 0.2) | 2 (0.9, 0.1, 0.1) | 2 (0.9, 0.5, 0.1)
SC90 | 1 (0.8, 0, 0.2) | 1 (0.8, 0, 0.2) | – | 2 (0.9, 0.1, 0.1) | 2 (0.5, 0.5, 0.5)
SC94 | 1 (0.8, 0, 0.2) | 3 (0.8, 0.1, 0.2) | 2 (0.7, 0.4, 0.3) | – | 3 (0.8, 0.3, 0.2)
SC85 | 1 (0.8, 0, 0.2) | 1 (0.8, 0, 0.2) | 1 (0.8, 0, 0.2) | 3 (0.9, 0.5, 0.1) | –

Table 21. Direct-relation matrix: organizational policies and management cluster
     | SC88 | SC63 | SC90 | SC94 | SC85 | Total
SC88 | 0.00 | 1.26 | 0.84 | 0.00 | 1.40 | 3.50
SC63 | 2.51 | 0.00 | 0.84 | 1.80 | 1.40 | 6.55
SC90 | 0.84 | 0.84 | 0.00 | 1.80 | 1.00 | 4.47
SC94 | 0.84 | 2.48 | 1.33 | 0.00 | 2.29 | 6.93
SC85 | 0.84 | 0.84 | 0.84 | 2.10 | 0.00 | 4.61
Total | 5.02 | 5.41 | 3.84 | 5.70 | 6.09 |

In Table 22, the SC with the greatest impact within C4 is SC94, as this factor has the highest R value (i.e., 4.1112). SC85 exhibits the highest C value (i.e., 3.6266), so it receives the most influence from the other SCs. SC94 stands out as having the highest R + C value in this matrix (i.e., 7.5106), so this factor is more prominent than the remaining SCs. SC88, in contrast, is the least important with an R + C value of 5.1763. The SCs’ order of importance in this cluster is confirmed by Fig. 7 (i.e., SC94 > SC63 > SC85 > SC90 > SC88). Based on this IRM, SC94 and SC63 are core factors, while SC90 is the driving factor. Finally, SC88 is an independent factor, and SC85 is an impact factor.

Table 22. Total-relation matrix T: organizational policies and management cluster

Fig. 7. Influence relationship map for organizational policies and management cluster
Table 23 lists the most significant SCs selected from C5 for analysis. Table 24 presents this cluster’s neutrosophic matrix. Table 25 then reveals C5’s direct-relation matrix after the values were subjected to crispification.

Table 23. Most significant criteria: leadership cluster (selected sub-criteria)
SC53 Leadership with knowledge about adaptation processes
SC24 Team motivation
SC51 Company leadership’s weak commitment
SC8 Correct perceptions of artificial intelligence uses
SC48 Leadership motivated to apply artificial intelligence

Table 24. Matrix with neutrosophic values: leadership cluster
     | SC53 | SC24 | SC51 | SC8 | SC48
SC53 | – | 4 (0.9, 0.5, 0.1) | 4 (0.9, 0.2, 0.1) | 3 (0.9, 0.3, 0.3) | 4 (0.9, 0.1, 0.1)
SC24 | 3 (0.7, 0.3, 0.3) | – | 3 (0.7, 0.3, 0.3) | 3 (0.9, 0.3, 0.3) | 2 (0.6, 0.5, 0.2)
SC51 | 4 (0.9, 0.1, 0.1) | 4 (0.9, 0.2, 0.1) | – | 2 (0.5, 0.9, 0.5) | 2 (0.6, 0.5, 0.2)
SC8  | 4 (0.9, 0.1, 0.1) | 4 (0.9, 0.2, 0.1) | 4 (0.7, 0.5, 0.2) | – | 4 (0.9, 0.1, 0.1)
SC48 | 3 (0.7, 0.3, 0.3) | 4 (0.9, 0.2, 0.1) | 4 (0.9, 0.2, 0.1) | 2 (0.5, 0.9, 0.5) | –

Table 25. Direct-relation matrix: leadership cluster
     | SC53 | SC24 | SC51 | SC8 | SC48 | Total
SC53 | 0.00 | 2.80 | 3.43 | 2.25 | 3.60 | 12.08
SC24 | 2.10 | 0.00 | 2.10 | 2.25 | 1.23 | 7.67
SC51 | 3.60 | 3.43 | 0.00 | 0.68 | 1.23 | 8.94
SC8  | 3.60 | 3.43 | 2.58 | 0.00 | 3.60 | 13.21
SC48 | 2.10 | 3.43 | 3.43 | 0.68 | 0.00 | 9.65
Total | 11.40 | 13.10 | 11.55 | 5.85 | 9.65 |

According to Table 26, SC8 has the greatest total effect on the other SCs, with an R value of 4.0115. SC24 receives the most influence from the other factors, with the highest C value of 3.9519. The totals of these variables’ R and C confirm that SC53 is easily spotted as the most important SC (i.e., 7.1543). The selected SCs can be ranked by order of importance as follows: SC53 > SC24 > SC51 > SC8 > SC48.

Table 26. Total-relation matrix T: leadership cluster

In the last step of this cluster’s analysis (see Fig. 8), SC53 and SC8 were allocated to the causes group (i.e., R − C > 0), and SC48, SC51, and SC24 were placed in the effects group. The conclusion was also reached that SC53 is a core factor, SC8 is a driving factor, and SC51 is an independent factor. Finally, SC24 is an impact factor.
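The five intra-cluster analyses reuse exactly the pipeline sketched above. As a usage illustration, again under the same assumed conventions, the leadership cluster's direct-relation matrix from Table 25 can be pushed through the same steps to obtain the per-criterion R, C, R + C, and R − C figures discussed in the text; the printed R + C ranking can then be compared with the order reported above (SC53 > SC24 > SC51 > SC8 > SC48).

```python
# Reusing the DEMATEL sketch on the leadership cluster (Table 25); labels follow Table 23.
import numpy as np

labels = ["SC53", "SC24", "SC51", "SC8", "SC48"]
Z = np.array([
    [0.00, 2.80, 3.43, 2.25, 3.60],
    [2.10, 0.00, 2.10, 2.25, 1.23],
    [3.60, 3.43, 0.00, 0.68, 1.23],
    [3.60, 3.43, 2.58, 0.00, 3.60],
    [2.10, 3.43, 3.43, 0.68, 0.00],
])

s = max(Z.sum(axis=1).max(), Z.sum(axis=0).max())      # largest row or column total
T = (Z / s) @ np.linalg.inv(np.eye(len(Z)) - Z / s)    # total-relation matrix
R, C = T.sum(axis=1), T.sum(axis=0)                    # influence exerted / received per criterion
for k, name in enumerate(labels):
    print(f"{name}: R = {R[k]:.4f}, C = {C[k]:.4f}, R+C = {R[k] + C[k]:.4f}, R-C = {R[k] - C[k]:.4f}")
```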
2022-12-30T00:00:00
2022/12/30
https://pmc.ncbi.nlm.nih.gov/articles/PMC9801159/
[ { "date": "2022/12/01", "position": 94, "query": "artificial intelligence business leaders" } ]
The Evolving Role of AI in Recruitment and Retention - SHRM
The Evolving Role of AI in Recruitment and Retention
https://www.shrm.org
[]
Intelligent algorithms excel at administrative tasks, freeing up recruiters to focus on relationship building and strategic hiring decisions. While AI handles ...
As organizations continue to face recruiting and retention challenges, human resource teams are increasingly looking to artificial intelligence to enhance their talent acquisition and management capabilities. In fact, between 35% and 45% of companies have now adopted AI in their hiring processes, with the AI recruitment sector projected to expand at a 6.17% compound annual growth rate from 2023 to 2030. This reliance on AI-powered solutions is highlighted in the recent WorkplaceTech Spotlight series conversation between host Guillermo Corea and Prem Kumar, CEO and co-founder of Humanly. Overview of AI’s Impact on HR Technology Evolution HR technology has undergone rapid evolution in recent years. The industry has seen a marked shift toward AI-driven recruitment and retention tools to address persistent talent shortage issues. Corea mentioned that “solving hiring and retention issues is key for any organization looking to reduce costs, boost efficiency, and improve diversity.” With talent acquisition more competitive than ever, AI promises to be a game changer in providing data-based assistance for critical HR decisions. This potential is reflected in the 38% of HR leaders who have already explored or implemented AI solutions to enhance process efficiency within their organizations. As the leader of an AI recruitment startup, Kumar provided insights into the transformative potential of intelligent algorithms to enhance hiring practices. The episode offers a timely look at AI’s expanding role in HR and the responsible integration of automation into existing workflows. Exploring AI’s Promise in HR As innovations continue to roll in, the question evolves into how AI-based solutions can optimize inefficient and biased human processes. Corea suggested that the focus should be on AI augmentation rather than the wholesale replacement of human recruiters. Kumar elaborated on how Humanly’s conversational AI platform screens and engages candidate pools based on skills, culture fit, and personalized interactions. By handling large volumes of applications and surfacing tailored matches for open roles, the technology saves recruiters’ time while connecting them to better candidates. “It allows the recruiters to spend more time building relationships with that shortlist of qualified candidates rather than going through hundreds of resumes,” Kumar said. The impact is substantial, with AI recruitment reducing cost-per-hire by as much as 30%. Benefits of AI for Recruitment Workflows Corea and Kumar explored several key advantages that AI brings to recruitment. These intelligent tools increase efficiency by automatically filtering viable candidates. According to Corea, “85% of employers that use automation or AI say it saves them time and increases their efficiency.” Even more compelling, 86.1% of recruiters utilizing AI reported that it accelerates the hiring process. Similarly, AI enhances decision-making by assessing candidates based on job skills rather than superficial attributes subject to human bias. Providing equitable opportunity is instrumental for diversity, Kumar said. Further benefits include constant improvement, updated algorithms based on new data, and platform integration for a unified view of potential hires. Best Practices for AI Implementation The responsible development and equitable use of AI underpin its effective implementation in HR environments. Corea and Kumar discussed essential practices for integrating intelligent algorithms into recruitment in an ethical, trust-based manner. 
Kumar advised clearly defining the role and expectations for AI tools rather than overpromising capabilities. “I often encourage people almost to write a job description for their AI,” specifying which tasks it will automate, he said. Transparency around the use of data and algorithms also builds user trust. Kumar suggested that “being very clear and upfront with candidates on how their data is being used” mitigates some of the ethical risks arising from AI. Anticipating the Future of AI in Talent Management AI holds much promise, but responsible design and adoption remain imperative. As Kumar outlined, technology alone cannot drive successful recruitment; rather, it should aid human decision-making. He advocated for AI that assists rather than replaces recruiters in evaluating and engaging with candidates. When AI is used ethically and focused on augmenting human capabilities, it can mitigate existing hiring biases. Continued innovation will likely expand AI’s capabilities to optimize future recruitment workflows further. However, success ultimately hinges on integrating human skill with machine learning in a trust-based partnership. Concluding Thoughts Talent recruitment and retention persist as a top business priority, and AI has emerged as an invaluable tool for 21st-century HR teams. Solutions like Humanly demonstrate the tremendous potential to address hiring inefficiencies, process biases, and resource constraints through automation and augmented intelligence. However, conscientious development and the use of AI underpin productive adoption. Constructive collaboration between ethical AI design and human-centered practices will shape the next frontier of data-driven yet candidate-focused recruitment. FAQs Can AI replace human recruiters? Artificial intelligence is not positioned to wholly replace recruiters; it should augment human capabilities. Intelligent algorithms excel at administrative tasks, freeing up recruiters to focus on relationship building and strategic hiring decisions. While AI handles screening and initial interactions, human insight remains essential for evaluating candidates. How can biases in AI recruitment tools be addressed? Bias mitigation requires a concerted effort across development and usage. Diversifying data science teams and carefully selecting inclusive training data helps produce fairer algorithms. Continual bias testing throughout the machine learning process is necessary, as is monitoring recruitment outcomes for uneven impacts on candidates. What are some examples of AI applications in recruitment? AI is transforming various recruitment workflows, including automating sourcing through job boards, screening applicants with chatbots, scheduling interviews via email assistants, analyzing interviews to reduce bias, and matching candidates to open positions based on skill sets. How can HR professionals stay ahead in the AI-driven HR landscape? As AI proliferates in HR tech, professionals must actively educate themselves on emerging tools, evaluate ethical vendors, examine how AI can enhance their capabilities, and invest in integrating responsible AI solutions focused specifically on existing recruitment pain points. — This article was written based on Episode 20 of the WorkplaceTech Spotlight. Thank you to Prem Kumar, CEO and co-founder of Humanly, for contributing to the conversation. Sources:
2022-12-01T00:00:00
https://www.shrm.org/labs/resources/the-evolving-role-of-ai-in-recruitment-and-retention
[ { "date": "2022/12/01", "position": 8, "query": "artificial intelligence hiring" }, { "date": "2023/01/01", "position": 7, "query": "artificial intelligence hiring" }, { "date": "2023/05/01", "position": 8, "query": "artificial intelligence hiring" }, { "date": "2023/06/01", "position": 7, "query": "artificial intelligence hiring" }, { "date": "2023/10/01", "position": 9, "query": "artificial intelligence hiring" }, { "date": "2024/03/01", "position": 8, "query": "artificial intelligence hiring" }, { "date": "2024/04/01", "position": 7, "query": "artificial intelligence hiring" }, { "date": "2024/05/01", "position": 8, "query": "artificial intelligence hiring" }, { "date": "2024/10/01", "position": 8, "query": "artificial intelligence hiring" }, { "date": "2024/11/01", "position": 7, "query": "artificial intelligence hiring" }, { "date": "2025/01/01", "position": 7, "query": "artificial intelligence hiring" }, { "date": "2025/06/01", "position": 6, "query": "artificial intelligence hiring" } ]
AI in Recruitment: How Artificial Intelligence Helps Hiring in 2025
AI in Recruitment: How Artificial Intelligence Helps Hiring in 2025
https://www.helioshr.com
[ "Jenna Bishop" ]
Discover how AI in recruitment is improving efficiency, reducing bias, and enhancing candidate engagement through advanced technologies and ...
Artificial Intelligence is a hot-button topic right now. In every industry, people are talking about how AI might impact their jobs. Will it make life easier? Harder? Will AI make them redundant? In fact, my colleague Katt Silver and I recently spoke at a local SHRM chapter meeting about this very topic. Human Resources professionals are having the same conversations, but with one big difference. We have been using AI for a long time now, especially in recruitment and talent management. Before we take a closer look at AI in recruitment, let's take a moment to look at the current state of play. How AI is transforming hiring Artificial intelligence in recruitment has become essential in recent years, with 87% of companies now using AI-driven tools to revolutionize their hiring process. The stakes couldn't be higher. With talent acquisition being more competitive than ever, companies are struggling to find qualified candidates while managing overwhelming application volumes. Traditional recruiting methods simply can't keep pace, which is why smart organizations are turning to AI recruitment technology to gain a competitive edge. The transformation is already underway. Companies using AI tools report reducing time-to-hire by up to 40% while achieving better candidate quality and dramatically improved diversity outcomes. But using AI effectively isn't just about adopting the latest technology. It's about understanding how to integrate artificial intelligence into your recruitment strategy in ways that enhance—rather than replace—human decision-making. What makes AI recruitment different from traditional hiring? AI in recruitment isn't simply automation with a fancy name. It's a fundamental shift in how organizations approach talent acquisition, combining machine learning algorithms with human expertise to create smarter, more efficient hiring processes. The AI recruitment market, valued at $661.56 million in 2023, is projected to reach $1.12 billion by 2030, reflecting the rapid adoption across industries. Unlike traditional recruitment methods that rely heavily on manual resume screening and gut instincts, AI-powered systems analyze vast amounts of data to identify patterns that predict candidate success. These systems can process thousands of job applications in minutes, extracting insights that would take human recruiters weeks to uncover. The technology excels at eliminating repetitive tasks that consume valuable recruiter time. AI recruiting tools can automatically screen resumes, schedule interviews, answer candidate questions through chatbots, and even predict which candidates are most likely to accept job offers. This frees up HR teams to focus on strategic relationship-building and cultural fit assessment. Currently, 65% of recruiters have implemented AI in their hiring processes, with the primary drivers being time savings (44% of recruiters) and improved candidate sourcing (58%). However, only 25% of companies are using generative AI for recruiting, indicating significant room for growth. What's particularly compelling is how AI recruitment software addresses human bias in hiring decisions. By focusing on skills, experience, and measurable qualifications rather than subjective factors, artificial intelligence helps create more equitable hiring outcomes. Companies like Unilever have seen 16% increases in diversity after implementing AI-driven recruitment systems. 
Machine learning algorithms improve over time, learning from successful hires to refine candidate recommendations Predictive analytics help forecast which candidates will thrive in specific roles and company cultures Natural language processing enables more sophisticated analysis of candidate communications and video interviews How leading companies are using AI to transform their recruitment process 1. Automated resume screening Gone are the days of manually sifting through endless stacks of resumes. Modern AI recruiting software can analyze thousands of applications simultaneously, identifying qualified candidates based on complex criteria that go far beyond keyword matching. Unilever's recruitment transformation serves as a perfect example. The global consumer goods company processes nearly 2 million job applications annually—an impossible task for human recruiters alone. Their AI-powered system reduced the entire hiring process from four months to just four weeks while saving over 100,000 hours of human recruitment time. The technology doesn't just look for obvious qualifications. Machine learning algorithms analyze patterns in successful employee profiles, identifying subtle indicators of potential success that human recruiters might miss. This approach has helped companies discover hidden talent and reduce the risk of overlooking qualified candidates due to non-traditional career paths. 2. Video interviewing enhanced by artificial intelligence Video interviews have become standard practice, but AI technology is taking them to the next level. Platforms like HireVue use artificial intelligence to analyze not just what candidates say, but how they say it—examining tone, word choice, and even facial expressions to assess communication skills and cultural fit. This isn't about replacing human judgment. Instead, AI-assisted video analysis provides recruiters with additional data points to inform their decision-making. Companies using these tools report that candidates selected by AI are 14% more likely to pass interviews and 18% more likely to accept job offers compared to traditional screening methods. The candidate experience improves significantly as well. Automated scheduling reduces back-and-forth communication, while AI-powered feedback helps unsuccessful candidates understand areas for improvement—something traditional recruitment processes rarely provide. 3. Predictive analytics for better hiring decisions Predictive analytics represents the next frontier in talent acquisition. Rather than simply evaluating what candidates have done, AI systems can forecast how they'll perform in specific roles and environments. Amazon has been particularly successful in this area, developing algorithms with functionality that not only predicts candidate success but also improves diversity outcomes. Their data-driven approach analyzes historical hiring data to identify patterns that lead to long-term employee satisfaction and performance. These systems consider factors human recruiters might not consciously evaluate: team dynamics, growth potential, likelihood of staying with the company, and alignment with organizational values. The result? Hiring decisions that are both more accurate and more equitable. 4. Chatbots that enhance candidate engagement Modern recruitment chatbots do far more than answer basic questions. They provide personalized candidate support throughout the entire hiring process, from initial application to final decision. 
Companies like Hilton have implemented AI-powered chatbots that can schedule interviews, provide application status updates, and even conduct preliminary screenings. This real-time support dramatically improves the candidate experience while reducing administrative burden on recruiting teams. The technology is particularly valuable for high-volume recruiting scenarios, where maintaining personal communication with every candidate would be impossible. AI agents ensure no candidate falls through the cracks while providing the responsiveness that modern job seekers expect. 5. Intelligent job description optimization Creating compelling job descriptions that attract the right candidates while maintaining legal compliance is more complex than it appears. AI tools can analyze successful job postings to identify language patterns that resonate with your target talent pool. These systems can suggest modifications to improve gender neutrality, reduce unintentional bias, and optimize keyword usage for better search visibility. Some platforms even predict application volumes based on job description elements, helping hiring managers set realistic expectations. Generative AI tools like ChatGPT have made it easier for recruiters to create multiple versions of job descriptions for different platforms and audiences, ensuring consistent messaging while maximizing reach. 6. Skills-based candidate assessment Traditional resume screening often misses candidates with non-linear career paths or transferable skills. AI-powered assessment tools focus on actual capabilities rather than just previous job titles or educational credentials. Companies like Google and IBM have embraced skills-based hiring supported by artificial intelligence. Their systems can identify candidates who possess the necessary competencies regardless of how they acquired them, whether through formal education, self-study, or practical experience. Breaking through the paper ceiling is particularly valuable for addressing skill shortages in emerging technology areas where traditional qualification requirements might exclude capable candidates. AI systems can evaluate coding abilities, problem-solving skills, and technical knowledge through practical assessments rather than proxy indicators. 7. Streamline onboarding with intelligent automation AI in recruitment extends beyond hiring into the onboarding process. Intelligent systems can automate administrative tasks, create personalized training plans, and provide new hire support through their first months on the job. Unilever's "Unabot" exemplifies this approach, using natural language processing to answer new employee questions and gather feedback about the onboarding experience. This ensures consistent support while freeing up HR professionals to focus on strategic relationship-building. Transform your recruitment strategy with smart AI implementation However, successful AI recruiting requires more than just purchasing software. It demands strategic thinking about how technology can enhance your existing recruitment process while maintaining the human connections that matter most to candidates. Helios HR can help you navigate this transformation with: Book a call with Helios HR today to discover how AI can transform your recruitment outcomes and give you the competitive advantage you need in today's tight talent market.
2022-12-01T00:00:00
https://www.helioshr.com/blog/ai-in-recruiting-pros-vs.-cons-of-hiring-with-artificial-intelligence
[ { "date": "2022/12/01", "position": 9, "query": "artificial intelligence hiring" }, { "date": "2023/05/01", "position": 12, "query": "artificial intelligence hiring" }, { "date": "2023/06/01", "position": 10, "query": "artificial intelligence hiring" }, { "date": "2023/10/01", "position": 11, "query": "artificial intelligence hiring" }, { "date": "2024/03/01", "position": 13, "query": "artificial intelligence hiring" }, { "date": "2024/04/01", "position": 10, "query": "artificial intelligence hiring" }, { "date": "2024/05/01", "position": 10, "query": "artificial intelligence hiring" }, { "date": "2024/10/01", "position": 11, "query": "artificial intelligence hiring" }, { "date": "2024/11/01", "position": 11, "query": "artificial intelligence hiring" }, { "date": "2025/01/01", "position": 14, "query": "artificial intelligence hiring" }, { "date": "2025/06/01", "position": 16, "query": "artificial intelligence hiring" } ]
AI in Recruitment: The 2025 Clear Guide to Everything You Need
AI in Recruitment: The 2025 Clear Guide to Everything You Need
https://www.oleeo.com
[ "Natasha Thakkar" ]
AI in recruitment is transforming the hiring process by improving speed, precision, and the candidate experience. Explore everything you need to know in ...
AI in recruitment is no longer a future trend but a major advance that is here to stay. As recruitment software evolves alongside advancing technology, AI has become essential for optimizing Talent Acquisition. Its ability to analyze data objectively, enhance efficiency, and automate repetitive tasks makes it a powerful tool for hiring teams. But how can you best leverage AI in recruitment to streamline workflows, and what challenges might arise? In this article, we’ll explore how AI Recruiting software is transforming the hiring process and how you can integrate it into your strategy for smarter, faster hiring. What is AI in Recruitment? AI in recruitment simply means the application of Artificial Intelligence to the recruitment process. You may have heard of AI being used to screen resumes, but that’s not all it can do. In fact, there are applications of AI for recruiting talent at every stage of the procedure. The great boon this provides for HR teams is that they no longer have to shoulder the burden of plowing through mundane, repetitive tasks. AI can take care of those and, in doing so, streamline the recruitment workflow and free up human recruiters to focus on what they do best, such as building and developing relationships. The AI in talent acquisition market size has grown rapidly recently, according to The Business Research Company. It is predicted to grow to $1.35 billion in 2025 at a compound annual growth rate (CAGR) of 18.9%, and $2.67 billion in 2029 at a CAGR of 18.6%. There’s only one conclusion: AI in recruiting is here to stay. In this article, we’ll take a look at why that is. The difference between artificial intelligence and machine learning: Artificial intelligence (AI) is a broad field of computer science focused on creating systems that can perform tasks requiring human-like intelligence, such as decision-making, problem-solving, and natural language processing. Machine learning (ML) is a subset of AI that enables systems to learn from data and improve their performance over time without being explicitly programmed. While AI covers a wide range of intelligent behaviors, ML specifically focuses on developing algorithms that recognize patterns and make predictions based on data. What Are the Benefits of AI in Recruitment? Using AI in recruitment comes with several advantages. Here are just a few of them: 1. Saves time 2. Reduces bias in the hiring process 3. Improves communication How is the AI in Recruitment Process Changing Hiring The most significant changes recruiters and HR professionals are seeing as a result of AI are to their quality of life. Automation is turning manual tasks that used to take hours into almost instant processes. Beyond efficiency, AI for Candidate Experience is also improving how organizations interact with job seekers, ensuring a smoother and more engaging hiring journey. Let’s explore some of the most significant ways AI is making recruitment easier and more effective. It’s a fast filtering system A 2025 statistic says the global It can promote diversity and inclusion in the workplace studies which It can help you to connect and engage with applicants Use of AI in Recruitment It’s time to explore some concrete examples of AI in recruitment. These are the tools and features that are available today and becoming ever more popular as the advantages of using them become clear. 1. Screening recruiting 2. Chatbots 3. Outreach 4. 
Text generation Examples of Successful AI Implementation in Recruitment Many leading companies have successfully integrated AI into their hiring processes to improve efficiency, candidate experience, and decision-making. Here are two notable examples: Unilever : Unilever transformed its hiring process by using AI-Powered Video Interviews and predictive analytics. By leveraging AI-driven assessments, the company reduced hiring time and increased candidate diversity, ensuring a more efficient and unbiased selection process. L’Oréal : L’Oréal implemented AI chatbots to Streamline Candidate Engagement and scheduling. They used AI For Candidate Experience , providing instant responses, improving efficiency, and allowing recruiters to focus on high-value interactions. What Are the Challenges of AI Recruitment? At this point, we introduce a note of caution. Yes, AI talent acquisition has a lot to recommend it, but there are a few potential drawbacks to AI, too: It can learn and replicate human biases One of the most important ethical considerations when it comes to artificial intelligence in recruitment is how to deal with biases that already exist in your system. The reason this can pose a problem is that one of the ways AI operates is by looking for patterns in existing data. It can be less accurate It needs lots of data for effective screening Best Practices for Integrating AI Tools into Your Existing Recruitment Workflow: A Quick Checklist You need a strategic approach to successfully incorporate AI into your recruitment process. Here are some best practices to ensure an easy integration: Define your goals: Identify the specific challenges AI can address, such as improving candidate sourcing, automating screening, or enhancing engagement. Choose the right AI tools: Select AI recruitment solutions that align with your existing HR tech stack and meet your organization’s unique hiring needs. Balance automation with human touch: Use AI for efficiency but maintain human oversight to ensure a personalized and fair candidate experience. Ensure ethical and unbiased AI usage: Regularly audit AI-driven decisions to detect and mitigate biases in screening and selection processes. Train your HR team: Provide training on how to use AI effectively, interpret AI-driven insights, and make informed hiring decisions. Monitor and optimize performance: Continuously analyze AI’s impact on your recruitment workflow and refine strategies for better results. Is AI in Recruitment Replacing Human Recruiters? Absolutely not! While AI is excellent at automating repetitive tasks, it’s not so good at cultivating relationships with potential applicants. That takes the human touch. Also, AI in recruitment can’t replicate the subtleties of human judgment. The sharp eye of a skilled recruiter remains invaluable when it comes to double-checking the AI platform’s suggestions. Enhance Your Hiring Process by Using AI in Recruitment AI in recruitment is transforming the hiring landscape, making talent acquisition faster, smarter, and more efficient. So, it’s no surprise that AI recruiting technology is one of the fastest-growing areas of HR today. The right AI Platforms for Recruitment Strategy can save time, improve candidate communication, and expand your reach to a diverse talent pool. At Oleeo, we bring years of expertise in AI-driven hiring solutions. Our advanced AI recruitment platform offers a fully customizable approach, allowing you to optimize your recruitment strategy to fit your unique business needs. 
Want to see AI in recruitment in action? Contact Us today to learn more. FAQs About AI in Recruitment What is the best AI recruiting tool? The best AI recruiting tool depends on your organization’s needs, but top options include platforms like Oleeo. These tools use AI to streamline candidate sourcing, automate screening, and enhance hiring decisions. Look for a solution that integrates well with your existing systems and supports AI in recruitment features like predictive analytics, chatbots, and bias reduction. How can AI in recruitment improve diversity and inclusion? How can organizations stay updated on AI recruiting trends? Organizations can stay updated on AI recruiting trends by attending industry conferences, subscribing to HR technology publications, and participating in webinars focused on AI in recruitment. Engaging with thought leaders and joining professional networks can also provide valuable insights. How does AI-driven internal mobility impact recruitment? AI-driven internal mobility enhances recruitment by identifying and promoting existing employees for new roles, reducing reliance on external hiring. By analyzing skills, experience, and career goals, AI helps match employees with internal opportunities, improving retention and workforce agility. This approach streamlines recruitment efforts, reduces hiring costs, and ensures a more engaged and motivated workforce.
2025-03-19T00:00:00
2025/03/19
https://www.oleeo.com/blog/how-is-ai-changing-recruitment/
[ { "date": "2022/12/01", "position": 13, "query": "artificial intelligence hiring" }, { "date": "2023/01/01", "position": 5, "query": "AI hiring" }, { "date": "2023/01/01", "position": 8, "query": "artificial intelligence hiring" }, { "date": "2023/02/01", "position": 5, "query": "AI hiring" }, { "date": "2023/03/01", "position": 5, "query": "AI hiring" }, { "date": "2023/05/01", "position": 9, "query": "artificial intelligence hiring" }, { "date": "2023/06/01", "position": 8, "query": "artificial intelligence hiring" }, { "date": "2023/10/01", "position": 8, "query": "artificial intelligence hiring" }, { "date": "2024/03/01", "position": 9, "query": "artificial intelligence hiring" }, { "date": "2024/04/01", "position": 9, "query": "artificial intelligence hiring" }, { "date": "2024/05/01", "position": 9, "query": "artificial intelligence hiring" }, { "date": "2024/10/01", "position": 9, "query": "artificial intelligence hiring" }, { "date": "2024/11/01", "position": 8, "query": "artificial intelligence hiring" }, { "date": "2025/01/01", "position": 8, "query": "artificial intelligence hiring" }, { "date": "2025/03/19", "position": 12, "query": "AI hiring" }, { "date": "2025/03/19", "position": 9, "query": "artificial intelligence hiring" }, { "date": "2025/06/01", "position": 5, "query": "artificial intelligence hiring" } ]
Artificial intelligence in hiring: Assessing impacts on equality - IFOW
Artificial intelligence in hiring: Assessing impacts on equality
https://www.ifow.org
[]
A comprehensive review of the equality challenges posed by the use of AI in hiring, and the pitfalls of available technical solutions.
The use of artificial intelligence (AI) presents risks to equality, potentially embedding bias and discrimination. Auditing tools are often promised as a solution. However, our new research, which examines tools for auditing AI used in recruitment, finds that these tools are often inadequate in ensuring compliance with UK Equality Law, good governance and best practice. The report provides evidence of the need for a more comprehensive approach than technical auditing to safeguard equality in the use of AI for hiring, which shapes access to work, and outlines first steps that could be taken at present to achieve this. The report also includes a prototype AI Equality Impact Assessment, which we plan to develop and pilot.
2022-12-01T00:00:00
https://www.ifow.org/publications/artificial-intelligence-in-hiring-assessing-impacts-on-equality
[ { "date": "2022/12/01", "position": 18, "query": "artificial intelligence hiring" }, { "date": "2023/05/01", "position": 19, "query": "artificial intelligence hiring" }, { "date": "2023/06/01", "position": 21, "query": "artificial intelligence hiring" }, { "date": "2023/10/01", "position": 19, "query": "artificial intelligence hiring" }, { "date": "2024/03/01", "position": 18, "query": "artificial intelligence hiring" }, { "date": "2024/04/01", "position": 20, "query": "artificial intelligence hiring" }, { "date": "2024/05/01", "position": 19, "query": "artificial intelligence hiring" }, { "date": "2024/10/01", "position": 18, "query": "artificial intelligence hiring" }, { "date": "2024/11/01", "position": 18, "query": "artificial intelligence hiring" }, { "date": "2025/01/01", "position": 19, "query": "artificial intelligence hiring" }, { "date": "2025/06/01", "position": 18, "query": "artificial intelligence hiring" } ]
AI in Recruiting | IBM
AI in Recruiting
https://www.ibm.com
[]
In addition to filling roles and meeting labor needs, today's AI hiring strategies aim to provide a positive candidate experience to all job applicants.
AI is redefining the recruitment process for businesses across industries by automating repetitive tasks, bolstering the candidate experience and enhancing efficiency. AI in recruitment is a major advancement that is continuing to evolve as advanced technologies, like artificial intelligence (AI) and automation, are implemented in every step of the recruitment process. Specifically, the integration of AI agents and assistants is streamlining the hiring process by engaging with candidates in real time and answering queries. With these new technologies, human resources (HR) teams are being reimagined in a way that drives human recruiters to build relationships and removes the burdensome daily tasks that historically slowed growth. By leveraging machine learning algorithms, AI can analyze vast amounts of data to identify suitable candidates who might be overlooked through traditional methods and to remove unconscious bias. It reduces human bias and increases diversity by focusing on skills and qualifications rather than demographic information. AI adoption in the field of talent acquisition is part of a larger push from executives to build out a different work strategy in the future. This approach employs the use of generative AI and automation capabilities. A recent IBM Institute for Business Value (IBM IBV) report, in partnership with Oracle, found that 27% of executives expect their future work strategy to generate more ROI. And while gen AI is still in its early days, the executives reported 35% more overall profitability relative to their competition.
2022-12-01T00:00:00
https://www.ibm.com/think/topics/ai-in-recruitment
[ { "date": "2022/12/01", "position": 19, "query": "artificial intelligence hiring" }, { "date": "2023/05/01", "position": 15, "query": "artificial intelligence hiring" }, { "date": "2023/06/01", "position": 11, "query": "artificial intelligence hiring" }, { "date": "2023/10/01", "position": 14, "query": "artificial intelligence hiring" }, { "date": "2024/03/01", "position": 15, "query": "artificial intelligence hiring" }, { "date": "2024/04/01", "position": 11, "query": "artificial intelligence hiring" }, { "date": "2024/05/01", "position": 16, "query": "artificial intelligence hiring" }, { "date": "2024/10/01", "position": 14, "query": "artificial intelligence hiring" }, { "date": "2024/11/01", "position": 10, "query": "artificial intelligence hiring" }, { "date": "2025/01/01", "position": 13, "query": "artificial intelligence hiring" }, { "date": "2025/06/01", "position": 14, "query": "artificial intelligence hiring" } ]
AI in Hiring | Emerging Issues | Sustainable Business Network ... - BSR
Sustainable Business Network and Consultancy
https://www.bsr.org
[]
Employers are increasingly automating the hiring process through the use of artificial intelligence (AI) tools.
What’s New: More and more, employers are using artificial intelligence (AI) tools to automate key parts of the hiring process. Hiring algorithms have gained popularity in recent years—across all industries and job types—as a result of employers’ desires to increase efficiency in the hiring process and improve the quality and, in some cases, diversity of candidates. Automation in hiring has become even more widely adopted as a result of COVID-19 and the pivot to remote work, which has made business increasingly dependent on virtual systems for previously in-person interactions such as job applications and interviews. The use of hiring AI will likely continue to proliferate as the economy recovers and companies enter a phase of new hiring. However, this deployment of AI may prove to be problematic, as algorithms can reproduce human bias in surprising and insidious ways. Automated delivery of online job ads has resulted in reproduction of gender and racial bias. One tech company’s internally developed software to screen candidates was trained with existing employees’ resumes and produced severe discrimination against women. In the interview process, use of facial recognition technologies can also lead to discrimination. In all of these cases, the issue lies with the development of the AI. When trained on the resumes of a company’s current employees, the algorithm reproduces the human bias that led to the company’s workforce being predominantly white and male—further entrenching systemic inequities. When algorithms aren’t trained on diverse data, they perpetuate biases against candidates from diverse backgrounds. Lawmakers are seeing this potential for discrimination and acting on it. A bill proposed by the New York City Council would require companies to disclose their use of technology in the hiring process. In addition, the bill would require vendors of the hiring software to conduct audits to ensure their tools do not discriminate. At a national level, in December 2020, 10 U.S. senators sent a letter to the Chair of the Equal Employment Opportunity Commission (EEOC) requesting the commission’s oversight on hiring technologies, noting their potential to “reproduce and deepen systemic patterns of discrimination.” While it may seem that shifting away from human-led decision-making can reduce prejudice, a comprehensive review of employment algorithms found that “predictive hiring tools are prone to be biased by default.” This is due to the inherent weakness of the workforce data on which the algorithms are trained, given that the data reflects the bias of the people that screened, hired, and promoted the workforce. This leads to a conundrum for companies, who are simultaneously seeking to improve efficiencies in hiring and meet goals related to diversity, equity, and inclusion, particularly as they face renewed calls to diversify their workforces following both the unequal impacts of COVID-19 and recent momentum around the global Black Lives Matter movement. Employers may see potential for AI to be trained to value candidates from protected classes (in the U.S., this includes race, color, religion, sex, national origin, disability, and genetic information). Indeed, some companies are developing hiring tech with the specific goal of eliminating bias. However, automating the process at all involves training AI to seek specific qualifications and attributes—the sum of which defines a standard expectation of what jobseekers should be and how jobseekers should present themselves in order to be hired.
As a result, even with government intervention, this may lead to discrimination against anyone that doesn’t—or can’t—meet the expectation.
2022-12-01T00:00:00
https://www.bsr.org/en/emerging-issues/ai-in-hiring
[ { "date": "2022/12/01", "position": 21, "query": "artificial intelligence hiring" }, { "date": "2023/03/01", "position": 7, "query": "AI hiring" }, { "date": "2023/05/01", "position": 20, "query": "artificial intelligence hiring" }, { "date": "2023/06/01", "position": 22, "query": "artificial intelligence hiring" }, { "date": "2023/10/01", "position": 21, "query": "artificial intelligence hiring" }, { "date": "2024/03/01", "position": 20, "query": "artificial intelligence hiring" }, { "date": "2024/04/01", "position": 22, "query": "artificial intelligence hiring" }, { "date": "2024/05/01", "position": 21, "query": "artificial intelligence hiring" }, { "date": "2024/10/01", "position": 20, "query": "artificial intelligence hiring" }, { "date": "2024/11/01", "position": 20, "query": "artificial intelligence hiring" }, { "date": "2025/01/01", "position": 22, "query": "artificial intelligence hiring" }, { "date": "2025/06/01", "position": 25, "query": "artificial intelligence hiring" } ]
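The BSR entry above explains how a screening model trained on a company's existing workforce reproduces historical bias. A minimal way to see the mechanism is a "similarity to past hires" scorer: if historical hires skew toward one profile, candidates who resemble that profile score highest even when no protected attribute is used directly. The features and numbers below are synthetic and purely illustrative; they are not drawn from any real system discussed in the article.

```python
import numpy as np

# Synthetic, purely illustrative resume features:
# [uses_power_verbs, in_alumni_network_of_current_staff, years_experience / 10]
# The historical hires all share one linguistic/background profile -- that is the inherited bias.
past_hires = np.array([
    [1.0, 1.0, 0.6],
    [1.0, 1.0, 0.8],
    [1.0, 0.0, 0.5],
    [1.0, 1.0, 0.7],
])

def fit_score(candidate, hires=past_hires):
    """Cosine similarity to the average past hire: a crude 'resembles our current workforce' model."""
    centroid = hires.mean(axis=0)
    c = np.asarray(candidate, dtype=float)
    return float(c @ centroid / (np.linalg.norm(c) * np.linalg.norm(centroid)))

# Two candidates with the same experience who describe their work differently
# and sit inside/outside the incumbent network.
looks_like_past_hires = [1.0, 1.0, 0.7]
equally_experienced_outsider = [0.0, 0.0, 0.7]

print(round(fit_score(looks_like_past_hires), 2))         # ~0.99
print(round(fit_score(equally_experienced_outsider), 2))  # ~0.46, despite identical experience
```

The scorer never sees a protected attribute, yet it penalises the second candidate because the training data encodes who was hired before; this is the "biased by default" behaviour the review quoted in the entry describes.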
2025 AI in Hiring Survey Report - Insight Global
2025 AI in Hiring Survey Report
https://insightglobal.com
[ "Emilie Skaug", "Alexandra Woodford", "Anna Morelock", "Patrick Glynn" ]
The survey explores how AI is being integrated into the hiring process and its future potential in recruiting and talent acquisition.
Nearly All Hiring Managers Use AI, But More Than Half Care if an Applicant Uses It With virtually all hiring managers reporting that they use AI in the hiring process, 54% also said they care if a job seeker applies using a resume or cover letter written by AI. The juxtaposition of these two data points reveals a friction point for people on opposite sides of the interview table. Perhaps, depending on the relevance to the role, hiring managers may view job seekers who use the tool as more tech savvy or resourceful. Some may not even give job seekers’ use of the tool a second thought. But others may see AI use in applying for a job negatively. To dig a little deeper, we asked our 4 million followers on LinkedIn a few more questions about job seekers using AI, and here’s what they told us. Hiring Managers: If you care that an applicant uses AI in the application process, why do you care?
2022-12-01T00:00:00
https://insightglobal.com/2025-ai-in-hiring-report/
[ { "date": "2022/12/01", "position": 22, "query": "artificial intelligence hiring" }, { "date": "2023/01/01", "position": 26, "query": "AI hiring" }, { "date": "2023/02/01", "position": 23, "query": "AI hiring" }, { "date": "2023/05/01", "position": 11, "query": "artificial intelligence hiring" }, { "date": "2023/06/01", "position": 12, "query": "artificial intelligence hiring" }, { "date": "2023/08/01", "position": 29, "query": "AI hiring" }, { "date": "2023/10/01", "position": 16, "query": "artificial intelligence hiring" }, { "date": "2024/01/01", "position": 33, "query": "AI hiring" }, { "date": "2024/03/01", "position": 16, "query": "artificial intelligence hiring" }, { "date": "2024/04/01", "position": 17, "query": "artificial intelligence hiring" }, { "date": "2024/05/01", "position": 15, "query": "artificial intelligence hiring" }, { "date": "2024/10/01", "position": 12, "query": "artificial intelligence hiring" }, { "date": "2024/11/01", "position": 17, "query": "artificial intelligence hiring" }, { "date": "2025/01/01", "position": 33, "query": "AI hiring" }, { "date": "2025/01/01", "position": 16, "query": "artificial intelligence hiring" }, { "date": "2025/04/01", "position": 14, "query": "AI hiring" }, { "date": "2025/06/01", "position": 25, "query": "AI hiring" } ]
AI in Hiring and Recruiting - BBB National Programs
AI in Hiring & Recruiting
https://industryselfregulation.org
[]
The Principles for Trustworthy AI in Recruiting and Hiring serve as a global baseline standard for the use of AI applications in recruitment and hiring.
The Landscape: Artificial intelligence (AI) is being discussed in nearly every business environment. The use of AI in various employment processes is advancing rapidly, with new products and services entering the market at an explosive pace and promising efficiencies and added value. But rapid innovation brings a challenge: regulatory requirements struggle to keep pace. In the recruiting and hiring process, where AI provides an aid to human decision making, how can you combine important technological innovation with a proactive approach to employment law regulations and compliance requirements? 99% of Fortune 500 companies rely on talent-sifting software; 55% of HR leaders in the U.S. use predictive algorithms to support hiring; and 46% of business leaders are using AI to make DEIB-related workforce decisions.
2022-12-01T00:00:00
https://industryselfregulation.org/incubator/ai-hiring
[ { "date": "2022/12/01", "position": 23, "query": "artificial intelligence hiring" }, { "date": "2023/05/01", "position": 30, "query": "artificial intelligence hiring" }, { "date": "2023/06/01", "position": 31, "query": "artificial intelligence hiring" }, { "date": "2023/10/01", "position": 26, "query": "artificial intelligence hiring" }, { "date": "2023/12/01", "position": 54, "query": "AI regulation employment" }, { "date": "2024/03/01", "position": 23, "query": "artificial intelligence hiring" }, { "date": "2024/04/01", "position": 30, "query": "artificial intelligence hiring" }, { "date": "2024/05/01", "position": 27, "query": "artificial intelligence hiring" }, { "date": "2024/10/01", "position": 27, "query": "artificial intelligence hiring" }, { "date": "2024/11/01", "position": 24, "query": "artificial intelligence hiring" }, { "date": "2025/01/01", "position": 28, "query": "artificial intelligence hiring" }, { "date": "2025/06/01", "position": 58, "query": "AI regulation employment" }, { "date": "2025/06/01", "position": 17, "query": "artificial intelligence hiring" } ]
Artificial Intelligence (AI) Hiring Technology and Disability ...
Artificial Intelligence (AI) Hiring Technology and Disability Discrimination
https://stpp.fordschool.umich.edu
[]
Do you have a disability? Are you applying for a job? What challenges do people with disabilities face with AI technologies in the hiring process?
Do you have a disability? Are you applying for a job? What challenges do people with disabilities face with AI technologies in the hiring process? Employers are increasingly using artificial intelligence (AI) to make hiring decisions. These AI tools go by many names, like predictive hiring, machine learning (ML), and algorithms. People with disabilities are employed at less than half the rate of people without disabilities and face systemic inequality when it comes to hiring. These disparities exist despite legal protections like the Americans with Disabilities Act (ADA). Without proactive efforts to make hiring practices and software inclusive, employers risk losing out on valuable talent while AI technology deprives people with disabilities of economic opportunities. STPP and community partner Detroit Disability Power have published a community fact sheet to help disabled applicants navigate these new hiring processes.
2022-12-01T00:00:00
https://stpp.fordschool.umich.edu/research/community-resource/artificial-intelligence-ai-hiring-technology-and-disability
[ { "date": "2022/12/01", "position": 27, "query": "artificial intelligence hiring" }, { "date": "2024/08/28", "position": 38, "query": "AI employment" } ]
Artificial Intelligence, Hiring Practices & EPLI - Brown & Brown
Artificial Intelligence, Hiring Practices & EPLI
https://www.bbrown.com
[]
Proponents of using AI software in the hiring process say that it allows employers to process more resumes and applications faster and helps eliminate the risk ...
Artificial intelligence (AI) has an increasing presence in the workplace as technology continues to evolve, offering new methods to ease administrative burdens for employers. Employers now have a wide variety of data-driven decision-making tools to assist them in making employment decisions, such as recruitment, hiring, promotions and dismissals. Despite the many benefits of AI, employers may inadvertently run the risk of violating existing federal and state civil rights laws that could lead to employment discrimination claims. Artificial Intelligence and the Hiring Process: According to a survey conducted by the Society for Human Resource Management (SHRM), around 79% of employers were projected to use AI software in recruitment and hiring [1]. Companies may rely on various software tools to scan resumes for relevant work experience or venture onto the web, monitoring sites like LinkedIn for potential applicants the company can target directly. Some employers use virtual assistants or chatbots to screen job candidates for their qualifications, rejecting those who do not meet set requirements. Hiring platforms like HireVue, a video interviewing platform, can evaluate candidates based on their facial expressions and speech patterns to determine if the applicant will be a good fit for the organization. Proponents of using AI software in the hiring process say that it allows employers to process more resumes and applications faster and helps eliminate the risk of biases that can occur during human interaction. However, without adequate safeguards, there are also risks that employers can encounter under federal and state employment statutes. Title VII of the Civil Rights Act of 1964 provides federal protections for employees and applicants against discrimination based on certain characteristics, including race, religion and gender. The Age Discrimination in Employment Act (ADEA) also prohibits age discrimination. However, an employer relying on AI to pre-screen applications could inadvertently disqualify an applicant based on a protected class. For example, an AI screening application could disqualify applicants outside of a specific geographic radius, which might inadvertently discriminate against a particular racial or ethnic group [2]. Potential Exposures for Employers: In response, several state and city governments have attempted to regulate the use of artificial intelligence in hiring procedures. Illinois and Maryland have enacted laws limiting the use of specific AI technologies in the workplace, and New York City began enforcement of its new legislation, Local Law 144, meant to regulate the use of AI-driven tools in hiring and promotion decisions [3]. The NYC law requires employers to have independent auditors annually check the AI software for any biases and to notify potential candidates that an automated system is being utilized to aid the hiring process [4]. While this law only applies to companies with workers within NYC, labor experts expect this to influence other states and municipalities. At least four other jurisdictions – California, New Jersey, Vermont and the District of Columbia – are also working on laws to regulate AI in hiring [5]. As laws continue to evolve, it can be expected that plaintiffs will find new avenues to challenge companies’ employment processes.
As part of its Artificial Intelligence and Algorithmic Fairness Initiative, the Equal Employment Opportunity Commission (EEOC) issued guidance regarding the use of AI in employment-related decision-making, assessing whether these procedures may have a disproportionately discriminatory effect. The EEOC addressed the potential liability employers could face when using algorithmic tools designed or administered by another entity. Because employers cannot rely on the vendor’s representations to escape liability from disparate impact, measures should be taken to prevent any inadvertent discrimination against job seekers and workers [6]. As more employers use AI to assist in their employment decisions, we expect to see more litigation which specifically addresses AI in the context of discrimination claims. Recently, the EEOC settled its first-ever discrimination lawsuit involving AI hiring software, reaching a $365,000 settlement deal with a tutoring company that allegedly used recruitment software that automatically rejected older applicants [7]. The parties filed a Joint Notice of Settlement in August 2023 in the U.S. District Court for the Eastern District of New York that is pending approval before the Court [8]. In another example, a class-action lawsuit [9] was filed in April 2023 against CVS Health Corp. in Massachusetts by an applicant who failed to get a job after completing an AI-assisted video interview using HireVue [10]. A similar class-action suit [11] alleging discriminatory practices on the basis of race, age and disability was filed in February 2023 in California against Workday on behalf of prospective employees who had failed to be hired as a result of an alleged discriminatory screening process [12]. While these lawsuits are still pending, the EEOC’s recent settlement brings to light the potential legal and monetary risks. How EPLI Can Help Limit Exposure: As more AI-related employment cases are filed, employers should understand the potential impact from both a business and insurance perspective. Defending any employment-related litigation can accrue significant defense costs and drain internal resources to address the matter. If news of an AI-related employment lawsuit is broadcast on media outlets, it can negatively impact a company’s brand and reputation. Employers seeking to mitigate risks presented by the threat of AI-related litigation can utilize Employment Practices Liability insurance. Leading EPL insurers are now adjusting their underwriting protocols to acknowledge how and where companies are using AI in their business practices. As the prevalence of AI software expands, companies renewing their EPL policies should anticipate additional questions related to the use of AI. Recent discussions with several top EPL carriers suggest that maintaining indemnity agreements with outside AI software vendors does not, by itself, insulate companies from legal action related to the use of AI software in the employment process. As the EPL insurance space continues to monitor court filings addressing the use of AI in hiring, employers should assess the effects of these cases and consider the scope of their insurance policies to help address such litigation. Brown & Brown’s Risk Solutions Practice offers innovative strategies to companies seeking to address this area of concern with comprehensive and tailored insurance coverage solutions.
2022-12-01T00:00:00
https://www.bbrown.com/us/insight/artificial-intelligence-hiring-practices-epli/
[ { "date": "2022/12/01", "position": 31, "query": "artificial intelligence hiring" }, { "date": "2023/05/01", "position": 38, "query": "artificial intelligence employers" }, { "date": "2023/05/01", "position": 40, "query": "artificial intelligence hiring" }, { "date": "2023/06/01", "position": 40, "query": "artificial intelligence employers" }, { "date": "2023/06/01", "position": 41, "query": "artificial intelligence hiring" }, { "date": "2023/09/01", "position": 37, "query": "artificial intelligence employers" }, { "date": "2023/10/21", "position": 21, "query": "AI employment" }, { "date": "2023/10/21", "position": 13, "query": "AI hiring" }, { "date": "2023/10/01", "position": 41, "query": "artificial intelligence employers" }, { "date": "2023/10/01", "position": 34, "query": "artificial intelligence hiring" }, { "date": "2024/03/01", "position": 37, "query": "artificial intelligence employers" }, { "date": "2024/04/01", "position": 35, "query": "artificial intelligence hiring" }, { "date": "2024/05/01", "position": 34, "query": "artificial intelligence hiring" }, { "date": "2024/07/01", "position": 41, "query": "artificial intelligence employers" }, { "date": "2024/08/01", "position": 41, "query": "artificial intelligence employers" }, { "date": "2024/10/01", "position": 35, "query": "artificial intelligence hiring" }, { "date": "2024/11/01", "position": 36, "query": "artificial intelligence employers" }, { "date": "2024/11/01", "position": 31, "query": "artificial intelligence hiring" }, { "date": "2025/01/01", "position": 36, "query": "artificial intelligence hiring" } ]
The Importance of Artificial Intelligence in Recruitment | Infosys BPM
The Importance of Artificial Intelligence in Recruitment
https://www.infosysbpm.com
[ "Infosys Limited" ]
Recruiters can use AI-assisted automated recruiting to pursue a candidate-first strategy. They can make better hiring decisions by evaluating massive volumes of ...
Human Resource Outsourcing: The role of AI in the hiring process. Attracting and recruiting talent is one of the main concerns for companies that are competing for talent in the global market. But with each open post attracting thousands of applications, it has become challenging for HR managers to sort through all the applications and select the right candidates. About 52% of recruitment and talent acquisition specialists find screening candidates from the application pool the hardest part of the recruitment process. But if you can automate the screening step, or even other repetitive tasks in the hiring process, recruitment can become more efficient and effective. This is where artificial intelligence (AI) in recruitment and HR management comes into the picture, offering recruiters opportunities to disengage from repetitive screening tasks and focus on recruiting the best candidates.

AI for recruitment: Opportunities and challenges. AI for recruitment is the integration of AI solutions in the recruitment process, where machine learning models can help you shortlist ideal candidates, automate manual tasks, streamline the recruitment workflow, and free recruitment managers from repetitive high-volume tasks. The applications of AI in the hiring process also help you generate insights from the available talent pool, build screening tests to eliminate bias, and identify successful traits from candidates who have remained with the company. Integrating AI in the hiring process can bring many benefits and opportunities, including saving recruiters’ time on repetitive tasks so they can focus on the best candidates; better quality of hire via standardised job matching; improved interaction, communication, and engagement throughout the recruitment lifecycle; removal of bias from the recruitment process; and access to a larger talent pool to find suitable candidates for the job. But AI-powered recruitment is still evolving, and some of the challenges to look out for are higher upfront costs; the need for a lot of data for training and engagement; the potential to learn human bias from patterns in the existing recruitment process; potentially losing the human touch in recruitment; and scepticism and hesitation in adopting new technology.

How can AI transform your hiring process? So, you are considering implementing AI recruiting solutions. But how will they transform your hiring process? Automation and making smart decisions may save companies thousands of dollars per employee every year, in addition to improving work life. Therefore, it is not surprising that several HR departments and providers are attempting to capitalise on the advantages of AI. Recruiters can use AI-assisted automated recruiting to pursue a candidate-first strategy. They can make better hiring decisions by evaluating massive volumes of data from the job market and connecting with prospects on a frequent basis. Here are a few ways AI can transform your hiring process. Simplifying the sourcing process: Finding and attracting the right candidates for an open position is a struggle well-known to talent acquisition specialists. With AI-powered recruitment, you can scan online resumes and social profiles of potential candidates — at scale — to find a more diverse talent pool suitable for your open position. Enhancing employee experience: Once sourced, AI solutions can contact potential candidates and lead them through the recruiting funnel quickly. This ensures a smooth and positive candidate experience, which is a critical contributor to talent recruitment and retention. Boosting screening: As AI tools are always learning, they can boost your screening process by not just relying on the candidates that have applied for the position but also reviewing past applications to select the most suitable candidate. Moreover, you can train AI models to remove unconscious bias from the recruitment process to ensure an inclusive and diverse workforce. Assisting the interview process: Before inviting candidates for an in-person interview, AI in talent acquisition can help you check a candidate’s fit by offering job simulations, undertaking personality testing, and conducting a pre-set virtual interview. As a result, you can ensure that candidates not only meet the job requirements but will also fit your company culture. Improving the onboarding experience: AI-powered HR solutions go beyond screening and interviewing and extend to onboarding as well. Automating repetitive tasks — such as conducting background checks, putting together the paperwork, and getting employees started with the necessary documentation — helps employees begin work quickly and enables the HR department to focus on other critical tasks. For organisations on the digital transformation journey, agility is key in responding to a rapidly changing technology and business landscape. Now more than ever, it is crucial to deliver and exceed on organisational expectations with a robust digital mindset backed by innovation. Enabling businesses to sense, learn, respond, and evolve like a living organism will be imperative for business excellence going forward. A comprehensive, yet modular suite of services is doing exactly that. Equipping organisations with intuitive decision-making automatically at scale, actionable insights based on real-time solutions, anytime/anywhere experience, and in-depth data visibility across functions leading to hyper-productivity, Live Enterprise is building connected organisations that are innovating collaboratively for the future.

How can Infosys BPM help? Infosys BPM Recruitment Outsourcing solutions and services offer end-to-end solutions for your talent supply chain, from sourcing and screening to onboarding, including automated hiring system solutions custom designed to meet your recruitment goals.
2022-12-01T00:00:00
https://www.infosysbpm.com/blogs/human-resource-outsourcing/artificial-intelligence-role-in-hiring-process.html
[ { "date": "2022/12/01", "position": 44, "query": "artificial intelligence hiring" }, { "date": "2023/05/01", "position": 44, "query": "artificial intelligence hiring" }, { "date": "2023/06/01", "position": 44, "query": "artificial intelligence hiring" }, { "date": "2023/10/01", "position": 43, "query": "artificial intelligence hiring" }, { "date": "2024/04/01", "position": 44, "query": "artificial intelligence hiring" }, { "date": "2024/05/01", "position": 45, "query": "artificial intelligence hiring" }, { "date": "2024/10/01", "position": 44, "query": "artificial intelligence hiring" }, { "date": "2024/11/01", "position": 44, "query": "artificial intelligence hiring" }, { "date": "2025/01/01", "position": 45, "query": "artificial intelligence hiring" } ]
Best Artificial Intelligence Jobs in NYC, NY 2025
Best Artificial Intelligence Jobs in NYC, NY 2025
https://www.builtinnyc.com
[]
Top Artificial Intelligence Jobs in NYC, NY · Sr Lead Software Engineer - Platform Engineering · Senior Lead eSoftware Engineer - Java, Equities Trading.
2022-12-01T00:00:00
https://www.builtinnyc.com/jobs/artificial-intelligence
[ { "date": "2022/12/01", "position": 46, "query": "artificial intelligence hiring" }, { "date": "2024/02/01", "position": 12, "query": "generative AI jobs" }, { "date": "2025/03/01", "position": 74, "query": "artificial intelligence employment" }, { "date": "2025/05/01", "position": 70, "query": "artificial intelligence employment" } ]
10+ Best AI Recruiting Software for 2025: Expert Reviews + Pricing
10+ Best AI Recruiting Software for 2025: Expert Reviews + Pricing
https://www.selectsoftwarereviews.com
[ "Phil Strazzulla", "Hr Tech Expert", "Harvard Mba", "Software Enthusiast" ]
Indeed, many of the top AI recruiting tools can help source, screen, and hire top talent. Others are great at generating creative cold emails ...
Workable Review After nearly 2 years of closely watching Workable, we can confidently say that it’s a powerful platform that has evolved significantly over its decade in the recruitment tech market. With over 30,000 customers, Workable has solidified its reputation as one of the top AI-driven recruiting tools, thanks to its advanced automation workflows and a growing list of integrations. Many offer AI-powered job description features, but none have surpassed what we’ve seen in Workable. It doesn’t just stop at crafting first drafts of job postings based on the role title and requirements. It also allows users to adjust the tone (formal, friendly, or engaging), revert to previous versions, or even regenerate specific sentences to suit particular needs. Another feature that positions Workable a step ahead of platforms like Lever or JazzHR is the addition of a Salary Estimator. In our test, this provided us with a competitive pay range for attracting top talent. Workable’s AI-Recruiter tool for passive candidate sourcing, while not as comprehensive as some dedicated sourcing platforms we know, like Gem or Fetcher, is sufficient for SMBs. During our Standard Plan trial, we received 75 profile views, which allowed us to preview candidates and see why the system matched them to our job postings— all without relying on external apps (read: additional fees). We were equally impressed with the AI Screening Assistant. It provided both profile scores and a detailed summary of how well each candidate matched our job’s requirements, which made the whole screening process labor-saving. Our HR Tech editor tested Workable’s AI Screening Assistant It was also convenient to have everything under one roof with an AI video interview tool in-house. However, it’s worth noting that this feature, along with transcription capabilities (via third-party tools like Supernormal or BrightHire), comes at an additional cost. On the pricing front, Workable starts at $299 per month, which is steeper than competitors like Manatal ($15 per month) or Wellfound (free). This higher price point reflects its robust feature set. Still, we can see how budget-conscious teams might find this a bit much. Workable is also geared towards in-house recruiting, so we wouldn’t recommend it if you're part of a staffing agency. And for those who focus heavily on chatbot-driven candidate screening and communication, you might want to look elsewhere, such as Humanly or Paradox, which excel in that area.
2022-12-01T00:00:00
https://www.selectsoftwarereviews.com/buyer-guide/ai-recruiting
[ { "date": "2022/12/01", "position": 53, "query": "artificial intelligence hiring" }, { "date": "2023/01/01", "position": 8, "query": "AI hiring" }, { "date": "2023/02/01", "position": 8, "query": "AI hiring" }, { "date": "2023/03/01", "position": 8, "query": "AI hiring" }, { "date": "2023/06/01", "position": 6, "query": "AI hiring" }, { "date": "2023/06/01", "position": 53, "query": "artificial intelligence hiring" }, { "date": "2023/08/01", "position": 6, "query": "AI hiring" }, { "date": "2023/09/01", "position": 7, "query": "AI hiring" }, { "date": "2023/10/01", "position": 52, "query": "artificial intelligence hiring" }, { "date": "2023/12/01", "position": 6, "query": "AI hiring" }, { "date": "2024/01/01", "position": 6, "query": "AI hiring" }, { "date": "2024/03/01", "position": 24, "query": "artificial intelligence hiring" }, { "date": "2024/06/01", "position": 6, "query": "AI hiring" }, { "date": "2024/10/01", "position": 7, "query": "AI hiring" }, { "date": "2024/10/01", "position": 55, "query": "artificial intelligence hiring" }, { "date": "2024/11/01", "position": 56, "query": "artificial intelligence hiring" }, { "date": "2025/01/01", "position": 7, "query": "AI hiring" }, { "date": "2025/07/04", "position": 7, "query": "AI hiring" }, { "date": "2025/07/04", "position": 7, "query": "AI hiring" }, { "date": "2025/07/04", "position": 7, "query": "AI hiring" }, { "date": "2025/07/04", "position": 86, "query": "artificial intelligence hiring" } ]
The Role of Artificial Intelligence in the Hiring Process - Comeet
AI in Hiring: The New Rules for Candidates and Companies
https://www.comeet.com
[ "Osa Tunde-Oni" ]
AI can guide candidates through the hiring process, facilitate quick candidate screening and selection for recruiters or hiring teams, and ensure compliance.
The surge in AI usage within hiring has led to a quiet stalemate; companies are using it to assess candidates while candidates are using it to strategize and easily pass video interviews, talent assessments, and other screening approaches. Many recruiters now worry they’re talking to prompts instead of people – and they’re not exactly wrong. On the flip side, candidates worry they’re being judged by algorithms instead of humans – and they’re not exactly wrong either. The result? An unspoken arms race, where one side adds AI safeguards and looks for the polished sheen of AI language, and the other finds clever ways to “sound more human”. The challenge now isn’t stopping AI, it’s rethinking, restructuring, and even rebuilding a hiring process that can surface authenticity, experience, and personality, even in a world where both sides have AI-powered copilots. If you’re wondering how to do that, this article will help by reaching across both sides of the aisle and digging into how AI is used on both sides of the hiring process, and what you can do to move this unspoken AI battle closer to a peaceful agreement.

A Quick History of AI in Hiring: Just a few years ago, automation in hiring was mostly happening behind the scenes. With advancements in machine learning and natural language processing (NLP), AI was quietly helping those HR teams with large candidate pools and even larger budgets use Applicant Tracking Systems with more sophisticated resume parsing and candidate ranking systems. Fast forward to today, where AI has gone from the seen-but-not-heard helper to center stage in hiring – on both ends of the spectrum. Tools like ChatGPT, Google Gemini, and more aren’t just assisting recruiters anymore – they’re sitting in the corner of your candidate’s browser, helping them answer interview questions, polish resumes, and prep for video interviews. AI’s presence in recruitment started slowly and subtly, gaining traction in the early aughts as some HR tech vendors and Fortune 500 companies began testing new strategies to improve their hiring processes. As with anything new, it didn’t come without its bumps and bruises. For example, Amazon infamously scrapped its own AI recruiting tool in 2018 after discovering it was inherently biased against female candidates.

Where Do Candidates Come Into the Picture? Candidates weren’t really introduced to AI until it hit the mainstream when it exploded in popularity with the launch of the generative AI tool, ChatGPT. Over 100 million users tried it out within two months of launch in 2022, making it the fastest-growing consumer app in history! Since then and into today, candidates across the world are using it to generate or completely rewrite resumes, script pre-recorded video answers, and prepare for live interviews with mock Q&As. And so the unspoken AI hiring battle began.

How Candidates Are Using AI and What You Can Do to Prepare: AI is the new video interview wingman or, if you will, the new-age Swiss Army knife for job seekers. According to a recent study, approximately 65% of job seekers will use AI at some point in their application process, whether that be to optimize their resumes, match skills, or even automate job applications – and that’s just the beginning. Generative AI tools have become a regular part of most candidates’ toolkits, helping candidates navigate landing interviews and completing them with answers, coaching, and real-time support during the process.
The reality: most candidates believe they are less likely to get a job without using AI. It’s so popular and common that a simple Google search will pull up several cases where candidates admit to using AI to translate technical jargon into easy-to-understand language, research company strategy and product messaging, write press releases or copywriting assignment drafts, and generate potential interview questions and responses.

Other Common AI Use Cases For Candidates: AI-Generated Resumes & Cover Letters: Tools like ChatGPT and Resume.io can craft custom, applicant-tracking-system-optimized resumes in seconds. A survey from Capterra found that over 40% of candidates are using AI to write and refine their resumes. Prepping for Phone Screens & One-way Video Interviews: Candidates can copy and paste screening questions directly into AI tools to get polished responses, sometimes in real time during one-way video interviews, and even during live video interviews if the candidate is bold. Practicing and Participating in Live Interviews: Not only are candidates pasting screening questions directly into AI tools to get polished responses, but interviewers and talent acquisition leaders are publicly sharing their experiences of getting that “uncanny feeling that something is off”. Candidates also use it to create mock interview questions based on the industry or role, refine STAR (Situation, Task, Action, Result) assessment responses, and even simulate technical whiteboard challenges. These are just a few examples of how rampant the use of AI is for job seekers, and with every passing minute, candidates discover new workarounds and ways to keep up with employers. So the question remains…

How Should Companies Approach Candidates Using AI? Here’s the bottom line: if we’re being completely realistic, trying to catch or ban AI use for applicants is a losing battle. Candidates are using it in smart and, more often than not, ethical ways. So, instead of treating AI as cheating, organizations need to shift their hiring processes, especially in the screening and selection stages, to reward authenticity and lived professional experience. Here are some strategies we’re using here at Spark Hire to stay ahead of candidates using AI unethically or creating a false narrative about themselves. Set expectations early: Add clear disclaimers letting candidates know you welcome preparation through the use of AI, but that you expect answers in their resumes and interviews that reflect their real experiences – because you will be checking. Ask behavioral interview questions that AI can’t fake or will have problems generating: Be ultra-specific. Ask about roles, timelines, companies, and outcomes. A good example question might be, “Tell me about a project you led at [Company Name] where you had to overcome [specific challenge].” This question is a lot harder to prepare an AI-generated answer for. Incorporate follow-up questions and throw a few curveballs: AI-generated responses tend to fall apart when you go off-script or ask for personal thoughts, ideas, and feelings.
They also have the added benefit of surfacing surprising insights about your candidate’s personality that allow you to establish clear winners as well as candidates not well-aligned with your company values. For example, a question such as, “What’s something you learned recently outside of work?” would work well as a curveball, and “Tell me more about [previously discussed topic]?” as a follow-up. AI is evolving at breakneck speeds, and your candidates are evolving along with it, but at the end of the day, it’s just another hiring tool – Like Grammarly or Google. The real winners in the arms race are hiring teams and talent acquisition professionals who aren’t afraid or frustrated by AI use but understanding of it, and working to design interviews and assessment strategies that surface who candidates are beyond the AI. How Organizations Are Using AI Hiring Tools And What Candidates Should Know More and more hiring teams are implementing AI in their hiring processes. According to the 2025 LinkedIn Future of Recruiting Report, 70% of talent acquisition professionals are integrating or experimenting with generative AI to improve their hiring efficiency. Nowadays, as a candidate, if you’re applying for a role at a Fortune 500 organization, you’re more likely to encounter AI in the interviewing process than not. From resume screening to job description writing, AI has quickly embedded itself into nearly every stage of modern hiring. Remember when we said candidates are mainly using AI ethically? Well, so are recruiters so it’s equally important that candidates acquaint themselves with how organizations are using it to better prepare. Here are a few of the most popular company-implemented AI use cases you’ll run into: AI-Powered Resume Screenings and Rankings: The average applicant tracking system (ATS) comes pre-loaded with AI-powered resume parsing and scoring tools to help recruiters surface top matches within seconds. The average recruiter without AI spends about 23 hours a week screening resumes for a single hire – AI cuts that time dramatically. Automated Candidate Outreach and Scheduling: Chatbots and AI assistants help recruiters manage the early touchpoints with you. It’s used to send interview or assessment reminders and schedule interviews automatically. Writing Job Descriptions: Generative AI tools are used to optimize job posts for inclusivity, remove biased language, and tailor descriptions to specific candidate profiles. A study by ZipRecruiter found that job listings with gender-neutral wording get 42% more responses, meaning AI is helping create a more inclusive hiring process. The smartest hiring teams aren’t just throwing AI at their problems, they’re using it to create an efficient and fair hiring process. Again, for those organizations using AI ethically, you can expect them to be: Regularly auditing their AI tools for bias. Keeping the human in human resources – human review cannot be removed. Communicating with you on exactly how and where AI is used. AI is permanently reshaping how both sides of the hiring table are operating, but candidates should know that its goal is to help hiring teams spend more time with the right candidates, not less. It’s here to enhance the human connection and not replace it. So keep your chin up and remember, authenticity wins everytime! Candidates: Are you in need of practical tips to better prepare for and succeed at video interviewing, assessments, and AI usage in all of the above? Check out our candidate boot camp! 
Where Does AI in the Hiring Process Go From Here? We’ve established that AI in hiring is here to stay, but in a world where everyone has access to the same tools, the playing field can get crowded and loud – FAST. So, how do you “win”? Well, the companies and candidates who will win the unspoken AI battle aren’t the ones fighting it or just using it as a temporary workaround. They’re the leaders and innovators who are working with it while staying unmistakably human every step of the way.

What Does This Mean for Hiring Teams? You can only succeed if your screening and selection process expects every candidate to use AI. You need to design it to surface what AI can’t fake, such as real-world experience (what have they lived through?), authentic stories (what makes them tick?), emotional intelligence (how do they move through the world?), and problem-solving and critical thinking (what happens when they face a challenge they can’t solve with a prompt?). Think less about “we need to catch them” and more about “how do I show them that showing me the real version of themselves will be rewarded?”

What Does This Mean for Candidates? AI can help candidates get their foot in the door, but it won’t close the deal. Unfortunately, it’s becoming commonplace that every resume looks the same. A 2025 study by resume.io identified that nearly half (49%) of AI-generated resumes are automatically dismissed because they don’t “feel authentic”. We’re not saying you shouldn’t use AI, but you also can’t forget to share your unique story, share specific professional and personal experiences, and connect like a human. Strategize and prep with AI, but win with yourself.

Resources to Help Organizations Better Understand and Navigate AI in Hiring: Looking to sharpen your hiring strategy in an AI-driven world, especially in the screening and selection stages of your hiring process? Good news: we have THE comprehensive guide for you! Looking to join the AI-driven hiring world – how can’t you? Drop into The Ultimate Guide to Hiring Software, which includes all you need to know before you pump that gas pedal up to 100 on AI. Candidates and job seekers – looking to prep for an interview or assessment? Check out our candidate boot camp for tips and guides to help you better understand why employers choose the hiring tools and processes they do, and how you can prepare to tackle each one.
2025-04-14T00:00:00
2025/04/14
https://www.comeet.com/resources/blog/the-role-of-artificial-intelligence-in-the-hiring-process
[ { "date": "2022/12/01", "position": 54, "query": "artificial intelligence hiring" }, { "date": "2023/03/18", "position": 14, "query": "artificial intelligence employment" }, { "date": "2023/03/18", "position": 2, "query": "artificial intelligence hiring" }, { "date": "2023/05/01", "position": 50, "query": "artificial intelligence hiring" }, { "date": "2023/06/01", "position": 51, "query": "artificial intelligence hiring" }, { "date": "2023/10/01", "position": 54, "query": "artificial intelligence hiring" }, { "date": "2024/04/01", "position": 51, "query": "artificial intelligence hiring" }, { "date": "2024/05/01", "position": 52, "query": "artificial intelligence hiring" }, { "date": "2024/10/01", "position": 53, "query": "artificial intelligence hiring" }, { "date": "2024/11/01", "position": 54, "query": "artificial intelligence hiring" }, { "date": "2025/01/01", "position": 52, "query": "artificial intelligence hiring" } ]
Leverage Talent Acquisition & Artificial Intelligence (AI) - Jobvite
Leverage Talent Acquisition & Artificial Intelligence (AI)
https://www.jobvite.com
[]
When you embrace the power of AI in recruiting, you'll take a conscious step toward removing bias from your hiring process. AI can help you diversify your ...
Automate Candidate Engagement The worst thing you can do is ghost a candidate and let good talent walk due to human error. With Jobvite’s AI capabilities, you can automate candidate outreach and engagement, ensuring everyone is kept up to speed and informed on where they are in the hiring process.
2022-11-16T00:00:00
2022/11/16
https://www.jobvite.com/capabilities/talent-acquisition-ai-automation/
[ { "date": "2022/12/01", "position": 58, "query": "artificial intelligence hiring" }, { "date": "2023/05/01", "position": 54, "query": "artificial intelligence hiring" }, { "date": "2023/06/01", "position": 39, "query": "artificial intelligence hiring" }, { "date": "2023/10/01", "position": 55, "query": "artificial intelligence hiring" }, { "date": "2024/03/01", "position": 31, "query": "artificial intelligence hiring" }, { "date": "2024/04/01", "position": 59, "query": "artificial intelligence hiring" }, { "date": "2024/05/01", "position": 55, "query": "artificial intelligence hiring" }, { "date": "2024/10/01", "position": 60, "query": "artificial intelligence hiring" }, { "date": "2024/11/01", "position": 33, "query": "artificial intelligence hiring" }, { "date": "2025/01/01", "position": 38, "query": "artificial intelligence hiring" }, { "date": "2025/06/01", "position": 37, "query": "artificial intelligence hiring" } ]
Artificial Intelligence In The Hiring Process - Meytier
Artificial Intelligence In The Hiring Process; Are Women Disadvantaged?
https://meytier.com
[]
AI is alive and well within the hiring process. Many believe wholeheartedly that these computer systems will fill the gaps left by human bias and error.
Countless companies and organizations globally are integrating artificial intelligence programs into their business in some way. Whether it be candidate sourcing, hiring remote workers, facial expression analysis, or diversity hiring… AI is alive and well within the hiring process. Many believe wholeheartedly that these computer systems will fill the gaps left by human bias and error and make the hiring process easier, more efficient, and more equitable. This is not necessarily true. Though they are technology, computers, algorithms, and AI are all man-made. Just like anything else, an AI system will only be as sound, as fair, and as unbiased as the person who built it. So what does this mean for candidates looking for a job? Perhaps the most common use of technology in the hiring process is keyword matching, whereby a computer system matches key phrases and words from a resumé to a job description and picks out the candidates with the most overlap. There are countless resources online ranking the most important keywords to include in your resumé and how to integrate them. This is obviously a flawed system, as it wouldn’t necessarily pick out the strongest candidates but rather the ones that had beefed up their resumé with enough overlapping language from the job description. Many believed that integrating machine learning into these programs could solve this issue. AI systems have been built to “grade” resumés on various parameters and rank, match, and choose accordingly. In fact, some went so far as to predict that the computer hiring system would eventually be of a higher quality than a human hiring system, as it would account for implicit human biases in hiring and develop a “fairer” view of the candidate pool. Let’s look at Amazon. Machine learning specialists at Amazon set out to build a hiring model that would filter through their candidate pool and pick out the best applicants. They set out to do this by giving the machine data from the past ten years of resumés, allowing it to uncover patterns and traits amongst “strong” candidates. In theory, this was a good idea, but in practice it only proved that technology is a historically male-dominated field, and the models and markers of strength and success it identified would inevitably be traditionally male traits. According to sources at Reuters, after sifting through resumés that came overwhelmingly from men, Amazon’s system taught itself to penalize women applicants. The technology began to favor applicants who used words like “captured” and “executed” (power language that is traditionally found in men’s resumés) and to automatically downgrade applications that had female-gendered details (such as women’s colleges, “women’s basketball team”, etc.). Although Amazon maintains that it never solely relied on this program to hire, it simply stands as an example of the limitations and dangers of over-automating a process intended to hire humans. Many studies have indicated that there are fewer responses for women applicants. With AI driving selection, the results are inordinately influenced by the fact that women are under-represented in many functions and industries. Algorithms tend to replicate historical biases and those introduced by the preferences of the hiring managers. Artificial intelligence and other technology cannot transcend human bias; it will simply recreate it in new ways unless the makers of said technology take measurable steps to change that.
Men and women tend to represent themselves differently linguistically in their resumés: men usually have shorter resumés and will use more “powerful” language than women, using words like “create”, “founded”, etc., while women use more collaborative language, like “helped”, “co-founded”, etc. To take a history of resumés from a male-dominated field and workforce and mark them on supposed areas of strength will always put women in a less favorable position, as their resumés will not necessarily follow the same trends and guidelines as the resumés of their male peers. In addition to all of this, artificial intelligence research is itself deeply male-dominated. One study showed that the share of women in machine learning research was as low as 12%. Artificial intelligence that evaluates, judges, and identifies candidates can never be unbiased if the creators of such technology are a homogenous group. Women already get hired at lower rates than men (despite earning more bachelor’s degrees than them), and the gender gap and pay gap increase as a woman progresses through her career. If women’s resumés are getting counted out before a human gets to look at them, this problem will only grow. Companies often struggle to hire women and perpetuate the myth that simply not enough women are interested in or applying for technology jobs, when it is often their own hiring systems that count out women’s resumés before anyone even looks at them. Companies are made of humans, and they are trying to hire more humans. While AI can be an incredible tool to help deal with the massive quantities of applications coming in and to help filter candidates on unique parameters, it can only be a piece of the puzzle when it comes to evolving the way companies hire. About The Author: Rena Nigam is the Founder of Meytier. She can be reached at [email protected].
2022-12-01T00:00:00
https://meytier.com/blogs/artificial-intelligence-in-the-hiring-process
[ { "date": "2022/12/01", "position": 61, "query": "artificial intelligence hiring" } ]
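The Meytier piece above describes the most basic screening technique, keyword matching between a resumé and a job description, and notes that it rewards applicants who mirror the posting's language rather than the strongest candidates. Below is a minimal sketch of such an overlap score; it is illustrative only, not any vendor's implementation, and the job text and resumés are invented.

```python
import re

STOPWORDS = frozenset({"and", "or", "the", "a", "an", "of", "with", "in", "to", "for"})

def keywords(text):
    """Lowercase word set, minus trivial stopwords (keeps tokens like c++ or c#)."""
    return {w for w in re.findall(r"[a-z+#]+", text.lower()) if w not in STOPWORDS}

def overlap_score(resume, job_description):
    """Fraction of job-description keywords that also appear in the resume."""
    jd = keywords(job_description)
    return len(jd & keywords(resume)) / len(jd) if jd else 0.0

jd = "Seeking engineer with Python, SQL and stakeholder communication for data projects"
resume_stuffed = "Python SQL stakeholder communication data engineer projects"  # echoes the posting
resume_strong = "Led analytics team; built forecasting pipelines in Python and dashboards"

print(overlap_score(resume_stuffed, jd))  # high (~0.88) despite saying little
print(overlap_score(resume_strong, jd))   # low (~0.12) despite real substance
```

Because the score can be gamed by echoing the posting's wording, vendors layered machine-learning rankers on top, which in turn inherit the training-data problems the article goes on to describe.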
Artificial intelligence in hiring: Efficiency + fairness [2025] - Amperly
Artificial intelligence in hiring: Efficiency + fairness [2025]
https://amperly.com
[ "Priit Kallas" ]
Integrating artificial intelligence in hiring leads to cost savings and improved quality in recruitment. Companies can allocate resources more effectively, ...
Artificial intelligence in hiring is reshaping how banks find and select talent. If you're in HR, you know time is tight, and AI can help you sift through resumes and streamline communication. Let's dive into how AI is changing recruitment and what it means for you. You can find helpful resources on AI applications in human resources in this article on AI in HR examples, and you can learn more about AI's application in recruitment by checking out this article on AI applications in various sectors.

AI Applications in Recruitment

AI effectively identifies potential candidates: This technology analyzes large datasets to find the best talent for your bank. In fact, AI can process and evaluate thousands of resumes in a fraction of the time it takes a human, which can save hiring teams up to 50,000 hours of labor, as demonstrated by Unilever's streamlined hiring process. By implementing artificial intelligence in hiring, you improve efficiency and maximize your team's productivity.

AI automates routine tasks to boost efficiency: Automating tasks like resume screening means your HR team spends less time sifting through applications. Companies using AI in their recruitment processes see significant improvements, especially in identifying qualified candidates more effectively. Embracing artificial intelligence in hiring directly impacts your ability to attract the best talent while allowing your team to focus on strategic initiatives.

AI-Driven Screening Tools

AI-driven screening tools: AI can quickly evaluate resumes to identify candidates that meet specific qualifications. In fact, companies using AI in screening can reduce hiring times significantly, as shown by Unilever, which cut its hiring process from 4 months to just 4 weeks. You can see how this not only speeds up hiring but also helps eliminate human biases.

Recruitment using AI: AI enhances recruitment strategies by helping to target qualified candidates more effectively. This targeted approach has proven effective; job sites like Indeed and Glassdoor employ AI to reach users who are likely to fit job criteria based on their demographics and behavior. Implementing these tools means more resources are focused on finding the right applicants, ultimately improving your hiring outcomes with artificial intelligence in hiring.

Targeted Recruitment Strategies

Targeted recruitment strategies: AI significantly boosts the effectiveness of job ads. Platforms like Facebook and LinkedIn use AI to define a 'look-a-like' audience for job ads, reaching potential applicants who closely resemble your best candidates. The right targeting can increase the number of qualified applicants by up to 60%. Artificial intelligence in hiring helps companies find the best-fit candidates faster. AI not only identifies the right profiles but also enhances the ad's visibility to those who match the desired criteria. With AI-driven approaches, many companies experience improved engagement rates, creating more opportunities to recruit top talent.

Efficiency in recruitment: AI dramatically speeds up the hiring process, allowing human resource teams to focus on higher-level tasks. One example is Unilever, which has reduced its hiring time from 4 months to just 4 weeks by using AI for screening and assessing over 250,000 applications. By saving 50,000 hours of labor, teams can concentrate on improving the candidate experience and ensuring better hiring outcomes. Integrating artificial intelligence in hiring leads to cost savings and improved quality in recruitment.
Companies can allocate resources more effectively, ensuring that each application receives thorough attention without overwhelming staff. When AI handles the heavy lifting, it sets the stage for better-informed decisions and ultimately more successful hires.

Chatbots and Interview Automation

Chatbots and interview automation: Chatbots can really streamline the interview process, making things easier for HR staff. A study found that over 12 million interviewees have been screened by more than 700 companies using such technology. Incorporating artificial intelligence in hiring not only saves time but also provides deep insights into candidate responses.

Enhanced recruitment using AI: AI boosts recruitment efforts by targeting qualified candidates more efficiently. By leveraging social media ads, companies can reach applicants most suited for specific roles. Utilizing artificial intelligence in hiring allows for a significant increase in the number of qualified applicants.

Predictive Analytics in Hiring

Predictive analytics in hiring: You can use predictive analytics to assess a candidate's success in a role. Companies that adopt these methods can increase their hiring efficiency by up to 50%. Artificial intelligence in hiring offers tools to analyze data more thoroughly, giving HR more informed insights into who might excel in specific positions (see the sketch at the end of this section).

Targeted recruitment with AI: Using AI for targeted recruitment helps draw in qualified candidates more effectively. This practice can boost the number of potential applicants by over 30%. With artificial intelligence in hiring, you can analyze demographics, interests, and behavior to ensure your recruiting process reaches the best-fit candidates.

Ethical Concerns in AI Hiring

Ethical concerns in AI hiring: AI systems often repeat biases from past data. This has led to worries about fairness and transparency in hiring algorithms. For instance, 70% of hiring managers have rejected candidates based on information from their social media that they weren't legally allowed to ask about, which poses significant privacy issues.

Impact on recruitment efficiency: AI can greatly speed up hiring by automating many tasks. A noteworthy example is how Unilever cut its hiring time from 4 months to 4 weeks for reviewing over 250,000 applications using AI, saving around 50,000 hours of labor. This efficiency gained through artificial intelligence in hiring allows HR departments to focus on more strategic tasks instead of repetitive duties.

Compliance and Regulatory Issues

Compliance and regulatory issues: You need to know about the laws regulating artificial intelligence in hiring, like the Artificial Intelligence Video Interview Act in Illinois. This law requires companies to follow specific guidelines when using AI to analyze video interviews, making 65% of employers nervous about compliance. Understanding these legal frameworks is crucial, as non-compliance can lead to legal issues and damage your organization's reputation.

Automation in recruitment: AI significantly speeds up the hiring process, allowing you to focus on more strategic tasks. For example, companies like Unilever have cut their hiring time from 4 months to just 4 weeks by reviewing over 250,000 applications with AI, saving about 50,000 hours of labor. This efficiency means that you can recruit better candidates faster and increase your team's productivity.
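As promised above, here is a minimal sketch of predictive analytics applied to hiring. Everything in it is hypothetical: the features, the historical outcomes, and the model choice (a plain logistic regression) are stand-ins used only to show the shape of the approach, not the tooling the article describes.

```python
# A minimal sketch of "predictive analytics in hiring": fit a simple model on
# (entirely made-up) historical outcomes and score new applicants.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical columns: years_experience, skills_match (0-1), referred (0/1)
X_history = np.array([
    [2, 0.4, 0],
    [7, 0.9, 1],
    [5, 0.7, 0],
    [1, 0.2, 0],
    [9, 0.8, 1],
    [3, 0.6, 1],
])
# 1 = was hired and rated a successful hire, 0 = was not
y_history = np.array([0, 1, 1, 0, 1, 0])

model = LogisticRegression().fit(X_history, y_history)

new_applicants = np.array([
    [4, 0.75, 0],
    [6, 0.50, 1],
])
# Probability the model assigns to "would be a successful hire".
print(model.predict_proba(new_applicants)[:, 1])
```

The caveat from the ethical-concerns paragraph above applies directly: the model can only learn from whatever past decisions it is given, so any bias in those decisions is reproduced in its scores.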
Future Trends in AI and Hiring

AI is automating complex HR tasks: Artificial intelligence in hiring is transforming HR roles by automating various complex tasks, allowing HR professionals to focus on more strategic initiatives. This automation has already enabled companies to reduce hiring time significantly, as seen with Unilever reducing its hiring process from 4 months to just 4 weeks. As a bank employee, you can leverage this efficiency to enhance hiring outcomes and free up valuable time for other important projects.

New job roles focused on managing AI systems are emerging: The rise of artificial intelligence in hiring is not just about automation; it also creates new roles centered on managing these AI systems. For instance, with 47% of jobs at risk due to automation, understanding how AI operates may become essential for many professionals. Embracing these changes can help you stay ahead in your career while optimizing the recruitment process within your organization.

Challenges of AI Implementation

Limited data availability can hinder AI's effectiveness during the hiring process: When companies don't collect enough data on candidates, artificial intelligence in hiring can't make accurate predictions. This means AI might miss out on identifying great talent because it lacks sufficient information. Addressing this issue is key to improving the recruitment process and ensuring better hiring outcomes.

Organizations must evaluate the reliability of AI predictions and avoid over-relying on technology: It's critical for you to assess how much you trust the insights that artificial intelligence in hiring gives you. A study indicated that biases can unintentionally become part of the data used in AI, which can lead to flawed hiring decisions. As a bank employee, understanding the limitations of AI can help you maintain a balanced approach to hiring methodologies.

Balancing AI with Human Interaction

Balancing AI with human interaction: The combination of AI and human recruiters creates a balanced approach to hiring. Human judgment is still essential for tasks that require empathy and complex decisions. Your ability to connect with candidates creates a more engaging recruitment process.

Efficiency gains in hiring processes: Artificial intelligence in hiring speeds up recruitment dramatically. For example, Unilever reduced its hiring process from 4 months to just 4 weeks. This efficiency allows HR teams to focus their energy on strategic initiatives rather than routine tasks.

Artificial Intelligence Services

Artificial intelligence is reshaping how we hire. By applying AI services, companies can improve their recruitment processes, making them more efficient and effective. Let's look at three important AI services that can enhance your hiring strategies.

AI applications in recruitment automate crucial tasks in the hiring process. This service analyzes large datasets to identify potential candidates quickly. By automating tasks like resume screening, you increase efficiency and save valuable time. Discover how our AI services can help simplify your recruitment efforts.

AI-driven screening tools make qualifications easier to evaluate. This service assesses resumes for required skills and experience, ensuring you find the right fit faster. By reducing human bias in applicant filtering, you promote fairness and transparency. Learn more about our artificial intelligence consulting services to enhance your hiring process.

Predictive analytics in hiring provides valuable insights into candidates.
This service forecasts an applicant's potential success in a role based on historical data. Using data-driven decision-making, you can better match candidates to job requirements.

Ready to improve your hiring process? Check out our AI services today! You can get valuable information on AI tools for improving workplace efficiency in this article on AI tools for business.

Transform Your Hiring: Unlock the Power of AI for Recruitment Success

You've learned how transforming your hiring process with AI can make a real difference. To move forward, start by assessing how your current recruitment methods align with AI technologies. Identify the tasks that can be automated for better efficiency. Consider reaching out for expert support in integrating AI solutions tailored to your bank's needs. Feel free to contact us to discuss how we can help enhance your recruitment strategy and drive better results.
2025-06-04T00:00:00
2025/06/04
https://amperly.com/artificial-intelligence-in-hiring/
[ { "date": "2022/12/01", "position": 63, "query": "artificial intelligence hiring" } ]
AI-Driven Recruitment: The Before and After States of the Hiring ...
AI-Driven Recruitment: The Before and After States of the Hiring Process
https://www.carv.com
[]
Artificial intelligence can reshape recruitment and act as a catalyst for reimagining the entire hiring landscape.
The modern recruitment process falls short of ideal as it remains burdened with paperwork and administrative tasks that take up too much time, leaving little room for connecting with candidates and stakeholders. These inefficiencies, affecting both in-house and agency recruiters, might seem like a given or an inherent shortcoming of talent acquisition processes. However, the reality is that most of the challenges in modern recruitment processes can be solved by AI. Yet, to achieve this, we need to stop thinking of artificial intelligence as merely an automation tool, and see beyond obvious use cases. Using AI to simply automate process steps and move candidates throughout the recruitment funnel means negating its transformative potential.

Artificial intelligence has the potential to completely reimagine hiring, taking it from AI-assisted to AI-led. This transformation can be achieved with the help of AI recruiters, agents that work alongside their human counterparts in full synergy. With this approach, the focus in the recruitment process shifts away from the process steps to the outcomes and impact it drives for the business. So let's see how the recruitment process is changing, in practice, once AI is added to the mix.

Recruitment before AI: Process-focused, driven by automation

To make it easier to understand the impact of artificial intelligence on the hiring process, let's work with the stages of the recruitment process as we know them today. However, keep in mind that the shift towards AI-led autonomous recruitment might mean that some of these steps will no longer be necessary, or will no longer have the same weight in the overall recruiting experience.

Sourcing & attracting talent

In traditional recruitment, even in heavily automated processes, this first step always requires human intervention. Typically, the search for quality candidates involves browsing through talent pools, social media platforms, and sharing vacancies across various job boards. The talent acquisition team might use templates and automation tools to set up candidate outreach sequences, but this often ends up sabotaging their efforts, as candidates dismiss such approaches as spam. As outreach automation peaks, candidate engagement rates are expected to decline further. On the sourcing side, recruiters depend on a mix of talent pools, memory, ATS data, and standalone Excel sheets. However, when talent pools are built by several team members and there's little alignment between teams, and ATS data provides limited candidate details, finding suitable candidates becomes far more difficult than it should be.

To summarize, the main challenges in sourcing include:

- Inefficient search methods: Over-dependence on matching keywords can lead to inefficient talent discovery.
- Limited outreach precision: Traditional methods may not effectively target the most suitable candidates, leading to a broader yet less focused outreach.
- Time-consuming manual processes: The manual aspect of talent browsing and outreach can be time-intensive, reducing overall efficiency.

AI recruitment software can address these challenges, making sourcing fast and effective, and helping companies stand out in a crowded market to attract the best candidates.
Assessing & screening candidates

In the modern approach to candidate screening and assessment, the administrative workload involved in the process leaves little room for extended engagement with the right candidates. Especially for positions with a high volume of applications, recruiters swiftly skim through resumes, and assessments often adopt a one-size-fits-all approach, making it easy to overlook qualified candidates. Furthermore, due to the diverse range of candidates applying, recruiters frequently incorporate knock-out questions to filter out unsuitable applicants. Such questions are necessary but may not reveal much about the candidates. Additionally, information stored in the applicant tracking system often emphasizes these procedural questions rather than focusing on the candidate's actual skills, potential, or personality.

So, to sum it up, the main challenges in screening include:

- Time-consuming manual processes: Steps like resume screening and candidate shortlisting take a significant amount of time, hampering the overall efficiency of the recruitment process.
- Prone to human bias: Subjective assessments are susceptible to unconscious bias, influencing decision-making and potentially impacting the fairness of the selection process.
- Limited scalability: Manual screening becomes increasingly challenging to scale, especially in high-volume recruitment scenarios, potentially leading to oversight of qualified candidates.
- Missed potential in unstructured data: Traditional screening methods may overlook valuable insights present in unstructured data, such as candidate portfolios, interviews, or project work, limiting the understanding of a candidate's capabilities.

In addressing these challenges, the integration of AI technologies promises to bring about transformative changes, optimizing efficiency and promoting unbiased candidate evaluations.

Candidate interviewing and selection

The manual effort involved in the logistical aspects of scheduling interviews and communicating with candidates often consumes valuable time that could be better utilized in more strategic recruitment activities. While automation has successfully addressed some of the challenges in the interview process, synchronizing calendars across teams and rescheduling no-shows remain persistent issues faced by both in-house and staffing recruiters. Also, the pre- and post-interview administrative tasks, ranging from intake calls with hiring managers to transcribing candidate interviews and writing candidate profiles, represent a significant time commitment.

To summarize, the main challenges in interviewing include:

- Scheduling and syncing calendars: Repetitive tasks like scheduling interviews and coordinating calendars contribute to a significant time drain in current recruitment processes.
- Lack of standardization: In the absence of standardized criteria, current recruitment practices often lead to subjective and biased interviews, potentially filtering out top talent.
- Limited time for actual interviewing: Due to the time-consuming admin work before and after talks with job applicants, recruiters may struggle to conduct comprehensive interviews that delve into a candidate's true capabilities and potential.

While automation has brought notable improvements, the intricacies of scheduling and the need for standardized criteria underscore the importance of incorporating AI recruitment tools.

Decision-making & offer management

In the final stages of non-AI recruitment, decision-making and offer management encounter several challenges, mainly related to lengthy application processes that cause job seekers to withdraw, and to a lack of data-driven insights when making the hiring decision. Relying solely on candidate scoring lacks nuance, as scores do not account for a candidate's potential, feedback, and the assessment of transferable skills. While some tools aim to reduce subjectivity in scoring, when a recruiter passes a candidate to the hiring manager with only a score and no additional context, the hiring decision can become challenging and potentially flawed.

To sum it up, the main challenges in this phase include:

- Inconsistent criteria: Human recruiters may apply scoring criteria with variations. The absence of standardized benchmarks can result in divergent assessment approaches among recruiters, potentially introducing inconsistencies in candidate evaluations.
- Reliance on gut feeling: Without data-driven insights, the hiring decision often relies on human intuition, which may introduce potential bias. This reliance on gut feelings may lead to variations and inconsistencies in decision-making.
- Complex offer negotiations: The absence of data-driven insights into market benchmarks, candidate expectations, and budget constraints can hinder the optimization of offer management processes.

These hiring challenges collectively underscore the need for more streamlined, standardized, and data-informed approaches in the final stages of the recruitment process - and AI-powered tools can help.

AI-led recruitment: Impact-focused, driven by human-AI synergy

In an AI-driven reality, the recruitment process no longer needs to happen in a waterfall style, as AI is able to multitask, can execute tasks autonomously 24/7, and can work with both unstructured and structured data. This means that the recruitment team can hand over most of the admin work to their AI counterparts, and focus on what they're best at: building relationships with candidates, hiring managers, and agency clients. Although there are concerns regarding the replacement of human recruiters with AI technology, our belief is that artificial intelligence will be used as a tool to augment human decision-making, not replace it.
Ethical implementation and a focus on human-centric strategies are crucial for a successful AI-driven recruitment ecosystem. With this in mind, here's how implementing AI can transform each stage of the hiring process.

Sourcing & attracting talent

In AI-driven recruitment, sourcing shifts from keyword-based searches to always-on predictive matching based on job descriptions and requirements, candidate skill set, personality, and culture fit. AI recruiters or assistants navigate through talent pools 24/7, identifying ideal applicants and proactively engaging with potential candidates - and all of this happens with little to no human intervention.

The driver behind this always-on sourcing machine is the Talent Space, powered by graph databases. Without making this too technical, imagine a social network where new connections are recommended based on similar interests, group belonging, and so on. Each person is a node in the network, and there's no limit to the number of other "nodes" it can connect to. This is how networks like Facebook or LinkedIn work. In an AI-powered Talent Space, the nodes are not only candidates, but also their CVs, social media interaction data, interview recordings, assessment results, hobbies, values, and so on. All these data nodes, or talent mind maps, can be scanned by AI in real time, and the data can be automatically matched with the job requirements coming from hiring managers. At the same time, AI systems can debrief intake calls and launch targeted social media campaigns, reaching relevant applicants through their preferred communication channels, and engaging with them through contextually-enriched recruiting chatbots. The outreach messages can be highly personalized, as the AI can access all the information it finds when searching the Talent Space. This results in more streamlined sourcing initiatives, more diverse and qualified candidate pools, less time doing admin work, and overall more effective talent acquisition.

Candidate screening & assessment

AI can be used to streamline the screening and assessment phase too. Unlike non-AI screening, which works like a factory line, focusing on speed rather than candidate experience, AI-led recruiting gives the candidate the spotlight. Among others, AI-driven screening uses automated resume parsing and context extraction, skills-based assessments with bias mitigation algorithms, and video interviews with AI-powered analysis for deeper insights. Although this new way of working still relies on reading resumes, artificial intelligence systems can also go through candidate calls and video interviews, and can review the data collected from the Talent Space. Moreover, AI can create personalized assessments on the go, to mitigate the potential risks it sees when evaluating a candidate's profile. Instead of using only one data point and out-of-the-box assessments, AI can make the experience candidate-friendly, resulting in more flexibility and more focus on the candidate. The screening can be done in the background, automatically, with candidates engaging in their own time. This removes the randomness of standardized assessments, creating a truly candidate-focused hiring experience. The result is faster screening, fairer decisions, and deeper insights into candidate potential.
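The article does not spell out an implementation of the Talent Space described in the sourcing section above, so the following is only a minimal sketch of the underlying idea: candidates connected to attribute nodes, and a job matched by overlap with its requirement nodes. It uses a plain in-memory map rather than a real graph database, and every name and attribute in it is invented.

```python
# Illustrative stand-in for a graph-based "Talent Space": candidates and their
# attributes (skills, interests, interview signals) as connected nodes, with
# jobs matched by counting shared requirement nodes.
from collections import defaultdict

talent_space = defaultdict(set)

def add_candidate(name, attributes):
    """Connect a candidate node to each of its attribute nodes."""
    talent_space[name].update(attributes)

add_candidate("candidate_a", {"python", "sql", "fintech", "team lead", "amsterdam"})
add_candidate("candidate_b", {"java", "sql", "healthcare", "remote"})
add_candidate("candidate_c", {"python", "machine learning", "fintech", "remote"})

def match(job_requirements, top_n=2):
    """Rank candidates by how many requirement nodes they share with the job."""
    scored = [
        (len(talent_space[name] & job_requirements), name)
        for name in talent_space
    ]
    scored.sort(reverse=True)
    return scored[:top_n]

job = {"python", "fintech", "remote"}
print(match(job))  # -> [(3, 'candidate_c'), (2, 'candidate_a')]
```

A production system would add weights, relationship types, and continuous updates from new interactions, but the matching principle, walking the shared connections between a job's requirements and a candidate's neighbourhood, stays the same.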
Candidate interviewing and selection

In the interviewing phase, AI can be used to bring interview-related admin down to zero, freeing up recruiters so they can focus on candidates. While automation technology is useful for scheduling and streamlining pre- and post-interview candidate communication, it often lacks a human touch. For example, templated outreach emails that invite candidates to interviews often lack personalization, and out-of-the-box candidate rejection emails often lack empathy. Given that modern recruitment software can't handle unstructured data, that's where the automation stops. Artificial intelligence doesn't have this limitation. It can work with any type of data source - be it resumes, interviews, audio calls, or written messages.

In the pre-interview phase, AI can listen to intake calls, review candidate resumes and profiles, and draft interview questions. This gives recruiters more context about candidates, and helps them enter interviews with confidence. During the interviews, AI can assist with real-time transcription, as well as with note-taking. Both these tasks are very time-consuming when done manually, and they often distract the recruiter from the interviewee. With the help of AI, recruiters can focus on candidates instead of note-taking and typing. Finally, post-interview, AI recruiters - the assistants of human recruiters - can share interview summaries, highlight unanswered questions, or draft candidate profiles for the hiring managers. AI can ensure objective evaluations based on the candidate's skills, experience, and potential, and the actual role requirements.

As you can see, AI-powered interviewing software can automate these repetitive tasks, streamlining the workflow before, during, and after the interview. This gives recruiters their time back and improves both the candidate experience and the recruiter experience.

Decision-making & offer management

Finally, AI can support the decision-making process by removing bias and subjectivity, and ensuring a fair assessment. In staffing agencies and recruitment service providers, artificial intelligence can create commercial profiles for candidates, taking into consideration all the candidate data and context. This can help hiring managers make better decisions - however, AI should never take over the actual hiring decision. This is the only step in the entire recruitment process, along with the interview, that we believe should still be handled by humans. In the offer stage, AI can support with writing offer letters and negotiating with candidates, personalizing the communication to increase success rates.

Overall impact of AI-driven recruitment processes

As you can see from the above, AI-driven recruitment takes away the administrative work, bringing admin down to zero and freeing up TA teams so they can focus on building relationships with candidates and clients and shaping recruitment strategies to drive business outcomes. This has an impact on both stakeholders and recruitment metrics.

Impact on stakeholders

- Hiring managers: Improved efficiency, better hires, data-driven decision-making.
- Recruiters: Time-saving automation, deeper candidate insights, focus on strategic tasks.
- Candidates: Personalized experience, fairer selection process, faster feedback.
- Organizations: Higher ROI, stronger employer brand, improved talent retention.

Impact on recruitment metrics

- Efficiency: AI takes over admin tasks. Time-to-fill and time-to-hire metrics improve.
- Effectiveness: Data-driven insights lead to better talent selection and improved talent management. Candidate conversion rates, quality of hire, and retention rates improve.
- ROI: For staffing agencies and recruitment service providers, AI-led recruitment means faster placements and thus faster return on investment.
- Fairness: AI tools can mitigate bias and promote inclusive hiring practices. Candidate selection is no longer affected by subjective opinions.
- Candidate experience: Personalized interactions and faster communication create a positive candidate experience. Candidate satisfaction scores and employer ratings improve.

To conclude, artificial intelligence can reshape recruitment and act as a catalyst for reimagining the entire hiring landscape. AI goes beyond streamlining process steps; it has the power to shift recruitment from being AI-assisted to AI-led. The key lies in adopting AI agents that can work with unstructured data, and in treating these AI recruiters as helpful workmates - intelligent agents working seamlessly alongside their human counterparts. This collaborative synergy allows for a holistic transformation where the focus shifts from the intricacies of process steps to the broader outcomes and impact on the business.
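As a small, self-contained illustration of the efficiency metrics listed above, the sketch below computes time-to-fill and time-to-hire from hypothetical requisition records. The field names, dates, and the exact definitions of the two metrics are assumptions; organizations define them differently, so treat this as one common convention rather than a standard.

```python
# Minimal sketch: computing two recruitment metrics from invented requisition data.
from datetime import date

requisitions = [
    {"role": "Backend Engineer", "opened": date(2024, 1, 8),
     "candidate_applied": date(2024, 1, 20), "offer_accepted": date(2024, 2, 16)},
    {"role": "Recruiter", "opened": date(2024, 2, 1),
     "candidate_applied": date(2024, 2, 10), "offer_accepted": date(2024, 3, 4)},
]

for req in requisitions:
    time_to_fill = (req["offer_accepted"] - req["opened"]).days              # requisition opened -> offer accepted
    time_to_hire = (req["offer_accepted"] - req["candidate_applied"]).days   # candidate applied -> offer accepted
    print(f"{req['role']}: time to fill {time_to_fill} days, time to hire {time_to_hire} days")
```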
2022-12-01T00:00:00
https://www.carv.com/blog/ai-driven-recruitment-process
[ { "date": "2022/12/01", "position": 64, "query": "artificial intelligence hiring" }, { "date": "2023/05/01", "position": 63, "query": "artificial intelligence hiring" }, { "date": "2023/06/01", "position": 65, "query": "artificial intelligence hiring" }, { "date": "2024/02/12", "position": 26, "query": "AI hiring" } ]
Artificial Intelligence and the Hiring Process - Ronald P. Ackerman
Artificial Intelligence and the Hiring Process – California Employment Lawyer
https://lawofficeofronaldpackerman.com
[]
Internally, AI can be used to scan job applications, recruit, screen, hire, and onboard new candidates. AI tools are trained to recognize similar applicants ...
The use of Artificial Intelligence (AI) is becoming increasingly prevalent in the hiring process. It is important for workers to be aware of how this impacts them. This page provides information on how AI is being used in the hiring process, its impact on employers and workers, and state laws that have been enacted to protect workers.

What is Artificial Intelligence (AI)?

Artificial Intelligence is the use of computers and machines to mimic the problem-solving and decision-making capabilities of the human mind. AI is most used in industries such as transportation, healthcare, and finance. Since the pandemic, AI has been used in other industries like retail.

How is AI used in hiring?

AI can be used in several ways to expedite processes in the workplace. Internally, AI can be used to scan job applications, recruit, screen, hire, and onboard new candidates. AI tools are trained to recognize similar applicants that have been hired or denied employment in the past; the technology then uses those characteristics to identify applicants for a particular job. In addition, employers are using video interviews, chatbots, online assessments, and other screening tools to make certain hiring conclusions. Once a candidate has been identified, AI, in the form of chatbots, can be used to automatically reach out to an individual and determine whether the person is available to start on the employer's preferred timeline or whether the individual is open to commuting. With video interviews, companies can record answers to questions and upload them to a database for recruiters to later review and compare to answers from other applicants. They also use facial and voice recognition software to analyze body language, tone, and other factors to determine whether a candidate exhibits preferred traits. See this American Bar Association article for more information.

Why are employers using AI in the hiring process?

Involving AI in employment processes helps human resources professionals sort through many applications efficiently. According to the American Bar Association, using AI allows every resume to be screened and eliminates preferences and biases of recruiters. In addition, some AI services can also save time by analyzing publicly available data such as social media profiles, resumes, and other text-based data submitted by the applicant, eliminating the need for additional assessments.

What are the disadvantages of using AI in the hiring process?

Some argue that AI tools can retain subconscious biases through their design and could perpetuate additional barriers. For example, if an AI tool is fed resumes of people who have previously been hired by a company, and the recruiting departments making the hiring decisions harbored subconscious biases and preferences, those biases and preferences could be inherited by the AI tool. In addition, because AI tends to look at experience, it has the potential to overlook, deprioritize, or even reject qualified applicants. See this American Bar Association article for more information.

What federal law regulates AI in hiring?

There is currently no federal law that regulates the use of AI by private companies. However, this is an emerging area of discussion for some lawmakers. As the federal conversation progresses, we will keep you updated on any implemented laws. However, the use of AI in the hiring process has implications for Title VII of the Civil Rights Act of 1964, the Age Discrimination in Employment Act, and the Americans with Disabilities Act.
See this American Bar Association article for more information on how AI impacts these laws. See workplacefairness.org/discrimination for more information on discrimination. The Equal Employment Opportunity Commission launched an initiative on AI fairness to ensure that AI and other emerging tools used in hiring and other employment decisions comply with federal civil rights laws that the agency enforces. See their website for more information.

What state/city laws currently exist to address AI in hiring?

Illinois: In 2019, Illinois passed the Artificial Intelligence Video Interview Act. This law requires employers to disclose when they are using AI to analyze video interviews. Under the law, employers must: 1) notify applicants that AI may be used to determine the applicant's "fitness" for the position; 2) provide information before the interview on how the AI works and how it determines a candidate's "fitness"; and 3) obtain consent from the candidate to be evaluated by AI for the position. Furthermore, under this law, videos may not be shared, and applicants have the option to have their data deleted. See the act for more information.

Maryland: In 2020, Maryland passed HB1202, which prohibits an employer from using facial recognition technology during an applicant's interview for employment. However, an applicant may provide consent to the technology during the interview. See the law for more information.

New York City, NY: Effective January 1, 2023, New York City employers will be restricted in using AI machine-learning products in hiring or promotion. The law prohibits the use of such tools to screen a candidate or employee for an employment decision unless it has been the subject of a "bias audit" no more than one year prior to its use. A bias audit is an evaluation by an independent party testing the AI tool's bias based on race, sex, national origin, etc. As a result of the law, employers must maintain the results of the audit on their website. Furthermore, employers interested in using such tools must first notify each candidate or employee who resides in New York City that an automated employment decision tool will be used in connection with an assessment or evaluation of the individual. See the law on automated employment decision tools for more information.

How can I file a complaint if I believe I was discriminated against during the hiring process by an AI system?

See the filing a discrimination claim page for more information on filing a discrimination complaint. If you would like to contact an attorney, see the find an attorney page for more information.
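To give a feel for what a bias audit looks at, here is a highly simplified sketch that computes selection rates by group and each group's impact ratio relative to the highest-rate group. The group names and numbers are invented, and an actual audit under the New York City law has specific statistical and reporting requirements that this toy example does not capture.

```python
# Highly simplified sketch of one statistic a bias audit might report:
# the selection rate of each group relative to the highest-rate group.
screening_outcomes = {
    # group: (candidates screened in by the tool, total candidates in group)
    "group_a": (120, 400),
    "group_b": (45, 300),
}

selection_rates = {g: selected / total for g, (selected, total) in screening_outcomes.items()}
highest_rate = max(selection_rates.values())

for group, rate in selection_rates.items():
    impact_ratio = rate / highest_rate
    print(f"{group}: selection rate {rate:.2f}, impact ratio {impact_ratio:.2f}")
    # A ratio well below 1.0 (commonly flagged below 0.8) suggests the tool
    # screens this group in at a substantially lower rate.
```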
2022-12-01T00:00:00
https://lawofficeofronaldpackerman.com/artificial-intelligence-and-the-hiring-process/
[ { "date": "2022/12/01", "position": 65, "query": "artificial intelligence hiring" }, { "date": "2025/06/01", "position": 34, "query": "artificial intelligence hiring" } ]
Disadvantages of Artificial Intelligence in Recruitment - PEG Staffing
Disadvantages of Artificial Intelligence in Recruitment
https://www.pegstaff.com
[]
There are drawbacks to algorithmic hiring as well. Data inaccuracies or algorithmic errors can result in bias or discrimination. This can lead to only certain ...
Although it can be a very helpful part of the hiring process, problems exist with using AI in recruitment. AI can remove human bias from hiring but can also perpetuate bias if it is trained using data that is prejudiced in some way. If a company has a history of discriminatory hiring practices, AI hiring discrimination will occur as well because its decisions are based on the data from the previous hiring selections. In the case of AI screening tools, candidates can be overlooked, not because they aren’t qualified, but because their skills, abilities, or experience don’t quite fit the parameters being used to identify certain qualifications. Conversely, with the right knowledge, candidates can tailor their resumes or applications to increase their likelihood of being chosen by AI algorithms. Another limitation of AI in hiring is that the software and training can be cost prohibitive for many companies, especially smaller ones. There are a wide range of AI tools and the associated costs can vary drastically, but some can cost up to thousands of dollars per month. Companies must make sure they are utilizing the right kind of AI to help them get the best return on their investment. Furthermore, while AI works great for simple or repetitive tasks, it isn’t well-equipped for the parts of the hiring process that require more complex thinking or judgment. Data privacy and security is also a big concern when using AI in hiring since much of the data being processed is sensitive, and companies aren’t always transparent about the ways in which personal information is being used. It is important for companies to consider the challenges of using AI in recruitment and hiring to determine the types of AI technology that will best meet their needs.
2022-12-01T00:00:00
https://www.pegstaff.com/disadvantages-of-artificial-intelligence-in-recruitment/
[ { "date": "2022/12/01", "position": 67, "query": "artificial intelligence hiring" } ]