title_s | title_dl | source_url | authors | snippet_s | text | date | publish_date_dl | url | matches
---|---|---|---|---|---|---|---|---|---|
Looking for Career Growth in 2023? Think About AI and ...
|
Looking for Career Growth in 2023? Think About AI and Zero Trust » Community
|
https://www.govloop.com
|
[
"Kerry Rea"
] |
All employees can benefit from a better understanding of where AI is being used in government today and thinking about how that could extend to their work.
|
No matter your specific job, technology has a huge impact on your day-to-day. While you may not be in a technical role, your ability to understand how technology can help (or impede) your work is a critical skill in the increasingly digital environment of government work. With this in mind, a 2023 career goal may be to better familiarize yourself with some key technologies that are currently in the early stages of use in government — artificial intelligence and zero trust.
Getting Real About Artificial Intelligence
Artificial intelligence, or AI, is the practice of computers, aided by algorithms, performing tasks that normally require human intelligence or intervention. AI systems mimic the problem-solving and decision-making capabilities of the human mind, doing so at machine speed. This enables agencies to make better use of the troves of data they hold for daily decision-making, strategic planning, and citizen service. The use of AI frees up humans to work on more meaningful tasks rather than (say) spending time telling citizens the mailing address of the agency.
Of course, the application of AI is not without drawbacks. In the fall of 2022, the White House released the AI Bill of Rights, designed to address concerns about how, without some oversight, AI could lead to discrimination against minority groups and further systemic inequality. The National Institute of Standards and Technology (NIST) will issue an AI Risk Management Framework (AI RMF) in early 2023 to follow on this high-level guidance with actionable steps agencies can take.
AI’s benefits in reducing menial work have helped dispel the fear that “machines will replace people.” However, those who refuse to adopt and adapt to AI-enabled tools may in fact find themselves replaced by other people who do. All employees can benefit from a better understanding of where AI is being used in government today and thinking about how that could extend to their work. Doing so with an understanding of the fair-use policies being crafted for AI can make someone a critical resource no matter their job title.
Increasing Understanding of Zero Trust
Zero trust is a security approach centered on the belief that organizations should not automatically trust anything accessing their systems either inside or outside their perimeters. Instead, all people and devices must be verified before access is granted. The Executive Order on Improving the Nation’s Cybersecurity (Cyber EO) has a strong emphasis on moving government toward a zero-trust approach. It laid out deadlines for agencies to submit plans for implementing zero-trust architectures, holding organizations accountable for changing how they allow users to access their systems.
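The principle above can be sketched in a few lines of code. This is a minimal illustrative sketch, not any agency's actual implementation; all function and token names here are made up for the example. The contrast is with the old perimeter model, which trusted any request originating inside the network.

```python
# Minimal sketch of the zero-trust idea: verify every request, every time,
# regardless of whether it originates inside the network perimeter.
# All names and tokens below are illustrative, not from any real framework.

def verify_identity(token):
    # Stand-in for real authentication (e.g., MFA-backed token validation).
    return token in {"valid-token-alice", "valid-token-bob"}

def verify_device(device_id):
    # Stand-in for a device-compliance check (managed, patched, etc.).
    return device_id in {"managed-laptop-1"}

def is_trusted(request):
    # Perimeter model (what zero trust replaces) would simply do:
    #   return request["source"] == "internal-network"
    # Zero trust instead verifies identity AND device on every request.
    return (verify_identity(request["user_token"])
            and verify_device(request["device_id"]))

request = {"source": "internal-network",
           "user_token": "valid-token-alice",
           "device_id": "managed-laptop-1"}
print(is_trusted(request))  # True: both checks pass; "internal" alone is never enough
```

Note that the request's network location is never consulted: an internal source with a bad token is rejected just like an external one.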
Every agency is rethinking how it grants system access and reworking processes and policies to fall in line with zero-trust tenets. The Department of Homeland Security (DHS) Cybersecurity and Infrastructure Security Agency (CISA) developed a Zero Trust Maturity Model that defines the core pillars of zero trust and the technologies and systems needed to meet zero-trust guidance: identity, device, network/environment, application workload, and data.
Becoming familiar with zero trust’s basic principles will benefit all employees, as it will help make sense of changes related to system access. With this understanding, the less technical government workforce can partner with the IT and security teams implementing these reforms, helping to accelerate the improved security of government systems.
Becoming Part of the Government Workforce of the Future, Today
Getting up to speed on these two key technologies does not require going back to school for a degree in computer science. Instead, it takes only curiosity to get in on the ground floor of understanding how AI and zero trust will shape government work moving forward. Some ways to do this:
Look for webinars. Learn about AI and zero trust from the comfort of your desk or even your couch. A host of webinars on these topics feature government executives talking about implementations in non-technical terms, focusing on the business and mission impacts these technologies have. Even if you cannot attend live, sign up; many webinars send a recording to all registrants after the event.
Get back out to events. After two-plus years of limited in-person events, live events have made a strong comeback. Look for events about AI and zero trust near your home or office, and use the need for education as a motivator to get back out and network in person.
Do some reading. Countless articles and white papers break down the complexity of AI and zero trust into layman's terms. Set a Google alert on these topics, or bookmark or subscribe to an RSS feed for blogs that frequently highlight government applications of these and other leading-edge technologies.
In dedicating time to learn about key emerging technologies, you can not only improve your career growth options, but you can also help advance their use for the betterment of government missions.
As the founder of GovEvents and GovWhitePapers, Kerry is on a mission to help businesses interact with, evolve, and serve the government. With 25+ years of experience in the information technology and government industries, Kerry drives the overall strategy and oversees operations for both companies. She has also served in executive marketing roles at a number of government IT providers.
| 2023-01-04T00:00:00 |
https://www.govloop.com/community/blog/looking-for-career-growth-in-2023-think-about-ai-and-zero-trust/
|
[
{
"date": "2023/01/04",
"position": 78,
"query": "future of work AI"
},
{
"date": "2023/01/04",
"position": 5,
"query": "government AI workforce policy"
}
] |
|
Navigating the AI revolution: how designers can stay ...
|
Navigating the AI revolution: how designers can stay competitive
|
https://uxdesign.cc
|
[
"Irina Nik"
] |
AI revolution is not the future, it is happening now. 34% of businesses in the U.S., Europe, and China have already adopted AI, according to Global AI Survey by ...
|
Navigating the AI revolution: how designers can stay competitive
Artificial intelligence is changing how we design and the skills required to succeed in the design industry. This article will explore the impact AI has on design and how designers can prepare for the future. Irina Nik · 7 min read · Jan 5, 2023
A concept of generative AI generated by Midjourney
Reality, not Hype
The AI revolution is not the future; it is happening now. 34% of businesses in the U.S., Europe, and China have already adopted AI, according to a Global AI Survey by Morning Consult.
Organizations like the World Economic Forum and IBM recognize AI as the primary technology that will drive the Fourth Industrial Revolution. It will fundamentally alter the way we live, work, and relate to one another.
AI can be biased, produce unethical and even dangerous results, and generate incorrect or misleading information. However, it is evolving at an incredible rate and these issues are likely to improve over time.
How generative AI is changing the world
2022 saw a big disruption in generative AI: the technology that is able to generate something new rather than analyze something that already exists.
| 2023-01-06T00:00:00 |
2023/01/06
|
https://uxdesign.cc/navigating-the-ai-revolution-how-designers-can-stay-competitive-7798bc664210
|
[
{
"date": "2023/01/04",
"position": 35,
"query": "AI economic disruption"
},
{
"date": "2023/01/04",
"position": 14,
"query": "artificial intelligence graphic design"
}
] |
The Power of 7: Tech that will disrupt 2023 - Sify
|
The Power of 7: Tech that will disrupt 2023
|
https://www.sify.com
|
[
"Nigel Pereira",
"Malavika Madgula"
] |
All thanks to the exponential pace at which Machine Learning and Artificial Intelligence is advancing. According to McKinsey, this tech trend makes possible the ...
|
Adarsh looks at various developments in the technology industries that are in for some major innovations in the year 2023
As the name suggests, disruption in technology is any kind of technological innovation that is likely to affect the status quo.
It is not always something new or path-breaking, but more often an exponential upgrade to existing tech that has the potential to disrupt a market or industry.
Personal computers, online shopping and ride sharing apps are all excellent examples of disruptive tech.
Here is a look at 7 tech industries that are likely to see major disruption in the upcoming year:
1. Machine Learning
Image Credit: Flickr
Machine Learning has been around for a while now, but 2023 could be the year it blows up in a big way. Unlike traditional Machine Learning, the focus is now on Unsupervised Machine Learning (UML). Without any training data or labelling, UML recognizes and records unknown patterns, and thus makes more accurate predictions.
This way, UML eliminates limitations created by human bias or pre-existing knowledge and surfaces insights like never before.
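The idea of discovering structure without labels can be shown with a toy example. This is an illustrative sketch (a tiny one-dimensional k-means loop), not the method any vendor mentioned here uses; it assumes the data falls into exactly two non-empty groups.

```python
# Toy sketch of unsupervised learning: grouping UNLABELED 1-D points
# into two clusters with a tiny k-means loop. Illustrative only.

def kmeans_1d(points, iters=10):
    # Start the two centroids at the extremes of the data.
    c1, c2 = min(points), max(points)
    for _ in range(iters):
        # Assign each point to its nearest centroid.
        a = [p for p in points if abs(p - c1) <= abs(p - c2)]
        b = [p for p in points if abs(p - c1) > abs(p - c2)]
        # Move each centroid to the mean of its cluster
        # (assumes both clusters stay non-empty, true for this toy data).
        c1 = sum(a) / len(a)
        c2 = sum(b) / len(b)
    return sorted(a), sorted(b)

# No labels are given; the two-group structure is discovered from the data.
low, high = kmeans_1d([1.0, 1.2, 0.9, 10.0, 10.5, 9.8])
print(low, high)  # [0.9, 1.0, 1.2] [9.8, 10.0, 10.5]
```

A supervised learner would need each point tagged with its group in advance; here the grouping emerges from the distances alone.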
2. Robotic Process Automation
Image Credit: Shutterstock
Automation is nothing new, but it is growing in leaps and bounds. In fact, the prediction is that almost half of all existing human activities will be automated in the next few decades.
But Robotic Process Automation (RPA) is a whole new dimension altogether! RPA is a software technology that involves building and managing software robots that can imitate human actions. It uses three key technologies: screen scraping, workflow automation, and artificial intelligence, to automate processes and improve efficiency. The game changer in this domain would be human intelligence-based robotics and automation.
Talking to Forbes, Ayman Shoukry, the CTO (Chief Technology Officer) of Specright Inc., explains it in simple terms: “Think about drones delivering packages with human understanding: ‘Please place this order beside this vase on my living room table.’ If the vase got moved, the drone will realize that it moved at delivery time (vision recognition) and either deliver it there or ask the user for a new precise delivery location.”
3. Virtual & Augmented Reality
Image Credit: Shutterstock
As of right now, virtual and augmented reality are limited mainly to the entertainment and gaming industries. But 2023 could see them venture into other avenues, like healthcare, e-commerce, and travel. With AR (Augmented Reality) and VR (Virtual Reality) devices becoming more capable and cost-effective, a bunch of possibilities opens up.
Imagine a VR space for interacting with co-workers remotely, or using a VR device to help customers experience your product. Considering this technology is already light-years ahead of others on this list, this sector could really blow up in 2023!
4. AI-Chat Bots
Image Credit: Shutterstock
It seems like artificial intelligence has been around for a while, but the truth is that we are still in the initial stages of this technological marvel!
By 2024, more than 50% of people's interactions with customer care will be with computers, according to McKinsey. Apart from being more efficient than humans, chatbots can operate 24/7, communicate in any language, and have no demands about working conditions. As the technology gets more sophisticated, it will displace many jobs, and it will affect the end user in a significant way.
5. 5G
Image Credit: Dreamstime
5G arrived in India in late 2022 but 2023 is the year when the technology will boom! Voice calls will become clearer, video support will become more accessible and greater bandwidth will help in widespread adoption and automation.
This in turn will unlock a lot of potential economic opportunities from the digitization of manufacturing through wireless control of mobile tools, machines, and robots to decentralized energy delivery and remote patient monitoring. In fact, according to McKinsey, faster connections in mobility, healthcare, manufacturing, and retail could increase the global GDP (Gross Domestic Product) by $2 trillion by 2030.
6. Internet of Things
Image Credit: FreeCodeCamp
As Artificial Intelligence and Machine Learning continue to advance, they will lead to bigger and better things. Quantum computing, for instance, is still in the prototyping phase, but once it moves into practice it will be like the invention of the internet all over again.
It will also fuel the growth of the Internet of Things (IoT): physical objects with sensors, processing ability, software, and other technologies that connect and exchange data with other devices and systems over the internet or other communications networks.
7. Programming
Image Credit: MOOC
This might sound absurd, but we are at that point in history where computer software is going to start writing code. All thanks to the exponential pace at which Machine Learning and Artificial Intelligence is advancing. According to McKinsey, this tech trend makes possible the rapid scaling and diffusion of new data-rich, AI-driven applications.
This could lead to the creation of applications that are far more powerful and capable than anything that exists today. But it will also enable the standardization and automation of existing software and coding processes.
So that is a lot of exciting things to look forward to in 2023! Which one are you most excited about? Let us know in the comments section.
| 2023-01-04T00:00:00 |
2023/01/04
|
https://www.sify.com/technology/the-power-of-7-tech-that-will-disrupt-2023/
|
[
{
"date": "2023/01/04",
"position": 54,
"query": "AI economic disruption"
}
] |
Trustworthy AI Framework and AI Bill of Rights
|
Trustworthy AI Framework and AI Bill of Rights
|
https://www.deloitte.com
|
[] |
Learn how Deloitte's Trustworthy AI Framework aligns with the latest iteration of the federal regulatory guidelines featured in the AI Bill of Rights.
|
Ensure equitable, ethical and transparent AI governance
The Artificial Intelligence (AI) regulatory landscape continues to mature as government agencies refine and build upon previous guidance designed to manage AI risk, ensure equality and transparency, and provide trust in automated systems. As American institutions continue to innovate and embrace AI to harness its benefits, federal, state, and local agencies are increasing their regulatory efforts to protect the American public.1 The latest iteration of federal regulatory guidelines is The Blueprint for an AI Bill of Rights (AIBoR).2
In October of 2022, the White House Office of Science and Technology Policy (OSTP) released the AIBoR to provide additional guidance for organizations to create trustworthy and ethical automated systems. The AIBoR provides guidance to American innovators to harness the extraordinary potential and benefits of automated systems and AI while protecting "the American public's rights, opportunities, or access to critical resources or services."3
The AIBoR applies to all automated systems that have the potential to meaningfully impact individuals’ or communities’ exercise of rights, opportunities, or access (Figure 1).
| 2023-01-04T00:00:00 |
https://www.deloitte.com/us/en/what-we-do/capabilities/applied-artificial-intelligence/articles/ai-bill-of-rights.html
|
[
{
"date": "2023/01/04",
"position": 9,
"query": "government AI workforce policy"
},
{
"date": "2023/01/04",
"position": 64,
"query": "AI labor union"
}
] |
|
December 2022 U.S. Tech Policy Roundup
|
December 2022 U.S. Tech Policy Roundup
|
https://techpolicy.press
|
[
"Kennedy Patlan",
"Rachel Lau",
"Carly Cramer"
] |
... government rules to keep the use of AI technology secure and protected. ... artificial intelligence strategy focused on workforce recruitment and retention.
|
Kennedy Patlan,
Rachel Lau,
Carly Cramer /
Jan 4, 2023
Kennedy Patlan, Rachel Lau, and Carly Cramer are associates at Freedman Consulting, LLC, where they work with leading public interest foundations and nonprofits on technology policy issues.
In December, Senator Marco Rubio (R-FL) introduced the "Averting the National Threat of Internet Surveillance, Oppressive Censorship and Influence, and Algorithmic Learning by the Chinese Communist Party Act," or the "ANTI-SOCIAL CCP Act."
As the 117th Congress came to a close, December was an opportunity to look back at the U.S. tech policy wins of 2022. The year was eventful for tech policy, with major developments in artificial intelligence, data, and healthcare privacy, particularly within federal agencies and the White House:
We look forward to what the new 118th Congress brings with regard to legislative proposals on privacy, reform of Section 230, antitrust, and other important technology policy issues. You can continue to monitor and track passed and pending legislation via techpolicytracker.org, where we maintain a comprehensive database of legislation and other public policy proposals related to platforms, artificial intelligence, and relevant tech policy issues. In early 2023, we will be archiving legislation from the 117th Congress to make room for the work of the 118th Congress.
Read on to learn more about December U.S. tech policy highlights from the White House, Congress, and beyond.
Platform Regulation Hitches a Ride on the Omnibus
Passage of the NDAA Includes Key AI Provisions
Biden Administration Continues Involvement in Section 230 Deliberations
Public Opinion Spotlight
Morning Consult conducted a survey among 2,212 U.S. adults from November 11-26, 2022 on consumers’ opinions and expectations on artificial intelligence. They found that:
24 percent of U.S. adults say they know “exactly” what AI is, in comparison to 40 percent of Gen Z and 43 percent of adults working in tech
52 percent of U.S. adults believe AI will change their life in a negative way
44 percent of U.S. adults believe AI will have a positive impact on innovation in science
43 percent of U.S. adults believe AI will have a positive impact on health care innovation
36 percent of U.S. adults believe AI will have a positive impact on innovation in education
38 percent of U.S. adults believe AI will have a negative impact on employment in major companies
34 percent of U.S. adults believe AI will have a negative impact on employment at small businesses
67 percent of U.S. adults express some concern about foreign powers using AI against U.S. interests and about job losses across industries
65 percent of U.S. adults express some concern about how AI may impact their personal data privacy
46 percent of Black adults express concern about racial discrimination in AI applications
Morning Consult also asked U.S. voters whether they support a ban on Chinese-based social media platforms in the United States in a poll from December 16-19, 2022 of 2,001 registered voters. They found that:
“53 percent of voters support a ban on Chinese-based social media platforms in the United States, while a slightly higher share (59 percent) support banning the platforms from government-issued devices”
“The U.S. ban had strong support among baby boomer voters, with 2 in 3 backing it, while the proposal was more divisive among younger generations. Gen Z voters were slightly more likely to oppose (41 percent) than support (32 percent) such a ban, while another 28 percent didn’t know or had no opinion. Millennial voters were roughly split on the ban, with 39 percent supporting it and 34 percent opposing it.”
- - -
We welcome feedback on how this roundup and the underlying tracker could be most helpful in your work – please contact Alex Hart and Kennedy Patlan with your thoughts.
| 2023-01-04T00:00:00 |
2023/01/04
|
https://techpolicy.press/december-2022-u-s-tech-policy-roundup
|
[
{
"date": "2023/01/04",
"position": 42,
"query": "government AI workforce policy"
}
] |
Biden Signs FY 23 Omnibus With Increases for Research, ...
|
Biden Signs FY 23 Omnibus With Increases for Research, Health Workforce
|
https://www.aamc.org
|
[] |
... workforce, and all components of the health infrastructure that improve the ... Advocacy, Policy, & Legislation · Budget/Appropriations · Legislative ...
|
On Dec. 29, 2022, President Joe Biden signed the Consolidated Appropriations Act, 2023 (H.R. 2617) into law, which includes $1.7 trillion in fiscal year (FY) 2023 discretionary government funding for all 12 annual spending bills, as well as a number of other health care provisions. After prolonged negotiations to reach an agreement on various procedural issues following the bill’s Dec. 20, 2022, introduction, the Senate passed the omnibus on Dec. 22, 2022, by a vote of 68-29 and the House passed the package on Dec. 23, 2022, by a vote of 225-201.
The $1.7 trillion omnibus also included a host of other policy provisions relevant to academic medicine, including a number related to health care and pandemic preparedness [refer to related stories on health care and preparedness measures.]
Following the release of the omnibus text, AAMC President and CEO David J. Skorton, MD, and Chief Public Policy Officer Danielle Turnipseed, JD, MHSA, MPP, released a statement applauding Congress’ bipartisan efforts, thanking them for the investments in medical research, public health, the health workforce, and all components of the health infrastructure that improve the health of patients and communities, and urging swift passage of the measure.
The bill provides the following FY 2023 funding levels for agencies and their respective programs that are impactful for academic medicine:
National Institutes of Health (NIH)
The omnibus provides a total of $47.5 billion for the NIH in FY 2023, an increase of $2.5 billion (5.6%) above the FY 2022 enacted level. The bill provides increases to the Clinical and Translational Science Awards and the Institutional Development Award programs. The accompanying joint explanatory statement includes requirements regarding the reporting of the use of animals in research, funding for regional biocontainment laboratories and the workforce to support biosafety level 3-plus research, and funding to increase the diversity of the research workforce.
Advanced Research Projects Agency for Health (ARPA-H)
The omnibus provides $1.5 billion for ARPA-H through FY 2025 ($500 million or a 50% increase over FY 2022) through the Health and Human Services (HHS) Office of the Secretary in FY 2023. As in FY 2022, the secretary has the authority to transfer the funding to the NIH or another HHS agency within 30 days of the bill’s enactment. The bill also includes authorizing language for ARPA-H including its establishment separately within the NIH, allowance to be exempt from certain NIH policies, and guidance regarding location of ARPA-H offices. The bill authorizes $500 million per year for the agency for FY 2024 through FY 2028.
Gun Violence Prevention Research
For a fourth consecutive year, the legislation includes dedicated funding for firearm injury and mortality prevention research, with $12.5 million for the Centers for Disease Control and Prevention (CDC) and $12.5 million for the NIH, the same funding levels as FY 2022.
Agency for Healthcare Research and Quality (AHRQ)
The spending bill includes $373.5 million for AHRQ, which is an increase of $23.1 million (6.6%) above the FY 2022 enacted level.
Centers for Disease Control and Prevention (CDC)
The bill provides a total of $9.2 billion for the CDC, an increase of $760 million (8.9%) above the FY 2022 enacted level, which includes investments in maternal health, public health data modernization, and health equity including the Racial and Ethnic Approach to Community Health (REACH) program and Social Determinants of Health Accelerator plans.
Health Resources and Services Administration (HRSA)
The bill set aside $879.8 million for HRSA Title VII and Title VIII programs, which is a $60.6 million (12%) increase above the FY 2022 enacted level. The Title VII pathway programs (Centers of Excellence, Health Career Opportunity Program, Scholarships for Disadvantaged Students, and Faculty Loan Repayment) received increases in funding across the board. Preventing Burnout in the Health Workforce program, authorized by the AAMC-endorsed Dr. Lorna Breen Health Care Provider Protection Act (P.L. 117-105), did not receive any funding in the bill despite the proposed funding in both the House and Senate FY 2023 draft spending bill.
Additionally, the bill includes $125.6 million for the National Health Service Corps, a $4 million (3.28%) increase over the FY 2022 appropriation. The bill provides $385 million for the Children’s Hospitals Graduate Medical Education program, a $10 million (2.7%) increase over FY 2022. Lastly, the bill includes $12.5 million for Rural Residency Planning and Development grants, which is an increase of $2 million (19%) over FY 2022.
Preparedness
$305 million, a $9.5 million or 3.2% increase over FY 2022 levels, was allocated for the Hospital Preparedness Program (HPP) under the Office of the Assistant Secretary for Preparedness and Response. Within that $305 million total for the HPP, $7.5 million was set aside for the National Emerging Special Pathogens Training and Education Center, and $21 million (flat funding) for the Regional Emerging Special Pathogen Treatment Centers.
Department of Education
The bill provides $50 million in research infrastructure investments at Historically Black Colleges and Universities, Tribal Colleges and Universities, and Minority-Serving Institutions.
Department of Veterans Affairs
The bill provides a total of $916 million for the Veterans Affairs (VA) Medical and Prosthetic Research program in FY 2023, a $34 million (3.9%) increase above the FY 2022 funding level, with the joint explanatory statement including text regarding the use of animals in research and access to clinical oncology trials.
Regarding VA health care funding, the bill provides an additional $216 million for VA Medical Services for FY 2023 beyond the previously approved advanced appropriations for FY 2023 representing a total of $70.6 billion for FY 2023, a $11.7 billion (19.8%) increase over the comparable FY 2022 spending level. The bill also provides $74 billion in FY 2024 advanced appropriations.
For medical community care, the bill provides an additional investment of $4.3 billion for FY 2023 beyond the previously approved advanced appropriations for FY 2023 representing a total of $28.5 billion for FY 2023, a $5 billion (21.5%) increase in FY 2023 over the comparable FY 2022 spending level. The bill also provides $33 billion in FY 2024 advanced appropriations.
The omnibus also includes text of the AAMC-endorsed VA Infrastructure Powers Exceptional Research Act of 2021 (H.R. 5721), which aims to improve the functionality and efficiency of the VA Medical and Prosthetic Research program while also ensuring continuity of existing research affiliations in light of updated guidance from the VA that would impact outside salary support for VA researchers [refer to Washington Highlights, Dec. 9, 2022].
National Science Foundation (NSF)
The bill provides a total of $9.9 billion for the NSF through the Commerce, Justice, Science, and Related Agencies appropriations provisions as well as through supplemental funding. The funding total would represent an increase of $1.04 billion (12%) over the comparable FY 2022 funding level and includes specific funding for the implementation of the CHIPS and Science Act [refer to Washington Highlights, July 29, 2022].
| 2023-01-04T00:00:00 |
https://www.aamc.org/advocacy-policy/washington-highlights/biden-signs-fy-23-omnibus-increases-research-health-workforce
|
[
{
"date": "2023/01/04",
"position": 94,
"query": "government AI workforce policy"
}
] |
|
AI and data jobs at Deloitte Consulting
|
AI and data jobs at Deloitte Consulting
|
https://www.deloitte.com
|
[] |
Enable clients to grasp the future and reach new heights. Consulting jobs in data engineering, data science, intelligent automation and more.
|
AI-enabled technologies are shaking business foundations. Some find this daunting. We see opportunity—for clients, societies, and people.
Deloitte’s AI & D specialists partner with clients to leverage AI and reach new levels of organisational excellence. We turn data into insights, into action—at an industrial scale.
Join us as we enable clients to grasp the future and reach new heights. Learn from the best in the field to create solutions blending data science, data engineering, and process engineering with our industry-leading expertise.
| 2023-01-04T00:00:00 |
https://www.deloitte.com/southeast-asia/en/careers/explore-your-fit/students/deloitte-ai-and-data-careers.html
|
[
{
"date": "2023/01/04",
"position": 3,
"query": "generative AI jobs"
}
] |
|
Microsoft Recognizes Its First Labor Union in the US
|
Microsoft recognized its first labor union in the US after staff at $7.5 billion video game firm ZeniMax Studios voted to unionize
|
https://www.businessinsider.com
|
[
"Sawdah Bhaimiya"
] |
Microsoft recognized its first labor union in the US, after an overwhelming majority of video game testers at ZeniMax Studios voted to unionize.
|
Microsoft recognized its first union in the US, formed by workers at one of its video game subsidiaries ZeniMax Studios.
Microsoft recognized its first union in the US, formed by workers at one of its video game subsidiaries ZeniMax Studios. SOPA Images / Contributor/ Getty Images
Microsoft recognized its first labor union in the US, after an overwhelming majority of video game testers at ZeniMax Studios voted to unionize, the Communications Workers of America union announced on Tuesday.
Around 300 software testers across four of ZeniMax's locations in Maryland and Texas voted to unionize, according to Reuters.
ZeniMax is a video game production company popular for its games like The Elder Scrolls and DOOM, and was acquired by Microsoft for $7.5 billion in March 2021.
Workers at the Microsoft subsidiary had been organizing for months and began signing union authorization cards in November 2022. The official voting process commenced on December 2 through a confidential online portal, and closed on December 31.
"We want to put an end to sudden periods of crunch, unfair pay, and lack of growth opportunities within the company. Our union will push for truly competitive pay, better communication between management and workers, a clear path for those that want to progress their career, and more," Victoria Banos, a senior quality assurance audio tester at ZeniMax said in the CWA release.
Microsoft agreed to voluntarily recognize the union if workers voted to unionize in December, per Reuters. This was a first for the company in the US.
"In light of the results of the recent unionization vote, we recognize the Communications Workers of America (CWA) as the bargaining representative for the Quality Assurance employees at ZeniMax," a Microsoft spokesperson said in a statement to Insider. "We look forward to engaging in good faith negotiations as we work towards a collective bargaining agreement."
"Other video game and tech giants have made a conscious choice to attack, undermine, and demoralize their own employees when they join together to form a union," CWA president Chris Shelton said in the release.
"Microsoft is charting a different course which will strengthen its corporate culture and ability to serve its customers and should serve as a model for the industry and as a blueprint for regulators."
Microsoft said it respected its employees' "legal right to choose whether to join or form a union" in 2022, after quality assurance workers at Activision Blizzard formed a union. Microsoft was acquiring Activision in a $69 billion deal at the time.
Companies often opt to voluntarily recognize unions to avoid legal trouble. Union workers can petition the National Labor Relations Board to force their employer to recognize their union, but the process is long and arduous.
| 2023-01-04T00:00:00 |
https://www.businessinsider.com/microsoft-recognizes-its-first-labor-union-in-the-us-2023-1
|
[
{
"date": "2023/01/04",
"position": 26,
"query": "AI labor union"
}
] |
|
Labor & Trust: Health Insurance for Workers
|
Labor & Trust: Health Insurance for Workers
|
https://www.anthem.com
|
[] |
... union members are accustomed to. Now, more than ever, it's important that ... For members, our analytics and artificial intelligence (AI) provide real ...
|
Anthem's experienced and dedicated Labor & Trust team is ready to provide solutions for your unique labor and trust challenges. We'll customize your plans, save you money, and provide quality support for your members.
Our established national leadership, large networks, and flexibility combine with local staffs that understand your state and industry. Together we create an effective partnership that will serve your needs well into the future.
| 2023-01-04T00:00:00 |
https://www.anthem.com/employer/industry-solutions/labor-and-trust
|
[
{
"date": "2023/01/04",
"position": 50,
"query": "AI labor union"
}
] |
|
Elon Musk 'a perfect recruitment tool' for organized labor, says ...
|
Elon Musk ‘a perfect recruitment tool’ for organized labor, says new UK unions boss
|
https://www.politico.eu
|
[] |
... labor, says new UK unions boss. Trades Union Congress Paul Nowak says Twitter layoffs are driving new union sign-ups. Listen. AI generated Text-to-speech.
|
Musk has fired roughly 3,700 employees — nearly half of Twitter’s workforce — in a round of mass layoffs since buying the company.
U.K. Twitter employees earmarked for an exit received an email saying their job would be “potentially” impacted or “at risk," because, under British law, firms are required to consult with staff over mass redundancies.
In November, Musk meanwhile gave staff an email ultimatum to either go “extremely hardcore” by "working long hours at high intensity” or quit the company.
Musk’s behavior is, Nowak said, “a great recruiting tool for us."
“If I was a young worker in tech, I'd be thinking that being a union member might be a good investment at the moment," he said. "If it can happen at Twitter, it can happen anywhere.”
Unions have in recent years ramped up their activity in another part of the tech world: the gig economy. Uber and food delivery service Deliveroo recently signed agreements with unions, while some Apple stores have voted for union recognition. Last year also saw the first-ever industrial action ballots at a U.K. Amazon warehouse.
| 2023-01-04T00:00:00 |
2023/01/04
|
https://www.politico.eu/article/elon-musk-organized-labor-uk-unions/
|
[
{
"date": "2023/01/04",
"position": 99,
"query": "AI labor union"
}
] |
Over 150,000 employees laid off by tech companies in 2022
|
Over 150,000 employees laid off by tech companies in 2022
|
https://www.information-age.com
|
[
"Aaron Hurst",
"More Aaron Hurst"
] |
Hundreds of tech companies carried out staff cuts in the final months of the year, with Layoffs. ... Historical education bolstered by AI-powered digital human. A ...
|
Analysis from Layoffs.fyi has found that 153,160 members of staff at tech companies were laid off across 2022, the highest number since the dot-com bubble burst
Facebook‘s parent company Meta (11,000) and e-commerce giant Amazon (10,000) topped the list for workers laid off in the past 12 months, amidst a widespread slowdown in hiring across big tech generally after years of expansion.
Both companies bet on a continued surge in e-commerce spending post-pandemic, with Meta founder Mark Zuckerberg later admitting: “I got this wrong and I take responsibility for that.”
Amazon CEO Andy Jassy, meanwhile, referred to the cuts made at his corporation in 2022 as his “most difficult” since taking the mantle in 2021.
The third and fourth highest numbers of layoffs were overseen by global travel agency Booking.com (4,375) and business IT conglomerate Cisco (4,100), while ridesharing app Uber and Twitter each made 3,700 members of staff redundant.
Hundreds of tech companies carried out staff cuts in the final months of the year, with Layoffs.fyi founder Roger Lee claiming that cutbacks have become more widespread.
Lee told The Times, “Earlier in 2022, layoffs in tech were concentrated within smaller start-ups in food, transportation and finance, but in recent months it’s hit every sector within tech, Big Tech companies included.”
Andrew Challenger, senior vice-president at global outplacement & career transitioning firm Challenger, Gray & Christmas, commented: “When the economy slows down one of the first things companies do is cut their marketing and ad budgets. A lot of those dollars flow into Silicon Valley.”
Economic uncertainty across the world has been cited as a key factor for the trend of tech industry staff cuts.
Layoffs.fyi has been tracking tech sector layoffs since the Covid-19 pandemic took hold, with over 1,000 organisations across the tech industry being tracked by the site.
| 2023-01-04T00:00:00 |
2023/01/04
|
https://www.information-age.com/over-150000-employees-laid-off-by-tech-companies-in-2022-123500983/
|
[
{
"date": "2023/01/04",
"position": 56,
"query": "AI layoffs"
}
] |
Update from CEO Andy Jassy on role eliminations
|
Update from CEO Andy Jassy on role eliminations
|
https://www.aboutamazon.com
|
[
"Andy Jassy",
"Ceo Of Amazon"
] |
... Message from CEO Andy Jassy: Some thoughts on Generative AI. Message from CEO Andy Jassy: Some thoughts on Generative AI · Company news. June 17, 2025.
|
Amazon has weathered uncertain and difficult economies in the past, and we will continue to do so. These changes will help us pursue our long-term opportunities with a stronger cost structure; however, I’m also optimistic that we’ll be inventive, resourceful, and scrappy in this time when we’re not hiring expansively and eliminating some roles. Companies that last a long time go through different phases. They’re not in heavy people expansion mode every year. We often talk about our leadership principle Invent and Simplify in the context of creating new products and features. There will continue to be plenty of this across all of the businesses we’re pursuing. But, we sometimes overlook the importance of the critical invention, problem-solving, and simplification that go into figuring out what matters most to customers (and the business), adjusting where we spend our resources and time, and finding a way to do more for customers at a lower cost (passing on savings to customers in the process). Both of these types of Invent and Simplify really matter.
| 2023-01-05T00:00:00 |
2023/01/05
|
https://www.aboutamazon.com/news/company-news/update-from-ceo-andy-jassy-on-role-eliminations
|
[
{
"date": "2023/01/04",
"position": 57,
"query": "AI layoffs"
}
] |
Artificial Intelligence - Sponsored by IFP
|
Artificial Intelligence
|
https://www.insightsforprofessionals.com
|
[
"Manage Big Data Effectively",
"Ana Bera",
"Katie King",
"Tech Insights For Professionals",
"Nadica Metuleva",
"Oliver Morris",
"So What Are You Waiting For",
"Digital Banking Research Report",
"Prit Doshi",
"Lee W. Frederiksen"
] |
According to a NewVantage Partners executive survey, over 90% of business leaders report that challenges to becoming data-driven are people and business process ...
|
| 2023-01-04T00:00:00 |
https://www.insightsforprofessionals.com/hub/artificial-intelligence
|
[
{
"date": "2023/01/04",
"position": 72,
"query": "artificial intelligence business leaders"
}
] |
|
LinkedIn Solution Architect Salary | $199K-$278K+
|
LinkedIn Solution Architect Salary
|
https://www.levels.fyi
|
[] |
View the base salary, stock, and bonus breakdowns for LinkedIn's total compensation packages ... Artificial Intelligence Solutions Consultant - Convers...Wells ...
|
| 2023-01-04T00:00:00 |
https://www.levels.fyi/companies/linkedin/salaries/solution-architect
|
[
{
"date": "2023/01/04",
"position": 48,
"query": "artificial intelligence wages"
}
] |
|
Precarity and artificial intelligence | History of Canada
|
Precarity and Artificial Intelligence
|
https://cha-shc.ca
|
[] |
The upcoming transformations stemming from artificial intelligence will hit the precarious harder than others. Here is how.
|
by David Lewis, anthropologist and historian,
[email protected]
Dr. Google doesn’t have a professorship (yet), but might as well – and Siri, Alexa and the rest will form its first student cohort. In just a few short years, Google and co. have become ubiquitous in our lives, authoritative sources of knowledge acquisition. Google has even become the supreme knowledge authority of the web – an authority that, moreover, is rarely questioned … even though it depends not only on algorithms that have their own stakes, but also on optimization offered to anyone who can afford it. However, this is only the most telling example of a much more complex – and consequently even more insidious – reality that is materializing on our doorstep, that of artificial intelligence.
The world of education is in fact already affected by this new reality, but we are only at the very beginning of a wave that could well turn into a tsunami … from kindergarten to university, currently lagging behind. It is clear that the entire community will be affected, but it is also clear that some will be more affected than others. The world of education has also been experiencing, for the past few decades, a trend towards the casualization of its community, notably, at the university, with the function of lecturer, essentially conceived as a residual category to that of professor. Now, we can easily suppose that the upcoming transformations stemming from artificial intelligence will hit the precarious harder than others. Here is how.
Education and Precarity
Alexander J. Means notes that, although education is quite generally “called forth in official neoliberal discourse as a solution to precarity” (Means 2019, 2), in reality, precarity is omnipresent – and, more ironically, it is in the academic world, where the most coveted degrees come from, that it is most evident. This is particularly true of the status of lecturer, which is mine, as it is for some 10,000 to 15,000 colleagues in Quebec, and of many others elsewhere. More specifically, I am, like about half of my colleagues, a career lecturer, i.e. someone for whom this is the main long-term occupation. Moreover, although I am an anthropologist by training, I teach primarily in history, one of the many strange effects of our precarity.
Precarious workers in higher education include faculty like myself, but also researchers, adjuncts and others. Precarity and job stability may be relative concepts, but it is particularly easy for us precarious academics to see the differences between our daily lives and those of professors, which is after all the best example of job stability there is, literally the archetype of the genre.
Academic precarity, including that of the lecturers, still comes with better working conditions than those of the majority of precarious workers. In particular, they have little to do with those of day laborers, who not only live in a precarious situation on a daily basis – whereas ours would be more seasonal – but who also have to deal with difficult or even dangerous conditions, and are generally employed in unrewarding or even humiliating jobs. The fact remains that, by its very nature, precarity makes our situation as lecturers much more fragile than it would be if we occupied the same functions with job stability, that is, in the current paradigm, if we were professors. Still, we have the privilege of being around passionate colleagues, both professors and lecturers, and university life can indeed be very stimulating and rewarding. However, it remains difficult to participate fully when everything is decided without us, when we are ignored or even despised. It is true that progress has been made since unionization – notably, at the Université de Montréal, in the last few years, thanks to a revision of the charter (2018) and consequently of the university’s statutes. Indeed, these now recognize the existence of lecturers, and allow our participation in most committees and other formal bodies. Lecturers at other Quebec universities also have representatives on various institutional committees, but now that we have two per committee in most cases (by the logic, which we defended, of 'one is good, two is better'), our representation is probably among the best in Quebec. Yet our presence in the university structure remains far below our actual participation in the institution's academic mission – already in its formal bodies, and even more so if we count all academic functions.
Moreover, even if our representation were at the same level as our academic contribution, the main problem of our precarity is that most of us have to deal with significant variability in our workload, and therefore in our income as well – which inevitably has repercussions on our physical health, and even more so on our mental health (a study of burnout among university teachers is needed). This is a reality that does not fit well with the notion of a career, and makes it very difficult to participate actively in society, especially when it comes to getting married, buying a house or having children.
The fact remains, as Émilie Bernier notes, that lecturers do have a voice (which does not imply that we are always heard… let alone listened to): “Even though most of the time I speak in front of the turned-off cameras of faceless students, I have a voice. I have a sense of my dignity, I have the means.” (Bernier 2022, 106) … and indeed it is even an integral part of our profession, at least in terms of speaking in its oral form. For the written form, we are, for essentially systemic reasons, at a great disadvantage compared to professors. Thus, despite all the difficulties of our reality, it is undeniable that we are privileged compared to many precarious people.
Artificial Intelligence
As David Lorge Parnas explains “‘Artificial Intelligence’ remains a buzzword, a word that many think they understand but nobody can define” (Parnas 2017, 1). There is indeed a wide spectrum of definitions, both in terms of focus and content – but a consensus has emerged in recent years around a set of computational practices replacing functions that have traditionally been performed with human intelligence. This is reflected, for example, in the Department of Education and Higher Education’s Digital Competency Framework (2019), in which artificial intelligence is presented as a: “Field of study concerned with the artificial reproduction of the cognitive faculties of human intelligence, with the aim of creating software or machines capable of performing functions normally falling within its scope.” (MEES 2019).
This is of course a rapidly expanding field, with multiple applications in a variety of spheres of society – and, as Yoshua Bengio noted at a 2019 conference, these developments are likely to have multiple impacts on our lives: “the transformative potential of these technologies is incredible,” he said. Many of these will certainly be positive, useful to society as well as to individuals – but it is clear that they will also generate concerns, even potential dangers, particularly, in the world of education, in terms of the capabilities we are trying to develop in our students: “the possibility, well, it’s more than a possibility, in fact it’s already happening, to use artificial intelligence to control minds, in advertising and on social media” (Bengio, 2019) … regardless of whether it’s for commercial, ideological or other reasons. This shock has been widely heralded, especially in education, yet according to François Taddei it has still to be fully grasped: “Our school programs and our educational systems have not currently become aware of the intensity of the shock that the progress of artificial intelligence is about to bring to our ways of living, working, consuming, living together, questioning our legal norms and […] shaking our ethical standards.” (Taddei 2018, 46) – which will inevitably also affect the teacher – student relationship. Now, this predicted shock is no longer simply on our doorstep, it has undeniably overtaken us, as Thierry Karsenti notes in Artificial Intelligence in Education: “Artificial intelligence […] is already very present in education, notably with the applications that learners and teachers use daily on their cell phones, or when they carry out research on the Internet.” (Karsenti 2018, 115).
Risks in Education
It is clear that artificial intelligence will allow the development of a panoply of tools, each more wonderful than the last, tools that should help develop the cognitive abilities of our students – and thus, artificial intelligence should be able to support the work of teachers. Its adoption is going to happen, whether we like it or not – but the fact remains that this transformation raises a few questions, including the place of the private sector in higher education. After all, as Yoshua Bengio reminded us, the primary objective of companies is to maximize profits, not to train responsible citizens with a sharp critical mind. The private sector is certainly an essential partner in the shift towards artificial intelligence, since it is able to provide software and other computing tools that are not developed in vitro – but at the same time, these tools are produced by external actors, not only to the academic world, but also to our social reality. They therefore come with a worldview that can imply multiple biases, or worse. Indeed, as Maria Wood notes: “the data the algorithm is learning from could have structural and historical bias baked into it” (Wood 2021), which could have the consequence of reproducing or even amplifying past discrimination. This is all the more insidious because it is a priori invisible: “The algorithm appears impartial because it seemingly does not have biased instructions in it, so its recommendations are perceived to be impartial.” (Wood 2021). The biases that Wood notes are mainly related to administrative aspects such as admissions or the tracking of student records – but as she points out, they could soon also be found in assessment as in other aspects of the teacher – student relationship.
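Wood's point, that an algorithm with no explicit group variable can still reproduce historical discrimination through correlated proxy features, can be illustrated with a minimal, entirely hypothetical sketch (the admissions data, the neighborhood proxy and the scoring rule below are all invented for illustration, not taken from any real system):

```python
# Minimal illustration (hypothetical data): an algorithm with no explicit
# group variable can still reproduce historical bias via a proxy feature.
from collections import defaultdict

# Hypothetical historical admissions: (exam_score, neighborhood, admitted).
# Group membership is never recorded, but it correlates with neighborhood,
# and past decisions favored neighborhood "A" at equal scores.
history = [
    (80, "A", True), (80, "A", True), (78, "A", True), (75, "A", True),
    (80, "B", False), (80, "B", False), (78, "B", True), (75, "B", False),
]

# "Learn" an admission rate per (neighborhood, score band) -- a stand-in
# for any model that simply fits the patterns present in the data.
rates = defaultdict(list)
for score, hood, admitted in history:
    rates[(hood, score // 10)].append(admitted)

def predict(score, hood):
    outcomes = rates[(hood, score // 10)]
    return sum(outcomes) / len(outcomes) > 0.5  # admit if majority were admitted

# Two applicants with the same score, different neighborhoods:
print(predict(80, "A"))  # True  -- the bias in the data resurfaces
print(predict(80, "B"))  # False
```

The rule contains no "biased instructions", which is exactly why, as Wood notes, its recommendations are perceived to be impartial.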
However, the more important these tools become in learning, the more the place of teachers risks being marginalized, weakened, reduced to that of simple facilitators, coaches, advisors, perhaps, big brothers rather than father figures. This may be a desirable development, but whether it is or not, one thing is certain: it can only lead to a questioning of the figure of the teacher, and consequently of his or her place, authority and ability to manage the group. Of course, these transformations are bound to affect the precarious more fundamentally than others, women more than men (and the majority of lecturers are women), and especially those from cultural communities or other marginalized or marginalizable groups – whether in terms of job security, symbolic deficit or otherwise.
In fact, it is not only teachers who will be affected, but the whole of knowledge itself, indeed the whole structure that underpins it: the traditional centers of knowledge are already losing their primacy to a patchwork of variable sources responding to imperatives that are far from being exclusively academic. The current and future transformations in the knowledge economy could weaken not only the teachers that we are, but also the institution that currently sits at the top. The role of the university could eventually be reduced to little more than that of an interface between sources of knowledge … if it even survives.
Still, one can imagine that most applications of artificial intelligence in education will be benevolent (whereas this will certainly not be the case in the wider world). Thus, artificial intelligence as defined by Karsenti as “a field of study whose purpose is the artificial reproduction of the cognitive faculties of human intelligence” (Karsenti 2018, 113) does not seem particularly threatening in education – on the contrary, even. Far more concerning, perhaps, is big data, which Karsenti sees as: “a digital ecosystem that allows for the collection, transfer, archiving, and manipulation of a plethora of data” (Karsenti 2018, 113). In particular, we can think of algorithms, those that affect human behaviour by taking us in certain directions rather than others as search engines and social media already do … and this is just the beginning. Indeed, algorithms have become so ubiquitous and so good at molding our behaviours that it has become almost ‘naturalʼ to compare ‘humanʼ behaviours to them – as Schwirtz et al. do, in the New York Times of December 16, 2022: “A former Putin confidant compared the dynamic to the radicalization spiral of a social media algorithm, feeding users content that provokes an emotional reaction.”
One might also fear a formatting of thought and a standardization of knowledge – and thus, potentially, another source of minimization of the teacher’s role. Fortunately, this is not yet the case in the classroom, but it is already possible, as Wang, Chang & Li establish, to grade essay exam answers in a way that is relatively similar to what a human would do, as long as they are sufficiently marked up – an option that could become tempting for universities: “To evaluate constructs like creative problem-solving with validity, open-ended questions that elicit students’ constructed responses are beneficial. But the high cost required in manually grading constructed responses could become an obstacle in applying open-ended questions.” (Wang, Chang & Li 2008, 1450). Few teachers will complain about no longer having to grade exams and papers, myself included, but the fact remains that, even when grading is outsourced to auxiliary staff, we teachers are still the authority responsible for assigning grades to students, which involves a series of interactions and judgments – whereas with the advent of artificial intelligence, this may no longer be the case, especially if universities see a cost saving in it. Such a rupture could only weaken the already weakened link between teachers and students.
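To make the standardization risk concrete, here is a deliberately crude sketch of automated grading of a constructed response (this is not Wang, Chang & Li's method; the rubric, the sample answer and the scoring rule are invented): the answer is scored by the fraction of rubric concepts it mentions, which rewards hitting expected keywords rather than independent reasoning.

```python
# Toy sketch (hypothetical rubric): score a constructed response by its
# overlap with a fixed set of expected concepts -- the crudest form of
# the automated grading discussed above.
RUBRIC = {"photosynthesis", "light", "chlorophyll", "glucose", "carbon"}

def grade(answer: str, rubric=RUBRIC) -> float:
    words = {w.strip(".,;").lower() for w in answer.split()}
    return len(words & rubric) / len(rubric)  # fraction of concepts mentioned

print(grade("Plants use light and chlorophyll to make glucose."))  # 0.6
```

An original answer phrased in the student's own vocabulary would score poorly under such a rule, which is precisely the formatting of thought the passage warns about.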
Artificial intelligence may also soon be found in the classroom itself. This is apparently already the case in China, albeit in a very specific context, with the so-called ‘shadow educationʼ, in this case the tutoring industry – as reported by the Higher Education Council in Artificial Intelligence in Education (2020): “In China, where competition for access to higher education is particularly fierce, some private learning centers offering extracurricular services have invested heavily in intelligent tutors.” (Higher Education Council 2020, 16). Such a practice, of course, raises a variety of concerns, including the consequences of even partial formatting of student thinking: “Despite improvements in learning, some experts fear that these practices will lead to a standardization of learning and assessment, which could leave future generations unprepared for a constantly changing world.” (Higher Education Council 2020, 16). The expected behaviour of teachers could of course also be affected.
Somewhat similarly, Adam L.-Desjardins and Amy Tran note, in L’intelligence artificielle en éducation (2019), a demobilizing effect that could result from developments in artificial intelligence: “Overconfidence and dependence on the use of these technologies could lead to a certain intellectual laziness” (L.-Desjardins & Tran 2019). Worse, this laziness could eventually be easily exploited outside the walls (physical and virtual) of the university, notably by allowing: “some ill-intentioned powers to use them in order to achieve their political goals” (L.-Desjardins & Tran 2019). We can also fear a lack of initiative, even an intellectual apathy among students – which, I believe, will certainly not facilitate our task, on the contrary.
Indeed, the temptation to be taken by the hand may soon become very strong. In a recent and disturbing article entitled Ceci n’est pas un professeur d’université (2022), Jonathan Durand Folco reflects on the transformative role of content production software, “a ‘natural language processing’ technology that can be used to automate academic writing and research” (Durand Folco 2022). As we read the text, we can see the insidious effects that the growing presence of such technology will have on the development of the tools of thought for future generations. Other tools will be needed to counteract negative effects.
Risks for the Precarious
The coming changes will affect the entire world of education, and it is clear that the university will not escape them. As with everything else, the precarious will bear the brunt of these changes more than others.
Not only will the transformations inevitably affect the job security of many of us – but even for those who manage to stay, we will have to adjust, by modifying the format and content of our courses, by undergoing training, by constantly updating our computer tools … and all this, always or almost always at our own expense, both in time and money. We also risk being hit hard by task fragmentation – which could happen in a number of ways, including the automated correction mentioned above.
It is also likely that we will be left out, or at best kept on the margins, of the reflection on integrating artificial intelligence into our classrooms and our teaching. We can therefore predict that the solutions will be tailored to the needs of professors (who after all populate the administration), without any regard for our own. This is at least what we can observe these days at the Université de Montréal with the implementation of the CHAL (Création Horaires et Assignation Locaux) software, a course-schedule management system conceived entirely in a professorial logic, one that takes no account at all of the reality of lecturers … and we are talking about a level of complexity at least an order of magnitude lower than artificial intelligence.
This artificial intelligence could also threaten our intellectual authority, at least in areas where there are differences of opinion … that is, in almost all areas. Yoshua Bengio mentioned, among the potential impacts of artificial intelligence, the concentration of power and authoritarian drifts. We are obviously thinking first of dictatorships and other despotic governments – but thanks to artificial intelligence, various forms of more or less established powers could find a way to invite themselves into the classroom, whether they be commercial, political, ideological, religious or other actors. One might think that one day, expressing an opinion opposed to Microsoft, for example, on Teams or in a class run by Microsoft software, could have a negative impact on a career; or that expressing an opinion contrary to that of an ‘influencer’ or public figure (real or virtual) could be dangerous.
It could also become difficult to express an opinion that goes against a dominant discourse, whether in education or in any other sphere of society – whether it is conveyed by a state, a company involved in education, or by any other entity or dynamic with the potential to influence it, including social movements and public opinion. If, for example, I stated in class that the first atomic bomb dropped on Japan could possibly be justified, but certainly not the second, while the truth conveyed to the students by artificial intelligence stated that both were and that there was a consensus on the matter – and that in addition, artificial intelligence had a vast arsenal of tools at its disposal (including metavers and deepfakes), I wouldn’t stand a chance, unless of course I had an authority as strong as that of a professor … which is essentially a question of status.
In fact, my authority here would be remarkably solid, though precisely by proxy: it is the position of Edwin Oldfather Reischauer, notably in Histoire du Japon et des Japonais (1970). I wonder how the artificial intelligence would have reacted if it had been able to interact with Reischauer – but one thing is sure, he would have lost neither his job nor his ability to express his opinion … whereas there is no guarantee that a lecturer who finds themselves in a conflict of opinion with an artificial intelligence whose powers are poorly defined by their institution would keep their freedom of expression, or even their job. More than just freedom of expression, in fact, it is freedom of opinion that is at stake here – and it is already resulting, even before the coming wave of artificial intelligence, in many lecturers engaging in self-censorship, often even unconsciously. The growing presence of an unquestionable authority will only make it worse.
For me, as for many colleagues I believe, one of my fundamental roles as a teacher is to dismantle the preconceived notions of my students, in my case about Japan and Japanese people, particularly with regard to samurai and geisha. As we know, the fight against myths and half-truths is at the very heart of the mission of most if not all disciplines in the social sciences and humanities, especially in history and anthropology. Generations of historians and anthropologists have worked hard to dismantle nationalistic, identity-based or other constructs… and today, an algorithm and a few entertainment companies could produce more persistent myths!
Fortunately, we at the Université de Montréal should be protected from many of these excesses (while others will no doubt experience them), at least for the next few years. Indeed, by adopting a statement of principles on freedom of expression in the summer of 2021, the Université de Montréal has been a pioneer – and what’s more, the statement was the subject of broad consultation and a fairly strong community consensus. However, the legislator now requires the Université de Montréal, like all Quebec universities, to adopt a policy on academic freedom by the summer of 2023 (as per Bill 32: An Act respecting academic freedom in the university sector). At the Université de Montréal, the exercise is being carried out with the necessary seriousness, but the fact remains that this requirement, in my opinion, puts far too much emphasis on the game of denunciation to the detriment of promoting good practices, making the dynamic unnecessarily confrontational – with, once again, the most immediate risks borne by precarious workers like us, as the recent past has shown all too clearly.
The sense of security felt by members of the Université de Montréal community following the adoption of the statement of principles may be short-lived, however, particularly given the potential influence of external actors: one need only think of the interference of the Quebec government in university governance, precisely through the law just mentioned. We can also think of the impact that lawsuits brought by students could have, a trend that has been on the rise in recent years, including in Canada, as explained by Stephen G. Ross and Colleen Mackeigan, who describe it as an “emerging area in education law” (Ross & Mackeigan 2019). The lawsuits have so far been limited, they say, to questions of contractual relations, but one can imagine that the subjects will eventually expand to other issues, including privacy, image rights and copyright – which could thus also involve teachers, or even private companies dissatisfied with the space allocated to them by universities. As for Bill 32, it may have been adopted with good intentions (amid an almost total lack of understanding from the academic world), but it is still undeniably a form of interference – and a future government may well want to use the same kind of tool for much less noble purposes. We can still hope that civic and academic mobilization, such as the one that produced the 2018 Declaration
for a Responsible Development of Artificial Intelligence, itself an initiative of the Université de Montréal, can help protect us all from the most harmful effects and the most serious abuses. Indeed, one of its 10 principles, the 4th, called the principle of solidarity, suggests that: “The development of AIS must be compatible with the maintenance of links of solidarity between people and generations”, while the 6th, the principle of equity, states that: “The development and use of AIS must contribute to the realization of a just and equitable society” – and specifies in particular (2nd point) that “The development of AIS must contribute to the elimination of relations of domination between persons and groups based on differences in power, wealth or knowledge.”
Another of the risks mentioned by Yoshua Bengio is that artificial intelligence will one day develop its own selection criteria, without the need for human input … a form, in short, of artificial selection. Fortunately, we are far from this, and humans are and should remain at the heart of universities – but whatever the dangers that lie ahead, professors are still much better protected than lecturers.
In 1994, Claude Lessard, then Dean of the Faculty of Education at the Université de Montréal, declared at a FNEEQ symposium that we must “civilize precarity” (Lessard 1995 (1994), 99) – before giving a portrait of it which, nearly 30 years later, unfortunately remains all too relevant: “Civilizing precarity means finding new mechanisms of integration and developing a sense of belonging to the institution for those among the teachers who will not be able to enjoy a strong and permanent employment relationship. It means giving them a real place in the pedagogical decision-making process and in the decision-making process” (Lessard 1995 (1994), 99). In education as elsewhere, the most convincing impacts of artificial intelligence will come, says Sahir Dhalla in The problem with AI that acts like you (2022), from a “cooperation between AI and humans”. Let’s hope that lecturers will be invited to take part in that cooperation.
Bibliography
Bengio, Yoshua [2019] : « Intelligence artificielle, apprentissage profond, logiciel libre et bien commun », Actes du 6e Colloque libre de l’ADTE (4 juin 2019, Université Laval). Association pour le développement technologique en éducation https://adte.ca/actes-du-6e-colloque-libre-de-ladte-2019/
Bernier, Émilie [2022] : « À l’orée du trou noir. La perspective des personnes chargées de cours et la démocratisation de l’université ». Les enseignantes et enseignants contractuels dans l’université du XXIe siècle (Acfas)
https://www.acfas.ca/sites/default/files/2022-11/Acfas_Cahier-scientifique_no120_numerique_VF_nov2022.pdf
Conseil supérieur de l’éducation [2020] : L’intelligence artificielle en éducation : un aperçu des possibilités et des enjeux (Document préparatoire pour le Rapport sur l’état et les besoins de l’éducation 2018-2020)
https://www.cse.gouv.qc.ca/wp-content/uploads/2020/11/50-2113-ER-intelligence-artificielle-en-education.pdf
Dhalla, Sahir [2022] : « The problem with AI that acts like you – Human-like AI models raise questions of bias and our right to personal data », The Varsity
https://thevarsity.ca/2022/11/20/ethics-of-human-like-ai/
[2018] : Déclaration de Montréal pour un développement responsable de l’intelligence artificielle
https://www.declarationmontreal-iaresponsable.com/la-declaration
Durand Folco, Jonathan [2022] : « Ceci n’est pas un professeur d’université », Le Devoir, 15 décembre 2022
https://www.ledevoir.com/opinion/idees/774674/idees-ceci-n-est-pas-un-professeur-d-universite
L.-Desjardins, Adam & Tran, Amy [2019] : « L’intelligence artificielle en éducation », L’école branchée
https://ecolebranchee.com/dossier-intelligence-artificielle-education/
Lessard, Claude [1995 (1994)] : « La précarisation de l’enseignement », Actes du colloque sur la précarité dans l’enseignement, FNEEQ-CSN.
Karsenti, Thierry [2018] : « Intelligence artificielle en éducation : L’urgence de préparer les futurs enseignants aujourd’hui pour l’école de demain ? »,
Formation et profession, vol. 26, no. 3 – pp. 112-119
http://formationprofession.org/pages/article/26/21/a159
Means, Alexander J. [2019] : « Precarity and the Precaritization of Teaching » Encyclopedia of Teacher Education. Springer
https://www.researchgate.net/publication/333965390_Precarity_and_the_Precaritization_of_Teaching
Ministère de l’Économie, de la Science et de l’Innovation [2016] : Plan d’action en économie numérique.
https://cdn-contenu.quebec.ca/cdn-contenu/adm/min/economie/publications-adm/plans-action/PL_plan_action_economie_numerique_2016-2021.pdf
Ministère de l’Éducation et de l’Enseignement supérieur [2019] : Cadre de référence de la compétence numérique
http://www.education.gouv.qc.ca/fileadmin/site_web/documents/ministere/Cadre-reference-competence-num.pdf
Parnas, David Lorge [2017] : « Inside Risks – the Real Risks of Artificial Intelligence ». Communications of the ACM, vol. 60, no. 10
http://www.csl.sri.com/users/neumann/cacm242.pdf
Reischauer, Edwin Oldfather [1970] & Dubreuil, Richard, trad. [1973] : Histoire du Japon et des Japonais, vol. 1 – ‘des Origines à 1945’. Seuil.
Ross, Stephen G. & Mackeigan, Colleen [2019] : « Canada: Claims Against Educational Institutions – It’s Not Just Academic Anymore », Mondaq [Rogers Partners LLP]
https://www.mondaq.com/canada/education/795454/claims-against-educational-institutions-it39s-not-just-academic-anymore
Schwirtz, Michael et al. [2022] : « How Putin’s War in Ukraine Became a Catastrophe for Russia », The New York Times, 16 décembre 2022
https://www.nytimes.com/interactive/2022/12/16/world/europe/russia-putin-war-failures-ukraine.html
Taddei, François [2018] : Apprendre au XXIe siècle, Calmann Levy.
Wang, Hao-Chuan ; Chang, Chun-Yen & Li, Tsai-Yen [2008] : « Assessing creative problem-solving with automated text grading », Computers & Education, vol. 51 – pp. 1450-1466
https://doi.org/10.1016/j.compedu.2008.01.006
Wood, Maria [2021] : « What Are the Risks of Algorithmic Bias in Higher Education? », Every Learner, Everywhere
https://www.everylearnereverywhere.org/blog/what-are-the-risks-of-algorithmic-bias-in-higher-education
| 2023-01-04T00:00:00 |
https://cha-shc.ca/precarity/precarity-and-artificial-intelligence/
|
[
{
"date": "2023/01/04",
"position": 57,
"query": "artificial intelligence wages"
}
] |
|
Amazing Image Upscalers Use AI for Optimal Results
|
Amazing Image Upscalers Use AI for Optimal Results
|
https://www.smartdatacollective.com
|
[
"Toni Allen"
] |
AI technology has led to many remarkable changes in the graphic arts sector, including helping artists upscale images more easily.
|
We previously stated that AI is changing the state of graphic design. A growing number of new startups use AI technology to create excellent graphics.
Of course, the biggest story of 2022 was that AI-generated art was making major headway. However, AI can be even more important for more mundane graphic design tasks.
What should you do when you want to share a picture of a Christmas event with your friends, but the picture is blurred? You can use an AI-based image upscaler, which applies AI algorithms to make an ordinary image look as sharp as you want. VanceAI Anime Upscaler is another upscaling product from VanceAI, built to produce clear anime pictures. You can create elegant photos to make your Christmas event more memorable. Let’s look at VanceAI Anime Upscaler and VanceAI Image Upscaler in detail.
This is an example of ways that graphic artists and illustrators are using AI to do their jobs more effectively.
Detailed Introduction to VanceAI Image Upscaler and VanceAI Anime Upscaler
VanceAI Image Upscaler and VanceAI Anime Upscaler are both AI-based image upscaling products from VanceAI that use AI technology to produce a fully detailed, informative image in one click. Both upscalers are fast and deliver high-quality results within seconds, no matter how low-resolution your photos are.
What’s amazing in VanceAI Image Upscaler?
VanceAI Image Upscaler is an AI-based upscaler from VanceAI that uses AI algorithms to handle object details intelligently. It is an all-in-one image upscaler: you can upscale almost any kind of image, such as animal photographs, landscapes, night photographs, anime, art and CG images, general portraits, and historical photos. This online photo enlarger offers several AI models and up to 8x photo enlargement in a single click. Within seconds it can fill in the missing pixels of a low-grade image, making the details clear. From now on, you can make your New Year photographs more elegant by upscaling them.
Here are some special features of VanceAI Anime Upscaler.
The most important thing about VanceAI Anime Upscaler is that it is designed specifically to upscale anime images. It is based on Waifu2x technology, which makes it well suited to upscaling anime images by up to 8x. This online anime upscaler lets you easily create high-resolution anime wallpapers, waifu pictures, anime profile photos, and iconic posters.
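To make concrete what “filling in missing pixels” involves: the simplest classical method, nearest-neighbour interpolation, just copies existing pixels, which is the baseline that learned upscalers such as Waifu2x improve on by predicting plausible detail instead. Here is a minimal, self-contained sketch in Python (illustrative only; this is not VanceAI’s or Waifu2x’s actual algorithm):

```python
def upscale_nearest(pixels, factor):
    """Nearest-neighbour upscaling: each source pixel becomes a factor x factor block.

    `pixels` is a 2-D list (rows of pixel values). Classical interpolation like
    this only stretches existing information; learned upscalers instead predict
    the missing detail from patterns seen in training data.
    """
    out = []
    for row in pixels:
        stretched = [p for p in row for _ in range(factor)]      # widen the row
        out.extend([list(stretched) for _ in range(factor)])     # repeat it vertically
    return out

tiny = [[1, 2],
        [3, 4]]
big = upscale_nearest(tiny, 2)  # 2x enlargement of a 2x2 "image"
# big == [[1, 1, 2, 2],
#         [1, 1, 2, 2],
#         [3, 3, 4, 4],
#         [3, 3, 4, 4]]
```

Bicubic interpolation replaces these block copies with weighted averages of neighbouring pixels; learned upscalers go further and synthesize texture that was never in the input.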
Other AI-based tools offered by VanceAI
Some other AI-based tools offered by VanceAI are AI Photo Enhancer, AI Image Denoiser, AI JPEG Artifacts Remover, AI Image Resizer, AI Passport Photo Maker, AI Image Compressor, AI Portrait Retoucher, AI Image Enhancer, AI Photo Dehaze, and, most recently, VanceAI PC, all aimed at producing polished, high-resolution photographs.
Features of VanceAI Image Upscaler and VanceAI Anime Upscaler
Here are some notable features of VanceAI Image Upscaler and VanceAI Anime Upscaler.
Based on AI algorithms for photo enlarging online
Waifu2x-based Anime Upscaler for enhancing anime images online
Makes anime images sharp and crisp with one click
Upgrades resolution up to 4K to make your Christmas pictures stand out
Super-fast at upscaling anime images
Also helps denoise and sharpen images with one click
Intelligently upscales images without losing quality
Automatically fills in the missing pixels in pictures
Pros & Cons
Pros
Anime image enhancement with batch-mode processing
Speedily upscale any low-quality picture
100% data security and easy-to-use
Cons
Different AI upscaling models
There are five AI upscaling models – “Standard”, “Anime”, “Art & CG”, “Text”, and “Low-resolution & Compressed” – for upscaling any ordinary image with VanceAI Image Upscaler. Here’s a little more about each model.
Standard
“Standard” is the best AI upscaling model for any kind of image, such as paintings, illustrations, text images, general portraits, product images, and simple camera shots.
Anime
“Anime” is the perfect AI upscaling model for anime images, waifu pictures, cartoon wallpapers, or iconic characters to make them HD.
Art & CG
Art & CG is the best AI upscaling model for computer graphics or designs, CG images, drawings, or art.
Text
“Text” is the best AI upscaling model for scanned text-based images or any image that includes text or phrases.
Low-resolution & Compressed
A perfect choice for enlarging any low-resolution & compressed portrait, including animal photography, wallpapers, scenery photographs, and profile pictures.
VanceAI PC
VanceAI PC is easy-to-use software from VanceAI that offers all the above-mentioned AI upscaling models for upscaling images on your PC.
How to Use VanceAI Image Upscaler and VanceAI Anime Upscaler
Both products operate the same way, so we will walk through VanceAI Image Upscaler only.
Method one: Go to the VanceAI Image Upscaler’s Product Page
Step 1: Go to the VanceAI Image Upscaler’s Product Page and upload the Christmas photo you want to enhance by clicking the “Upload Image” option.
Step 2: Choose the AI model and the scale you think are right for your Christmas image. Then click the “Start to Process” button to AI upscale your Christmas picture online.
Step 3: When your clear picture is ready to download, save it simply by clicking the “Download Image” option. That’s it.
There is also an alternative way to enlarge photos with VanceAI Image Upscaler, which you can try as well.
Method two: Visit VanceAI Upscaler Workspace
Visit the VanceAI Upscaler Workspace and click the upload option, or simply drag in your picture for AI upscaling online. Click the “Start to Process” option and get your enhanced image back within seconds. Then save the fully detailed image by clicking the “Download” button.
The same steps apply to VanceAI Anime Upscaler: you can perfect any anime image by following the same instructions.
VanceAI Image Upscaler and VanceAI Anime Upscaler Review
In this section, we look at how these AI-based image upscalers performed.
Look at the image below, which may have been captured at a Christmas event. The uploaded image looks blurred and the details are imprecise; even the words “Merry Christmas” written on the mug look blurry when the picture is zoomed in. The processed image, by contrast, is sharp, and its resolution is fully upgraded because it was upscaled by up to 8x. It is free of pixelation, and everything in it is clear. After upscaling, this picture is well worth sharing at a Christmas event.
Here is a Santa Claus that is upscaled up to 8x using our amazing AI-based Image Upscaler.
See the uploaded Santa image, which looks blurry: its resolution is poor, and the blurriness hides the details. The processed Santa picture, however, is sharp, with good color contrast. There is no blurriness or pixelation, and the resolution is fully upgraded. This Santa picture is sharp enough to look elegant, and anyone would be happy to use it to say Merry Christmas.
When you have only ordinary photographs, try this online AI-based image upscaler. Within seconds, you can make your pictures sharp and clear, whether for New Year or Christmas wishes or to create a storybook, animated clip, or cartoon movie.
Conclusion
In conclusion, VanceAI Image Upscaler and VanceAI Anime Upscaler are both cloud-based upscaling products from VanceAI that let you turn an ordinary portrait photograph into a DSLR-level, elegant picture. From now on, you can make a blurry picture crisp and clear within seconds by AI upscaling it.
FAQs
What is VanceAI Passport Photo Maker?
VanceAI Passport Photo Maker is a simple yet powerful AI passport-photo maker from VanceAI. You can edit your passport photo in no time: crop it, adjust it, and make it ready to use, creating a professional passport-size photo with a single click.
What Can I Do with BGremover?
BGremover is an AI-based tool from VanceAI that removes an image background in a matter of seconds. You can watch this video to see how AI accomplishes this. Its trained AI background-removal technology helps you change or edit image backgrounds like a pro; within seconds you can swap the image background for any solid color.
| 2023-01-04T00:00:00 |
https://www.smartdatacollective.com/image-upscalers-use-ai-for-optimal-results/
|
[
{
"date": "2023/01/04",
"position": 32,
"query": "artificial intelligence graphic design"
}
] |
|
Will automation pull us through the global labour shortage?
|
Will automation pull us through the global labour shortage?
|
https://www.weforum.org
|
[] |
Automation could speed up the manufacturing process, make the workforce more inclusive, stretch scarce talent and create more jobs than it eliminates.
|
Developed and developing economies are dealing with labour shortages.
Adopting automation speeds up the manufacturing process and stretches scarce talent.
The World Economic Forum estimates that by 2025, technologies such as automation will create at least 12 million more jobs than they eliminate.
The world is facing a worker shortage. It may be hard to believe as the United Nations announces that the global population has topped 8 billion people, but it is a growing problem across mature and developing economies. This shortage spans industries and every level of technological development and has serious implications for the global economy.
In developed economies, such as the United States, Canada, Italy, Germany, Japan, Australia, United Kingdom and France, the generational bulge of Baby Boomers is ageing out of the workforce and moving into retirement. Smaller succeeding generations mean that there are fewer people available to fill these newly vacant roles. Stereotypes about what manufacturing work is like, a mismatch of skills needed versus skills possessed and increasing pressure for young adults to pursue college degrees in lieu of entering the workforce, all contribute to the lack of workers in critical jobs.
For years, this problem was often addressed by offshoring manufacturing to lower-cost countries. But now even manufacturing powerhouses, such as China, India, Taiwan, Bangladesh and Vietnam, along with Argentina, Colombia, and Turkey, are experiencing worker shortages and a discrepancy of skills that threaten a variety of industries. In Thailand, 500,000 more migrant workers from neighbouring countries are needed to fill roles in food processing, construction and agriculture. Poland also faces pressure as the nearby war has kept Ukrainian migrants from working in Polish factories. Around the globe, declining birthrates ensure that this shortage will remain a persistent issue.
Automation can amplify the workforce
The numbers are not in our favour without changing our approach. As we look towards the future of manufacturing, the best solution, for many reasons, is to empower and upskill the available labour force to amplify their work. The winning hand is a highly trained, engaged workforce working in concert with cutting-edge technology and automation.
Automation not only speeds up the manufacturing process; it also stretches scarce talent. Going forward across industries, automation, the industrial internet of things (IIoT), virtual and augmented reality (AR), and machines equipped with artificial intelligence effectively give workers superpowers. These advancements help create more efficient processes, improve safety, shorten training time and relieve workforce pressures.
Automation can make the workforce more inclusive
Basic automation can also enable machines to carry out heavy lifting and other physically challenging tasks, making manufacturing jobs more inclusive and realistic for a wider ability group. It also improves safety by keeping humans out of hazardous environments, such as chemical or steel manufacturing, and reduces the risk of repetitive injuries and accidents.
With AR technologies, training can be done the moment it is needed, reducing the learning curve to get workers up to speed and allowing for the right training at the right time, to be able to address and fix issues that surface. If something breaks and an expert technician from across the country or even from another part of the world is required to fix the problem, the delays and travel dollars add up. Leveraging AR and secure remote access, an expert can effectively 'teleport' onto the factory floor to guide an on-site worker through the process of solving the problem. This sole expert can work across multiple factories, all without leaving their main job site or, potentially, the comfort of their own home. This technology also allows 24/7 operations, with skilled operators handing off remote supervision duties to colleagues across time zones.
Automation can create more jobs than it replaces
The worries that have surfaced about automation replacing workers or increasing wage inequality are well-intentioned but unfounded. Automation has been shown to create as many jobs as it replaces and wage stagnation and inequality are driven by a variety of factors and cannot be blamed on automation alone. It is important to recognize that automation, along with other innovative technologies, makes manufacturing more attractive to workers. These jobs are family-sustaining and have an upward path via upskilling – in addition to being safer, less repetitive and more inclusive by reducing or eliminating strength requirements for different roles.
There are, however, some barriers to the widespread adoption of automation, in particular for the developing world. The digital divide, for example, will clearly demarcate companies taking advantage of the efficiency, cost savings and workforce benefits of these automation technologies and those that can’t. Regions and countries that do not upgrade and distribute their internet access will be left behind. But this potential for economic gaps is not a reason to fear automation. Rather, it should refocus our efforts on investing in building critical infrastructure and educational opportunities to grow economies, allowing enterprises to take advantage of the latest automation technologies.
The World Economic Forum estimates that by 2025, technologies such as artificial intelligence and automation will create at least 12 million more jobs than they eliminate – a sign that, in the long run, increasing the sophistication of technology and automation across industries will be a net positive for society. There have been four major changes in how we work – from hunter-gatherers to settled agriculture to the industrial revolution to the information age. Not one of these major transitions has increased unemployment. On the contrary, the innovations behind these new technologies will drive the demand for labour. As we move into this next age – the informed industrial age – there will again be a shift in the types of jobs needed.
Henry Ford couldn’t have conceived of the need for a software engineer any more than an 18th-century farmer could have conceived of a diesel mechanic. Jobs of the future will not look like jobs of the past and that is OK. Trusting in the superpowers created by automation will drive innovation and efficiency, resulting in more opportunities for the workforce of tomorrow.
| 2023-01-05T00:00:00 |
https://www.weforum.org/stories/2023/01/how-automation-will-pull-us-through-the-labour-shortage-davos23/
|
[
{
"date": "2023/01/05",
"position": 13,
"query": "AI replacing workers"
},
{
"date": "2023/01/05",
"position": 8,
"query": "AI job creation vs elimination"
},
{
"date": "2023/01/05",
"position": 64,
"query": "artificial intelligence wages"
}
] |
|
The Age of Human Augmentation | neoexogenesis
|
The Age of Human Augmentation
|
https://neoexogenesis.com
|
[] |
Artificial intelligence inherently threatens to replace human workers, and Ai companies have swiftly responded by increasing productivity per worker instead of ...
|
The Age of Human Augmentation January 5, 2023
The rise of artificial intelligence as the new electricity of the 21st century raises valid concerns about machines overtaking human jobs. Human beings’ shortcomings become apparent when looking at the growing task-specific superhuman performance achieved by advanced artificial intelligence. Outspoken AI critics quickly point to human intuition, empathy, and ingenuity as compensating traits – a weak argument, considering that many of the tasks artificial intelligence has already taken over required neither compassion nor imagination.
The ongoing quest to augment human beings with artificial intelligence tools to improve efficiency threatens the very nature of work precisely through the resulting automation. The real question of our time revolves around how the future of work will look in the age of human augmentation.
Fundamentally, a company’s profit formula incentivizes efficiency optimization of the existing business, which drives a constant demand for optimizing business operations. Historically, management consultants have served that need very well by establishing best practices in companies across industries. With the ongoing shift to digitalization, consulting companies adapted accordingly and now offer digital transformation as a service, completely overlooking the rise of human augmentation, which fundamentally questions the nature of work.
Artificial intelligence inherently threatens to replace human workers, and AI companies have swiftly responded by framing their products as increasing productivity per worker rather than replacing anyone. The argument, however, purposely omits the implication: operating advanced artificial intelligence eliminates the need for non-managerial jobs by fully automating digital, and increasingly industrial, workflows through blockchain and IoT.
The problem was never the replacement of existing jobs but the many jobs never created. For example, Amazon already operates over 50,000 robots in its warehouses, supervised by only a handful of technical operators. With further progress in artificial intelligence, warehouses and logistics will eventually operate fully automatically under an AI system’s supervision.
Amazon works hard to lower its TCO to a point where other companies that employ humans can no longer compete. Regarding employment, the outlook becomes grimmer considering that the profit formula incentivizes efficiency optimization and the capital market rewards it accordingly. Those who do not automate operations will diminish, while those who do automate no longer need human labor. The nearly stagnant employment rate in the U.S. retail industry speaks for itself. The tools for complete operations automation continuously improve, accelerating the transition towards a digital economy that tightly integrates with the physical world to ensure fully automated workflows.
Artificial intelligence does not hold the future of work; instead, it amplifies the diminishing value of human labor. Advanced AI companies generate greater profit per employee, and, again, the capital market rewards them accordingly. Uber made $750,000 of revenue per employee in 2017, and it is far from having fully optimized capital efficiency. Letting advanced artificial intelligence do the hard work is just one side of the equation. Equally important, outsourcing work to “contractors” helps increase the revenue-per-employee metric, since contractors do not count towards the official workforce.
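The revenue-per-employee mechanics described here are easy to make concrete. In the sketch below, only the $750,000-per-employee figure comes from the text; the headcounts and total revenue are hypothetical, chosen to reproduce it:

```python
def revenue_per_employee(revenue, employees, contractors=0, count_contractors=False):
    """Revenue divided by headcount; contractors are excluded unless explicitly counted."""
    headcount = employees + (contractors if count_contractors else 0)
    return revenue / headcount

# Hypothetical figures: 10,000 employees and 1,000,000 contractor drivers,
# with total revenue chosen so the official metric matches $750k/employee.
revenue = 7_500_000_000  # $7.5B

official = revenue_per_employee(revenue, 10_000)                           # 750000.0
with_contractors = revenue_per_employee(revenue, 10_000, 1_000_000, True)  # ~7425.74

print(official, with_contractors)
```

Moving a million drivers off the payroll and into “contractor” status raises the reported metric by roughly two orders of magnitude without changing who does the work.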
The payout to “contractor” drivers remains one of Uber’s main expenses on its balance sheet. The research focuses on autonomous driving to remove these drivers and the related payout from the balance sheet. It is not the technology that drives Uber to pursue autonomous driving but the goal of maximizing profit.
Ultimately, companies will deploy artificial intelligence for human augmentation to drive productivity per head to unseen heights. Human augmentation inherently threatens traditional employment while opening entirely new opportunities. The fundamental danger to human labor comes from the emerging reality that a single AI-augmented worker can easily outperform an entire team. The same logic applies to advanced robotics, since a warehouse full of robots barely needs more than one human supervisor.
The most significant opportunity for human augmentation emerges from human beings becoming capable of solving previously intractable problems. Artificial intelligence amplifies human abilities while simultaneously mitigating human shortcomings.
The age of human augmentation will unavoidably begin with a struggle to define the very meaning of work, especially when mundane and manual tasks become increasingly automated due to incentives from the financial markets.
Automation systematically creates net-negative employment, meaning more jobs are eliminated than created, eventually leading to systematic underemployment.
The U.S.’s official unemployment rate (U3) remains at a historic low of around 3%, but the underemployment rate remains high at 12–14%. Even more troubling, the official U3 rate does not count people who are neither working nor looking for work anymore. These are captured in the broader U6 measure, which stays at a relatively high 7% of the labor force.
This high underemployment rate creates a systemic risk of mass unemployment whenever the real economy contracts, because temporary and part-time workers are usually the first to be let go, even by otherwise healthy companies.
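A minimal sketch of how these measures relate, using invented counts rather than actual BLS figures; the definitions follow the spirit of the BLS alternative-measures table cited in the sources:

```python
# Sketch with hypothetical counts of why the headline U-3 rate can stay
# low while broader measures of labor underutilization stay high.

def u3(unemployed, labor_force):
    """Official unemployment rate: unemployed as a share of the labor force."""
    return 100 * unemployed / labor_force

def u6(unemployed, marginally_attached, part_time_econ, labor_force):
    """Broadest measure: adds marginally attached workers (who want work but
    stopped looking) and those working part-time for economic reasons."""
    lf = labor_force + marginally_attached
    return 100 * (unemployed + marginally_attached + part_time_econ) / lf

labor_force = 160_000_000
unemployed = 6_000_000
marginally_attached = 1_500_000   # want work, no longer searching
part_time_econ = 4_500_000        # part-time, want full-time work

print(round(u3(unemployed, labor_force), 1))  # 3.8
print(round(u6(unemployed, marginally_attached, part_time_econ, labor_force), 1))  # 7.4
```

With these invented counts, the economy looks healthy by the headline number while roughly twice as many people are underutilized.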
Augmenting the skilled workforce with artificial intelligence unavoidably reduces demand for the unskilled workforce and aggravates the already prevalent social divide. Responding to this widening divide with a machine tax on automation will only incentivize more sophisticated tax avoidance while leaving the underlying cause untouched. Distributing an unconditional income to society may buy some time to breathe and think, but it ultimately does not answer what work will look like in the age of human augmentation.
Increasingly, work becomes a design activity augmented by artificial intelligence, with the execution delegated to automation technology. Therefore, creative skills, design skills, and system thinking will only increase in value while traditional trade skills will be replicated in machines.
Human augmentation through artificial intelligence makes possible bold, ambitious goals that were previously unattainable, because trivial and mundane tasks become massively automated. The bold quest-makers will rule the age of human augmentation by setting ambitious goals that demand an augmented workforce.
From this perspective, the quest to colonize Mars suddenly looks like a solution: a large-scale employment program with a high chance of receiving government subsidies, precisely because space-colonization projects require large numbers of skilled, augmented workers.
However, closing the widening social divide would require a seismic shift in organizational ownership away from investor ownership towards employee ownership to change the nature of financial incentives. Fundamentally, every owner structures financial incentives in an organization to create value for the owner.
The age of human augmentation may give rise to a humankind divided, or it may give rise to a new era of shared prosperity; the decision of who owns the future rests with us.
Marvin F. L. Hansen
History:
First published: Nov 27, 2018
Updated: Jan 20, 2021
Republished: Jan 5, 2023 on Medium.com
Moved to personal blog on March 16, 2024
Sources U.S. Bureau of Labor Statistics, Alternative measures of labor underutilization https://www.bls.gov/news.release/empsit.t15.htm
U.S. underemployment rate from July 2016 to July 2017 (by month) https://www.statista.com/statistics/205240/us-underemployment-rate/
Underemployment Takes An Outsized Toll On The Economy https://www.forbes.com/sites/eriksherman/2018/09/25/underemployment-takes-an-outsized-toll-on-the-economy-according-to-a-new-study/#3f82b1a6234e
| 2023-01-05T00:00:00 |
https://neoexogenesis.com/posts/age-of-human-augmentation/
|
[
{
"date": "2023/01/05",
"position": 45,
"query": "AI replacing workers"
},
{
"date": "2023/01/05",
"position": 8,
"query": "AI unemployment rate"
},
{
"date": "2023/01/05",
"position": 32,
"query": "AI job creation vs elimination"
}
] |
|
Industry 4.0: Robots Aren't Coming for Your Jobs
|
Industry 4.0: Robots Aren’t Coming for Your Jobs
|
https://www.flextrades.com
|
[
"Josh Erickson",
"Public Relations",
"Engagement Specialist"
] |
133 million Jobs Expected to be Created. We expect technology (like robots and larger automation processes) to eliminate a lot of jobs around the world.
|
Robots Aren’t Coming for Your Jobs
Have you heard about Industry 4.0? It’s the fourth industrial revolution. The first was about mechanization and happened in the 18th century. The second occurred during the 19th century and centered around electrification. The 20th century saw the third, which was all about computers. Now we’re in the 21st century and smack dab in the middle of the fourth industrial revolution. This revolution is about what are called cyber-physical systems – the convergence of machine and computer. Industry 4.0 is evidenced by automation, robotics, the Industrial Internet of Things (IIoT), and the ongoing move towards “lights out” manufacturing.
Most of you have heard about Industry 4.0; you just don’t realize it. This is because most of what you’ve heard has been misrepresented. In general, talk about the fourth industrial revolution starts with, “The robots are coming to take our jobs!” Does that ring a bell? I’m here to tell you, that isn’t going to happen.
133 million Jobs Expected to be Created
We expect technology (like robots and larger automation processes) to eliminate a lot of jobs around the world. According to an oft-cited study conducted jointly by Hays and Oxford Economics, it can be expected that technology will “cull” 75 million jobs globally by the end of this decade. That’s an undeniably huge number, so why am I saying that the worries about robots taking jobs are being misrepresented? Because an even larger number mentioned in that same study never seems to get the same amount of attention. That number is 133 million, and it refers to the number of jobs we expect to be CREATED by technology during that same time frame. That’s a nearly 2:1 ratio, meaning that robots and technology are expected to create roughly 77% more jobs worldwide than they eliminate!
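The study's headline figures make the ratio easy to check:

```python
# Quick check of the study's headline numbers quoted above.
eliminated = 75_000_000
created = 133_000_000

net = created - eliminated
ratio = created / eliminated

print(net)              # 58000000 net new jobs
print(round(ratio, 2))  # 1.77, i.e. closer to 1.8:1 than a full 2:1
```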
Why People are Worried
Why, then, is everybody so worried about robots? In my opinion, it’s mostly because people inherently dislike change. Technology’s biggest benefit to industry (whether manufacturing, finance, retail, etc.) is to move the variable (in most cases, that’s the human element – you) further and further from where machine (mill, lathe, pen, phone call, point of sale, etc.) and material (metal, wood, receipt, service, etc.) meet. This is because that intersection is where errors, inefficiencies, and injuries happen most often. By moving that wild card (you), technology can help deliver better results while simultaneously making jobs safer.
Technology will Change Today’s Jobs
There’s a silver lining – technology is making our human jobs easier, safer, and more secure every day. So, what’s the bad news? It’s that technology will cause those same jobs to change continuously and consistently throughout a career. And that’s not going to change. This is also why a skills gap exists today. We have the jobs. We have the people to fill them. But those people don’t currently possess the skills needed to fill those jobs. The skills gap has led to a hiring shortfall of over 2 million people in American manufacturing alone!
Opportunities for Employers
What does this mean for you? If you’re an employer, it means that your workforce headaches aren’t going away anytime soon. FlexTrades can help with that, if needed. Visit our website to learn more about our manufacturing solutions.
Opportunities for Employees
If you’re an employee, this means that the robots are making more employment opportunities for you than ever before. McKinsey expects that somewhere between 75 million and 375 million workers will eventually be “displaced” by technology. The sheer scale of opportunity for career advancement for workers worldwide is mind-boggling when you think about it. You just need to keep growing your knowledge and skills, along with the technological advances of your industry, to ensure that you can benefit.
Already in manufacturing or the skilled trades? We could be a good employment option for you! Browse our jobs and bookmark our blog page.
Do you have a topic you’d like to learn more about? Send it to our Writing Team and we’ll try to cover it in a future blog.
| 2023-01-05T00:00:00 |
2023/01/05
|
https://www.flextrades.com/blog/robots-arent-coming-for-your-jobs/
|
[
{
"date": "2023/01/05",
"position": 20,
"query": "AI job creation vs elimination"
}
] |
Intelligent Automation Drives the Future of Work, AI and ...
|
Intelligent Automation Drives the Future of Work, AI and Humans Work Side by Side: WorkFusion’s Predictions for 2023
|
https://www.workfusion.com
|
[] |
Intelligent Automation has the promise to help organizations combat the talent shortage, reduce employee burnout, increase capacity, mitigate risk, and help ...
|
For the past several years, there has been a deluge of macroeconomic events that have impacted our businesses and our lives. The pandemic forced digital transformation efforts to go into hyperdrive. Then, the Great Resignation wave rapidly moved across the labor force, exacerbating the pains of pandemic-affected companies. Next came the Ukraine-Russia war, which upended the global economy. Now, the looming recession has more organizations looking at where they can cut costs and how they can do more with less.
This confluence of events has created a talent dilemma, with no end in sight, and forced businesses to consider alternative sources of labor.
However, out of chaos comes opportunity, and that is where our 2023 predictions focus. We have spoken to business and technology leaders across the company to summarize what they see on the horizon for the year ahead. Our predictions look at broad technology trends and home in on what our financial crime experts anticipate the year ahead will bring to the banking and financial services industry.
1. A new type of “hybrid” work emerges as humans and AI work side by side
As the labor crisis extends into 2023 and beyond, employees will move beyond the fear of AI “coming for their jobs” and more readily adopt AI as their alternative colleague to form “fusion teams.” These fusion teams will have humans and AI working together. As organizations have struggled to fill open job requisitions, existing employees have had to pick up the slack, leading to burnout and mistakes. By leveraging an AI/ML-enabled digital workforce, both businesses and employees will reap the benefits.
2. AI gets responsible and explainable
AI is being used increasingly to make all types of decisions. Some of these decisions have more impact or importance on society than others. If a tool doesn’t write perfect copy (see ChatGPT) or an app doesn’t recognize a face, it’s not ideal and can impact a user experience — however, it has minimal impact on society. On the other hand, if AI is deciding what crops to pick, deciding who gets a loan from a bank, or deciding if someone committed a crime, these types of decisions can wreak havoc on people’s lives. Increasing regulatory scrutiny (Blueprint for an AI Bill of Rights and The AI Act) means that AI will need to be explainable. Explainable algorithms help organizations understand how the AI makes its decisions. For example, among financial institutions, practices like Model Risk Management (MRM) are about reducing the risk to the business and helping explain the AI. With explainable and responsible AI, you reduce the risk of litigation or compliance issues.
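One simple form of explainability is an additive model whose per-feature contributions can be read off directly. The following toy loan-scoring sketch uses invented feature names and weights; it illustrates the idea only, not any bank's actual model or a regulatory standard:

```python
# Illustrative sketch: a linear scoring model is "explainable" because each
# input feature's contribution to the final score can be reported directly.
# Feature names, weights, and applicant values are all invented.

WEIGHTS = {"income": 0.5, "debt_ratio": -0.8, "years_employed": 0.3}
BIAS = -0.2

def score_with_explanation(applicant: dict):
    """Return the total score plus each feature's contribution to it."""
    contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    total = BIAS + sum(contributions.values())
    return total, contributions

total, parts = score_with_explanation(
    {"income": 1.2, "debt_ratio": 0.4, "years_employed": 2.0}
)
print(round(total, 2))  # 0.68
# Each term shows *why* the decision came out the way it did:
for feature, value in parts.items():
    print(feature, round(value, 2))
```

Deep models need extra machinery (surrogate models, attribution methods) to produce comparable explanations, which is what practices like Model Risk Management try to formalize.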
3. The demise of the Financial Crime analyst
With the rise of machine learning and Intelligent Automation and its increasing adoption in Financial Services, we anticipate the demise of the “Level 1” operations analyst. Day-to-day, these positions consist largely of repeatable, monotonous and time-consuming tasks; in other words, the exact processes that are ripe to be automated. This will have two major effects. Firstly, cost savings and efficiency gains will be enormous. Secondly, and perhaps not so obvious, is that those “Level 1” analysts will now be re-directed to more valuable work streams and create more value for the business.
4. Perpetual Know Your Customer (pKYC) finally realizes its potential
Financial institutions need to know who they are doing business with to limit their exposure to bad actors. Historically, huge case volumes were expensive and required global coordination, while manual approaches often generated poor customer experiences. With that said, whether it is for cost efficiency, customer experience, regulatory compliance, or a combination of all three, in 2023 traditional banks, challenger banks, and FinTechs alike will recognize that a continuous review cycle (pKYC) is necessary.
5. Companies earnestly lead with purpose
While there are already shining stars of ESG like Patagonia, more companies will define their purpose in the products and solutions they bring to market. People, and without question Gen Z, want to buy and invest in companies that show a demonstrable impact on improving the world. For banking and financial services organizations, this means that in addition to digital transformation efforts and a great customer experience, cracking down on AML and limiting the reach of bad actors, including Russian oligarchs, will demonstrate corporate responsibility.
While we don’t know for sure what the next year will bring, we are confident that 2023 will be the year of Intelligent Automation and AI-led solutions in the workforce. Intelligent Automation has the promise to help organizations combat the talent shortage, reduce employee burnout, increase capacity, mitigate risk, and help navigate any potential pivots and pitfalls that 2023 may bring.
To all of our customers, prospects, partners, and friends, we wish you a healthy and prosperous year ahead.
To learn more about WorkFusion’s AI-led Digital Workers, please schedule a demo at your earliest convenience.
| 2023-01-05T00:00:00 |
2023/01/05
|
https://www.workfusion.com/blog/intelligent-automation-drives-the-future-of-work-ai-and-humans-work-side-by-side-workfusions-predictions-for-2023/
|
[
{
"date": "2023/01/05",
"position": 5,
"query": "future of work AI"
}
] |
AI at Work: The Impact of Artificial Intelligence on the Future ...
|
Amazon.com
|
https://www.amazon.com
|
[] |
This book compiles the insights and predictions of ChatGPT, a leading expert in the field of artificial intelligence, on how AI will shape the future of work.
|
| 2023-01-05T00:00:00 |
https://www.amazon.com/AI-Work-Artificial-Intelligence-Employment/dp/B0BRLY7JZJ
|
[
{
"date": "2023/01/05",
"position": 47,
"query": "future of work AI"
}
] |
|
AI revolution: India must embrace technology to be world leader
|
AI revolution: India must embrace technology to be world leader
|
https://www.policycircle.org
|
[
"Policy Circle Bureau"
] |
Further, companies are also faced with the problem of equipping the workforce to handle AI. Without proper training, leveraging the potential of artificial ...
|
The calls for AI regulation and concerns about its threat stem from fears about the impact on jobs, data privacy and individual rights.
India’s strides in AI: On his recent trip to India, Microsoft Chairman and CEO Satya Nadella said India’s future in technology is promising, owing to its large base of software developers and the nation’s involvement in a variety of artificial intelligence projects. India will be a frontrunner in AI and other evolving technologies, such as cloud computing, which will drive economic growth, he said.
India will become the most populous country in the world this year and has a huge pool of talent and skilled workforce. The country is the second largest contributor to the global developer ecosystem. In terms of artificial intelligence projects, India is number one. The country is already endowed with human capital to research, innovate and help technology evolve in an era which will largely be dominated by AI.
Another factor driving India’s progress towards machine learning is rising upskilling aspirations. India has twice the rate of skilling, according to LinkedIn data. Realising the potential of the Indian workforce, tech giant Microsoft is all set to expand its current business in the country and will set up its fourth data centre in Hyderabad.
An artificial intelligence hub
Artificial Intelligence is useful not just for the technology sector but also for other areas that can benefit from machine learning. Multiple industries that will benefit include healthcare where AI-powered tools and techniques can be used to improve diagnosis, treatment, and patient care while reducing costs and increasing efficiency in decision-making.
AI can also find use in finance for analysing market trends, predicting stock prices, and identifying fraudulent practices. Companies have been using AI in transportation as it can be deployed for improving traffic flow, optimising delivery routes, and developing autonomous vehicles.
Other sectors such as manufacturing and agriculture can also benefit. In fact, for countries like India where agriculture is central, AI may be used to optimise crop yields and improve the efficiency of farming operations. Currently, governments across the world face the danger of climate change, which has also resulted in crop failures. According to a 2022 global survey, adoption of AI has helped countries tackle major climate-related issues.
The government has also emphasised the importance of AI and in the Union Budget 2022-23, AI has been described as a dawning technology that can assist in scaled-up sustainable development and modernisation of the country.
India has also started using AI in defence. It has set up the Defence AI Council and the Defence AI Project Agency with an annual budget of Rs 1,000 crore. The Centre for AI and Robotics is developing an AI-based signal intelligence system for intelligence gathering. The government has announced the deployment of 140 AI-enabled sensor systems across its borders.
Challenges galore
One of the major challenges facing artificial intelligence is the biases that these machines learn from their creators. Many critics of AI and machine learning have already pointed out how artificial intelligence may reproduce human biases based on the data models provided to them. For example: many social media companies have been accused of having algorithms that promote fair skinned people over those having dark skin. Companies need to work towards alleviating this and make machine learning more diverse and tolerant.
Further, companies also face the problem of equipping the workforce to handle AI. Without proper training, leveraging the potential of artificial intelligence might not be possible. Getting the workforce ready for the AI era is a must. The government must also look into growing concerns over data privacy, algorithmic risk, and black-box decision-making.
Applied wisely, AI has the potential to become a source of global competitive advantage. Governments across the world are now investing in national AI strategies, involving both public and private sectors. India’s national AI strategy identifies healthcare, agriculture, education, smart cities, infrastructure and mobility as key areas where AI can enable development and can create greater inclusion.
| 2023-01-05T00:00:00 |
2023/01/05
|
https://www.policycircle.org/industry/india-must-embrace-tech-for-ai/
|
[
{
"date": "2023/01/05",
"position": 52,
"query": "government AI workforce policy"
}
] |
Workforce and education
|
Workforce and education
|
https://www.asce.org
|
[] |
Related ASCE policy statements. PS 377 – Science, technology, engineering, and ... government to include skilled workers in their long-term workforce plans.
|
How does ASCE define workforce and education?
ASCE supports programs that foster an appreciation for, and education in, science, technology, engineering, and mathematics (STEM). Increased awareness of and support for STEM education fields, such as engineering, is critical to developing the pipeline of civil engineers necessary to design, build, and maintain our nation’s infrastructure into the future.
Related ASCE policy statements
Talking points
To realize the full potential of the Infrastructure Investment and Jobs Act, it will be critical to have the civil engineering workforce in place or the nation will not be able to effectively utilize the influx of funding. While Congress recognizes the recent workforce needs across the construction and engineering sectors, it will be paramount for federal policymakers to encourage state and local government to include skilled workers in their long-term workforce plans. Federal agencies should partner with the engineering community to develop programs that can assist state STEM education and workforce plans. The nation must continue to foster a diverse pipeline of skilled workers, and not only bring students into the industry, but keep engineers in the United States. Furthermore, policymakers should fund targeted outreach to disadvantaged communities in order to address the ongoing gender, racial, and ethnic diversity gap that persists in the engineering field.
ASCE advocacy highlights
ASCE staff contact: Martin Hight - Senior Manager, Government Relations
| 2023-01-05T00:00:00 |
https://www.asce.org/advocacy/priority-issues/workforce-and-education
|
[
{
"date": "2023/01/05",
"position": 72,
"query": "government AI workforce policy"
}
] |
|
AI Researcher at Playground
|
AI Researcher at Playground
|
https://www.ycombinator.com
|
[] |
At Playground, we are making a superhuman AI designer. Our immediate goal is to build the designer of the future where humans and machines work together to ...
|
At Playground, we are making a superhuman AI designer.
Our immediate goal is to build the designer of the future where humans and machines work together to exceed the capability of any single designer. It has exceptional taste, it never tires, it is your partner, it has an eye for details, and hopefully it designs something you couldn’t have imagined yourself.
Read about our latest advancement: https://playground.com/pg-v3
If you join us, you’ll be an early team member in helping shape:
Our future company culture Our engineering practices People that we hire The direction & focus of our products
Researchers on the team today:
Work primarily in Python and PyTorch
Implement cutting-edge research papers to explore and learn
Persevere in experimentation, understanding that not everything they accomplish will be used in our products
Are supportive—especially when teammates are faced with new challenges
Are left to autonomously figure out the solutions to their challenges
Value clear, frequent communication (we do a lot of reading & writing)
Are naturally curious and willing to take a step to learn something they don’t have experience in
Feel a great sense of accountability to each other
Uphold best practices in engineering, security, and design
You might be the wrong fit if…
Publishing papers and earning citations is one of your top 3 priorities
Working on research that ships to users hasn’t been a priority for you
Writing production-worthy code is something you tend to avoid
Teaching or mentoring people from time to time isn’t how you want to spend your time
Skills & Experience
Background in Computer Vision and/or Natural Language Processing
Track record of implementing research you can show
Bonus: Scaling distributed systems for the purposes of training
Having a PhD is not important to us but a track record of building real things in production is
4+ years of working full-time
Here are examples of things we’ve worked on:
Building an aesthetic classifier to predict aesthetically pleasing images by user ratings
Predicting highly accurate object masks from bounding boxes
You can read our FAQ here.
| 2023-01-05T00:00:00 |
https://www.ycombinator.com/companies/playground/jobs/BPpGIYS-ai-researcher
|
[
{
"date": "2023/01/05",
"position": 18,
"query": "generative AI jobs"
}
] |
|
Get a jump start on your career with Cogito
|
Get a jump start on your career with Cogito.
|
https://www.cogitotech.com
|
[] |
Cogito offers a resounding career path. Cogito always looks for individuals with a keen interest in artificial intelligence (AI) and machine learning (ML).
|
Cogito is an Equal Opportunity Employer where Respect-For-All is a Norm
At Cogito, we value diversity and inclusion in the workplace. As an equal opportunity employer, we hire solely on the basis of a candidate’s qualifications, skills, and experience.
As a company, we strive to instill a culture of all-inclusiveness in which everyone feels valued and appreciated for their unique skills and contributions. Diversity contributes to the success of our organization and is highly encouraged in our workplace.
Cogito does not discriminate against any employee or applicant on the basis of their race, color, religion, gender, sexual orientation, gender identity or expression, national origin, age, disability, or veteran status.
| 2023-01-05T00:00:00 |
https://www.cogitotech.com/careers/?srsltid=AfmBOooBaKzSRO6iVtEdFXkdW9YGBVXrJV3JsDCdo2sDknmlmixfOkKg
|
[
{
"date": "2023/01/05",
"position": 24,
"query": "generative AI jobs"
}
] |
|
What is generative AI, and why is it suddenly everywhere? - Vox
|
What is generative AI, and why is it suddenly everywhere?
|
https://www.vox.com
|
[
"Rebecca Heilweil"
] |
That kind of journalism isn't easy. We rely on readers like you to fund our journalism. Will you support our work and become a Vox Member today? Join now.
|
Artificial intelligence is suddenly everywhere — or at least, that’s what it seems like to me: A few weeks ago, a friend mentioned in passing that his law professor had warned students not to cheat with AI on an upcoming exam. At the same time, I couldn’t escape the uncanny portraits people were generating with the image-editing app Lensa AI’s new Magic Avatar feature and then sharing on social media. A guy on Twitter even used OpenAI’s new machine learning-powered chatbot, ChatGPT, to imitate what I said on a recent podcast (which, coincidentally, was also about ChatGPT) and posted it online.
Welcome to the age of generative AI, when it’s now possible for anyone to create new, original illustrations and text by simply sending a few instructions to a computer program. Several generative AI models, including ChatGPT and an image generator called Stable Diffusion, can now be accessed online for free or for a low-cost subscription, which means people across the world can do everything from assemble a children’s book to produce computer code in just a few clicks. This tech is impressive, and it can get pretty close to writing and illustrating how a human might. Don’t believe me? Here’s a Magic School Bus short story ChatGPT wrote about Ms. Frizzle’s class trip to the Fyre Festival. And below is an illustration I asked Stable Diffusion to create about a family celebrating Hanukkah on the moon.
Stable Diffusion’s take on a lunar Hanukkah includes a menorah with five candles and plenty of oversized Christmas ornaments. (Image: Stable Diffusion)
Generative AI’s results aren’t always perfect, and we’re certainly not dealing with an all-powerful, super AI — at least for now. Sometimes its creations are flawed, inappropriate, or don’t totally make sense. If you were going to celebrate Hanukkah on the moon, after all, you probably wouldn’t depict giant Christmas ornaments strewn across the lunar surface. And you might find the original Magic School Bus stories more entertaining than my AI-generated one.
Still, even in its current form and with its current limitations, generative AI could automate some tasks humans do daily — like writing form emails or drafting simple legal contracts — and possibly make some kinds of jobs obsolete. This technology presents plenty of opportunities, but plenty of complex new challenges, too. Writing emails may suddenly have gotten a lot easier, for example, but catching cheating students has definitely gotten a lot harder.
It’s only the beginning of this tech, so it can be hard to make sense of what exactly it is capable of or how it could impact our lives. So we tried to answer a few of the biggest questions surrounding generative AI right now.
Wait, how does this AI work?
Very simply, a generative AI system is designed to produce something new based on its previous experience. Usually, this technology is developed with a technique called machine learning, which involves teaching an artificial intelligence to perform tasks by exposing it to lots and lots of data, which it “trains” on and eventually learns to mimic. ChatGPT, for example, was trained on an enormous quantity of text available on the internet, along with scripts of dialogue, so that it could imitate human conversations. Stable Diffusion is an image generator created by the startup Stability.AI that will produce an image for you based on text instructions, and was designed by feeding the AI images and their associated captions collected from the web, which allowed the AI to learn what it should “illustrate” based on the verbal commands it received.
While the particular approaches used to build generative AI models can differ, this technology is ultimately trying to reproduce human behavior, creating new content based on the content that humans have already created. In some ways, it’s like the smart compose features you see on your iPhone when you’re texting or your Gmail account when you’re typing out an email. “It learns to detect patterns in this content, which in turn allows it to generate similar but distinct content,” explains Vincent Conitzer, a computer science professor at Carnegie Mellon.
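The pattern-detection idea can be shrunk down to a toy example: a bigram model that counts which word follows which during "training" and then predicts the most common continuation. This is the same principle behind smart-compose features, minus the scale; the training sentence is invented:

```python
# A deliberately tiny sketch of the idea above: "train" on text by
# counting which word tends to follow which, then generate the most
# likely continuation seen in training.
from collections import Counter, defaultdict

def train(corpus: str) -> dict:
    """Count word-pair (bigram) frequencies in the training text."""
    counts = defaultdict(Counter)
    words = corpus.split()
    for current, nxt in zip(words, words[1:]):
        counts[current][nxt] += 1
    return counts

def predict_next(counts: dict, word: str) -> str:
    """Return the most frequent continuation seen during training."""
    return counts[word].most_common(1)[0][0]

model = train("the cat sat on the mat and the cat ran")
print(predict_next(model, "the"))  # 'cat' (seen twice after "the", vs. once for "mat")
```

Models like ChatGPT replace the bigram counter with a neural network trained on vastly more text, but the core move is the same: learn the statistics of existing content, then emit similar but distinct content.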
This method of building AI can be extremely powerful, but it also has real flaws. In one test, for example, an AI model called Galactica that Meta built to help write scientific papers suggested that the Soviet Union was the first country to put a bear in space, among several other errors and falsehoods. (The company pulled the system offline in November, after just a few days.) Lensa AI’s Magic Avatar feature, the AI portrait generator, sometimes illustrates people with additional limbs. It also has the concerning tendency to depict women without any clothing.
It’s easy to find other biases and stereotypes built into this technology, too. When the Intercept asked ChatGPT to come up with an airline passenger screening system, the AI suggested higher risk scores for people from — or who had visited — Syria and Afghanistan, among other countries. Stable Diffusion also reproduces racial and gender stereotypes, like only depicting firefighters as white men. These are not particularly new problems with this kind of AI, as Abeba Birhane and Deborah Raji recently wrote in Wired. “People get hurt from the very practical ways such models fall short in deployment, and these failures are the result of their builders’ choices — decisions we must hold them accountable for,” they wrote.
Who is creating this AI, and why?
Generative AI isn’t free out of the goodness of tech companies’ hearts. These systems are free because the companies building them want to improve their models and technology, and people playing around with trial versions of the software give these companies, in turn, even more training data. Operating the computing systems to build artificial intelligence models can be extremely expensive, and while companies aren’t always upfront about their own expenses, costs can stretch into the tens of millions of dollars. AI developers want to eventually sell and license their technology for a profit.
There are already hints about what this new generative AI industry could look like. OpenAI, which developed the DALL-E and ChatGPT systems, operates under a capped-profit model, and plans to receive $1 billion in revenue by 2024, primarily through selling access to its tech (outside developers can already pay to use some of OpenAI’s tech in their apps). Microsoft has already started to use the system to assist with some aspects of computer programming in its code development app. Stability AI, the Stable Diffusion creator, wants to build specialized versions of the technology that it could sell to individual companies. The startup raised more than $100 million this past October.
Some think ChatGPT could ultimately replace Google’s search engine, which powers one of the biggest digital ad businesses in the world. ChatGPT is also pretty good at some basic aspects of coding, and technologies like it could eventually lower the overall costs of developing software. At the same time, OpenAI already has a pricing program available for DALL-E, and it’s easy to imagine how the system could be turned into a way of generating advertisements, visuals, and other graphics at a relatively low cost.
Is this the end of homework?
AI tools are already being used for one obvious thing: schoolwork, especially essays and online exams. These AI-produced assignments wouldn’t necessarily earn an A, but teachers seem to agree that ChatGPT can create at least B-worthy work. While tools for detecting whether a piece of text is AI generated are emerging, the popular plagiarism detection software, Turnitin, won’t catch this kind of cheating.
The arrival of this tech has driven some to declare the end of high school English, and even homework itself. While those predictions are hyperbolic, it’s certainly possible that homework will need to adapt. Some teachers may reverse course on the use of technology in the classroom and return to in-person, paper-based exams. Other instructors might turn to lockdown browsers, which would prevent people from visiting websites during a computer-based test. The use of AI itself may become part of the assignment, which is an idea some teachers are already exploring.
“The sorts of professionals our students want to be when they graduate already use these tools,” Phillip Dawson, the associate director of the Centre for Research in Assessment and Digital Learning, told Recode in December. “We can’t ban them, nor should we.”
Is AI going to take my job?
It’s hard to predict which jobs will or won’t be eradicated by generative AI. Greg Brockman, one of OpenAI’s co-founders, said in a December tweet that ChatGPT is “not yet ready to be relied on for anything important.” Still, this technology can already do all sorts of things that companies currently need humans to do. Even if this tech doesn’t take over your entire job, it might very well change it.
Take journalism: ChatGPT can already write a pretty compelling blog post. No, the post might not be particularly accurate — which is why there’s concern that ChatGPT could be quickly exploited to produce fake news — but it can certainly get the ball rolling, coming up with basic ideas for an article and even drafting letters to sources. The same bot can also earn a good score on a college-level coding exam, and it’s not bad at writing about legal concepts, either. A photo editor at New York magazine pointed out that while DALL-E doesn’t quite understand how to make illustrations dealing with complex political or conceptual concepts, it can be helpful when given repeated prodding and explicit instructions.
While there are limits on what ChatGPT could be used for, even automating just a few tasks in someone’s workflow, like writing basic code or copy editing, could radically change a person’s workday and reduce the total number of workers needed in a given field. As an example, Conitzer, the computer science professor, pointed to the impact of services like Google Flights on travel agencies.
“Online travel sites, even today, do not offer the full services of a human travel agent, which is why human travel agents are still around, in larger numbers than many people expect,” he told Recode. “That said, clearly their numbers have gone down significantly because the alternative process of just booking flights and a place to stay yourself online — a process that didn’t exist some decades ago — is a fine alternative in many cases.”
Should I be worried?
Generative AI is going mainstream rapidly, and companies aim to sell this technology as soon as possible. At the same time, the regulators who might try to rein in this tech, if they find a compelling reason, are still learning how it works.
The stakes are high. Like other breakthrough technologies — things like the computer and the smartphone, but also earlier inventions, like the air conditioner and the car — generative AI could change much of how our world operates. And like other revolutionary tech, the arrival of this kind of AI will create complicated trade-offs. Air conditioners, for example, have made some of the hottest days of the year more bearable, but they’re also exacerbating the world’s climate change problem. Cars made it possible to travel extremely long distances without the need for a train or horse-drawn carriage, but motor vehicle crashes kill tens of thousands of people in the United States every year.
In the same way, decisions we make about AI now could have ripple effects. Legal cases about who deserves the profit and credit — but also the liability — for work created by AI are being decided now, but could shape who profits from this technology for years to come. Schools and teachers will determine whether to incorporate AI into their curriculums, or discard it as a form of cheating, inevitably influencing how kids will relate to these technologies in their professional lives. The rapid expansion of AI image generators could center Eurocentric art forms at the expense of other artistic traditions, which are already underrepresented by the technology.
If and when this AI goes fully mainstream, it could be incredibly difficult to unravel. In this way, the biggest threat of this technology may be that it stands to change the world before we’ve had a chance to truly understand it.
| 2023-01-05T00:00:00 |
2023/01/05
|
https://www.vox.com/recode/2023/1/5/23539055/generative-ai-chatgpt-stable-diffusion-lensa-dall-e
|
[
{
"date": "2023/01/05",
"position": 44,
"query": "AI labor union"
}
] |
How companies decide to lay off workers - Marketplace.org
|
How companies decide to lay off workers
|
https://www.marketplace.org
|
[] |
Layoffs are often one of the first ways companies cut costs. But they don ... Website bots could help publishers fight off traffic loss from AI crawling ...
|
Amazon is laying off workers again. It started making cuts in November and is continuing with another round, bringing the tally to more than 18,000 layoffs — or roughly 5% of its workforce. This comes a day after Salesforce said it will cut 10% of its staff. Both companies say they’re doing this because they overhired during the pandemic.
Layoffs are often one of the first ways companies cut costs. But they don’t take them lightly because there’s a lot at stake, not least employee goodwill and public image.
Layoffs are all about numbers. The process starts off pretty formulaic, said Paul Wolfe, an independent HR advisor. “What do we need to save? How much do we need to cut?” he said.
And then how many people does that include? That requires that companies make bets on the future and how the economy is going to impact their business. In an uncertain world, Wolfe said it’s better to overshoot, “rather than realizing in two or three months, ‘Oh crap, we underestimated and then I have to let more people go.'”
Because another round of layoffs creates another round of headlines — and a greater feeling of uncertainty for employees who remain.
Who gets cut is a little less formulaic. A company might start with voluntary buyouts, but that rarely gets firms close to their target. So they’ll identify departments that have too much slack or business ventures that underperform or are too experimental. After that, it usually comes down to performance, per Jason Winmill at the consulting firm Argopoint.
“Employees of any corporate organization perform on a bell curve, meaning some are super performers, most are around average performers, but some are underperformers,” he said.
Laying off underperformers makes financial and legal sense if companies can show someone’s not making their sales quota or customer service call volume.
There are some alternatives to layoffs. Companies can make pay cuts or furlough workers. But “there’s been historically mixed results in this area,” said Michael Sturman, who chairs the human resource management department at Rutgers.
Because even though in the short term employees may feel like it’s a better deal — they keep their jobs and help save colleagues — any cutback signals trouble. And no worker wants to stay on a sinking ship.
| 2023-01-05T00:00:00 |
2023/01/05
|
https://www.marketplace.org/story/2023/01/05/how-companies-decide-to-lay-off-workers
|
[
{
"date": "2023/01/05",
"position": 66,
"query": "AI layoffs"
}
] |
The coming expectations for business leaders owning risk ...
|
The coming expectations for business leaders owning risk within today’s organizations
|
https://www.mindbridge.ai
|
[
"Joe Welch"
] |
Putting data analytics, automation, and Artificial Intelligence at the center of your approach enables you to assess and detect risk within the organization.
|
Where AI meets internal controls and risk management
For many, crossing into a new year brings with it resolutions to progress, update, upgrade and enhance all parts of one’s life. It’s not so different in businesses and organizations; there is an annual plan, and teams quickly set about shaping and reshaping programs throughout the year.
For businesses and entities, with December 31st year ends, a new year also comes with the need to ‘close the chapter’ on the previous year and report to stakeholders their audited financial statements. Needless to say, this is a very busy time for accounting, finance, governance, risk, compliance, and controls teams. They will be working hard to dot the i’s and cross the t’s in presenting the outcomes while also getting a head start in proposing and implementing program updates in their various functions.
This year, new nuances are emerging as more jurisdictions update their oversight to increase the level of liability and personal responsibility for failures. Recently, the PCAOB/SEC conference provided an update on new SEC clawback rules, which require recovery of executive compensation tied to misstated financial statements. This came just after the PCAOB put a proposal out for comment on audit firm quality control requirements that will push further into rooting out critical audit matters. Combined, it appears that there will be a new focus on how businesses manage their risk, the responsibility for disclosing risks, and increased pressure on executives, directors, and officers of the businesses to be free of misstatements in reported figures.
Not only are there new requirements and regulations coming into effect; the volume of financial and operational data generated has also been more than doubling every 18 months. This explosive growth leaves business leaders, control owners, and external auditors struggling with existing programs, tools, and mindsets. No longer can we rely solely on interviews, surveys, summary-level analytics, and sampling; the coming expectations will be too great.
The risk environment is constantly changing.
With factors such as staffing shortages (since the pandemic, undergraduate enrollment in public colleges and universities has declined by 9.4%, roughly 1.4 million students), new regulations, data volume issues, and budget pressures, organizations must be aware of how these pressures affect their risk profile.
Putting data analytics, automation, and Artificial Intelligence at the center of your approach enables you to assess and detect risk within the organization. Anomalies have proven to be a strong indicator of process inconsistency, human error, or lack of oversight within your organization. Creating this added transparency helps the organization pursue improvement in enterprise risk management and compliance programs.
AI, specifically, makes it possible to aggregate extreme amounts of data that would otherwise be highly cumbersome to turn into decision-useful information. Therefore, instead of going through a theoretical exercise, you’re able to work with the actual concepts and actual risks that are permeating your data.
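As a rough illustration of the anomaly-flagging idea, the sketch below uses plain statistics as a stand-in for the AI the passage describes: it scores each value by how far it deviates from the mean and flags entries that break the pattern. The ledger amounts are made up, and the threshold is a loose choice (small samples cap how extreme a z-score can get), so treat this as a conceptual sketch, not a production control.

```python
def zscore_anomalies(values, threshold=2.0):
    """Flag values that deviate strongly from the mean of the batch --
    a minimal stand-in for anomaly detection over transaction data."""
    n = len(values)
    mean = sum(values) / n
    variance = sum((v - mean) ** 2 for v in values) / n
    std = variance ** 0.5
    if std == 0:
        return []  # all values identical: nothing stands out
    return [v for v in values if abs(v - mean) / std > threshold]

# Hypothetical ledger amounts; one entry is wildly out of pattern.
amounts = [120.0, 95.5, 130.2, 110.0, 99.9, 125.4, 50000.0, 105.3]
print(zscore_anomalies(amounts))  # → [50000.0]
```

Real deployments would of course look at many features at once (counterparty, timing, approver, account), which is where machine learning earns its keep over a single-column rule like this one.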
How do you get started?
Leading global organizations are employing a combination of control technologies with complementary features and functionality to deliver on their financial reporting objectives and maximize the value of their investment. Adding MindBridge enables a ‘best of breed’ approach that will make the highest impact in the shortest amount of time. Putting AI to work will ultimately alleviate more manual analytics efforts and provide higher value by surfacing anomalies and risks, enabling you to prioritize higher-risk items.
MindBridge has been partnering with CPA firms, advisors, and other software companies to create templates and frameworks that accelerate your upgrading of programs and add value almost immediately.
Stay tuned for more on this topic with interactive webinars, panel discussions, white-papers, and more in the coming weeks and months.
Have questions or want to learn more? Don’t hesitate to reach out directly at [email protected], or click on the ‘Book a demo’ link, and the MindBridge team will be in touch with you shortly.
| 2023-01-05T00:00:00 |
2023/01/05
|
https://www.mindbridge.ai/blog/where-ai-meets-internal-controls-and-risk-management/
|
[
{
"date": "2023/01/05",
"position": 10,
"query": "artificial intelligence business leaders"
}
] |
Data and Analytics Summit - Executives' Club of Chicago
|
Data and Analytics Summit
|
https://www.executivesclub.org
|
[] |
Combined with top talent, AI has the power to optimize your organization. With the integration of human and artificial intelligence, companies can empower their ...
|
Join Us For The Inaugural Data and Analytics Summit
While artificial intelligence (AI) is still an abstract concept to some, the technology is rapidly growing and affecting how workplaces function across industries. With its rapid expansion, AI is no longer just for those working in cutting-edge technology. From ChatGPT to self-driving cars to smart personal assistants, artificial intelligence has been integrated into daily life. The exciting advancements happening in the AI world offer organizations the opportunity to gain competitive advantages, engage and empower employees, boost security, and deliver stronger results to stakeholders. To reshape their workforce and stay competitive, organizations need to consider their relationship with AI and determine steps to adopt and implement it in their workplace.
How can an organization’s tech leaders effectively convey the urgency of adoption to colleagues? How can AI be used to optimize your workforce and improve the employee experience? What do the early stages of adoption look like for organizations of all sizes? What factors need to be considered for successful implementation of AI in the workplace? What are examples of success stories? Join us on February 1, 2023 for answers to all these questions and more at our Data and Analytics Summit.
Schedule of Events (details below):
1:30 pm: Registration
2:00 pm: Opening Session
3:10 pm: Breakout Panels
4:15 pm: Closing Session
5:10 pm: Happy Hour & Stories From The Ignition Center
6:15 pm: Conclusion of Event
Pricing:
Members: Complimentary
Member Guests: $75
Non-Members: $150
| 2023-01-05T00:00:00 |
https://www.executivesclub.org/events/data-analytics-summit/
|
[
{
"date": "2023/01/05",
"position": 41,
"query": "artificial intelligence business leaders"
}
] |
|
Take Charge of Your Career by Developing Your Business ...
|
Take Charge of Your Career by Developing Your Business Leadership Skills
|
https://innovationatwork.ieee.org
|
[
"Allison Moy"
] |
Learn how AI can be used to address business pain points, optimize processes, better serve customer needs, and improve an organization's bottom line. Get the ...
|
A successful career in engineering isn’t only about having strong technical expertise. It also hinges on your ability to communicate clearly, engage and motivate others, demonstrate business acumen, and lead teams effectively. Deficits in any of these skillsets can significantly impair an engineer’s career trajectory.
Strong leadership skills are key to any manager’s or company’s success. Conversely, weakness in this area can undermine that pursuit. For example, a study found that nearly four out of five employees who recently quit their job attributed their decision to a lack of leadership or recognition in their company. Similarly, a Gallup survey of more than one million employees nationwide revealed that 75% of respondents who had quit their jobs did so because of their manager, not the position. The results confirm the old saying that “people leave managers, not companies.”
This reality is especially hard-felt in the engineering community. Many electrical and electronics engineers confirm that all or most of their academic training focused on mastery of STEM-related technical skills, with little to no time spent on developing their leadership, communication, business, or people skills.
More Than Technical Knowledge Needed to Succeed
The fallout of this skills gap has been felt across many tech-related fields. Based on discussions with dozens of executives in tech companies, a recent report identified the top five reasons why advanced-degree scientists and engineers fail in leadership roles – and they don’t relate to their technical knowledge at all. Rather, their failures were attributed to poor communication skills, lack of people skills, lack of strategic thinking, inability to develop talent, and poor time management.
As engineers progress in their careers, their responsibilities often expand beyond just technical expertise. Successive positions up the ladder will require skillsets such as managing projects, engaging and motivating employees, collaborating with other teams, planning and budgeting, demonstrating vision, and employing a range of other business and leadership skills.
This is confirmed by a Harvard Business School study, which identified “leadership” as one of the top business skills that tech and engineering employers seek in their candidates, along with strengths in communication, management, problem-solving, business operations, research, and critical thinking.
Experts agree that without these foundational skills, technical professionals will only go so far. In a recent study, for example, 73% of companies surveyed felt that business, leadership, and cognitive skills were lacking among prospective candidates. This gap will limit the growth and success of organizations and candidates alike.
The good news in all of this?
A recent study cited in Forbes revealed that only 20-30% of leadership skills are actually innate and that some 70% of leadership qualities can be acquired through experience and education. In other words, tech professionals can learn to be strong and effective leaders.
Let the IEEE Professional Development Suite Help You and Your Team Hone Your Business and Leadership Skills
Invest in your professional development and further your goal of moving up the corporate ladder by exploring the IEEE Professional Development Suite. This collection of training programs is specially designed to suit the needs of professionals at any stage of their career.
Resources:
Powitzky, Elizabeth. (25 May 2018). Great Leaders Are Made, Not Born: Six Strategies for Becoming a Better Leader. Forbes.
Kizer, Kristin. (29 June 2023). 35+ Powerful Leadership Statistics [2023]: Things All Aspiring Leaders Should Know. Zippia.
Lewis, Greg. (11 August 2022). Industries with the Highest (and Lowest) Turnover Rates. LinkedIn.
Boyles, Michael. (10 January 2023). Leadership in Engineering: What It Is & Why It’s Important. Harvard Business School.
Hyacinth, Brigette. (27 December 2017). Employees Don’t Leave Companies, They Leave Managers. LinkedIn.
Upwork.
Adams, Angelique. Top 5 Reasons Advanced-Degree Scientists and Engineers Fail in Leadership Roles. LinkedIn.
Landry, Lauren. (5 January 2023). 6 Business Skills Every Engineer Needs. Harvard Business Review.
Barnes, Cory. Soft Skills for Engineers: The importance of communication, teamwork, and other non-technical skills in a highly technical field. LinkedIn.
| 2025-02-27T00:00:00 |
2025/02/27
|
https://innovationatwork.ieee.org/take-charge-of-your-career-by-developing-your-business-leadership-skills/
|
[
{
"date": "2023/01/05",
"position": 49,
"query": "artificial intelligence business leaders"
}
] |
Doing more with less in the age of digital transformation
|
Doing more with less in the age of digital transformation
|
https://www.genpact.com
|
[] |
AI, data, and analytics have the potential to help businesses do more, even when they have less. Here, we'll explore how AI supports a data-driven enterprise.
|
Though enterprise leaders have made strides in advancing their digital agendas, economic headwinds and a rapidly changing business landscape are forcing them to do more with less – fewer people, lower budgets, and minimal costs.
In years past, such a disconnect between available resources and business mandates would have created an impasse. Today, rather than slowing digital transformation, teams can use technology to thrive amid the constraints.
Consider the importance and wide adoption of artificial intelligence technology across industries, which is quickly transforming how we work. Whether it's generative AI like DALL·E 2, using machine learning (ML) algorithms to create images from text, or ChatGPT, an AI chatbot that uses deep learning techniques to produce human-like text, the potential for business use is endless.
AI, data, and analytics have the potential to help businesses do more, even when they have less. Here, we'll explore how AI supports a data-driven enterprise and the three ingredients for success.
| 2023-01-05T00:00:00 |
https://www.genpact.com/insight/doing-more-with-less-in-the-age-of-digital-transformation
|
[
{
"date": "2023/01/05",
"position": 68,
"query": "artificial intelligence business leaders"
}
] |
|
Raises for some IT pros could jump 8% in 2023, exceeding ...
|
Raises for some IT pros could jump 8% in 2023, exceeding inflation
|
https://www.computerworld.com
|
[] |
In 2022, however, merit increases for IT pros lept to 5.61%, with the median salary for all IT professionals rising from $95,845 to $101,323. The median salary ...
|
After failing miserably to keep up with inflation over the past two years, it appears salaries for IT pros are beginning to catch up, according to a new study from Janco Associates.
In 2021, the mean compensation for all IT pros rose just 2.05%, according to a mid-year salary survey from the business consultancy. In 2021, the median salary for IT pros at large enterprises was $100,022, and $95,681 for those at mid-sized firms.
In 2022, however, merit increases for IT pros leapt to 5.61%, with the median salary for all IT professionals rising from $95,845 to $101,323. The median salary for an IT executive rose to $180,000.
In the year ahead, salaries could rise by another 8%, according to Janco Associates CEO Victor Janulaitis. “We project that salaries for IT pros in SMBs will exceed inflation. In large companies, we think it may lag since the salaries are greater,” he said.
Janco Associates
Recent salary increases were mostly due to a shortage of qualified IT pros at a time when organizations were embarking on digitization projects, the Great Resignation, and inflation pressures, according to industry analysts.
The Janco data focused largely on companies in the US and Canada.
In 2022, the overall US inflation rate was 7.68%, according to the US Bureau of Labor Statistics (BLS). During last year, the US inflation rate climbed to a high of 9.1% in June, dropping back to 7.1% by December. (That means prices for goods and services were 7.1% higher in December 2022 than a year earlier, according to the Consumer Price Index.)
Even as inflation soared in 2022, the pool of IT talent shrank as employees quit to re-evaluate their careers and personal lives. Employers were also rolling out more technology projects in response to the global pandemic’s effect on remote work, sales and services.
Over the past year, more companies have been investing in IT with an emphasis in e-commerce and mobile computing.
“At the same time, with the ever-increasing cyberattacks and data breaches, CIOs are looking to harden their sites and lock down data access so that they can protect all of their electronic assets,” the Janco report said. “Added to that is an ever-increasing array of mandated requirements from the EU with GDPR and US federal and state requirements to protect individual users’ privacy. All of these factors increase demand for experienced IT pros and the salaries they are paid.”
In addition, retirements among IT workers increased as more Baby Boomers opted out of returning to work, Janco noted.
Tony Guadagni, senior principal analyst in the HR practice at Gartner Research, said the 5.61% salary increase cited by Janco Associates over the past year “passes the sniff test.”
“It’s pretty well aligned with what we’ve seen,” Guadagni said, adding that salaries in all professions — not just IT — are expected to increase on average about 3.5% as of the start of 2023. That would represent a significant jump compared to the 2% to 2.5% merit increase Gartner typically sees throughout industries.
Gartner’s data is based on a November survey of 150 organizations worldwide; the merit increase figures are based on “core contributor” employees — or the 80% of employees who are neither underperforming nor overperforming, Guadagni said.
He took exception, however, to the 8% increase Janco Associates projects for IT professionals in 2023.
“That’s one that I think is a little tougher to rationalize at the moment,” Guadagni said. “Most organizations we’ve talked to have already planned for what they expect for merit increases…in the first quarter of 2023. I’ve not talked to a lot of organizations who say with any certainty how they expect compensation to grow over the next calendar year.”
Andrei Barmashov / Getty Images
The balance of power, which had been in favor of IT workers, has shifted somewhat in favor of employers, according to Guadagni, due in part to a “tumultuous business environment” — especially over the past eight months or so.
“I think organizations have a lot more leverage in compensation discussions than they did even four months ago. I think a lot of that is based on some of the high-profile layoffs…we’ve seen from major tech platforms,” Guadagni said. “Even today, we saw Salesforce announce they are parting ways with 10% of their workforce.”
Other high profile layoffs over the past few months included Meta (Facebook’s parent company) and Amazon.
Even in light of well-publicized layoffs, however, there remains a dearth of tech talent.
Janco’s 2022 Salary Survey findings indicate organizations added 190,000 IT positions during the past four quarters, but there remains a wide gap between positions available and workers available to fill them.
Salary compression is also occurring as “new hires” are offered salaries at the top end of the pay ranges for existing positions — often getting more than current employees in the same positions, according to Janco.
“Staffing and retention are now a primary priority of C-Level management,” Janco argued.
The report also noted:
Attrition rates in mid-sized enterprises are rising faster than in large enterprises;
Salary levels in mid-sized enterprises are rising faster than in large enterprises;
Consultants who augment IT staff and skills now are in high demand.
While projections for future tech hiring fell back in November (the latest month for which figures are available), those future postings still totaled nearly 270,000, according to CompTIA, a nonprofit association for the IT industry and workforce. Openings for software developers and engineers accounted for about 28% of all tech jobs postings last year. Demand for IT support specialists, systems engineers, IT project managers, and network engineers also remained solid, CompTIA’s data showed.
| 2023-01-05T00:00:00 |
https://www.computerworld.com/article/1616596/raises-for-some-it-pros-could-jump-8-in-2023-exceeding-inflation.html
|
[
{
"date": "2023/01/05",
"position": 37,
"query": "artificial intelligence wages"
}
] |
|
What does the future of manufacturing look like in South Africa?
|
What does the future of manufacturing look like in South Africa?
|
https://www.stdmt.com
|
[
"Standard Machine Tools"
] |
Increased automation and digitization - Manufacturing companies are ... This could lead to some job displacement, but it could also create new jobs ...
|
It's difficult to predict the future of manufacturing in South Africa with certainty, as it will depend on a variety of factors such as economic conditions, government policies, and technological developments. However, there are some trends that are likely to shape the future of manufacturing in South Africa:
Increased automation and digitization - Manufacturing companies are likely to adopt more automation and digitization to improve efficiency and reduce labor costs. This could lead to some job displacement, but it could also create new jobs in fields such as programming and data analysis.
Growing demand for sustainable products - As consumers become more environmentally conscious, there may be increased demand for sustainably-produced goods. This could create opportunities for South African manufacturers to produce eco-friendly products and differentiate themselves in the market.
Increasing competition from other countries - South Africa's manufacturing sector may face increased competition from other countries that have lower labor costs and/or more advanced technology. To stay competitive, South African manufacturers may need to focus on producing high-value, specialized products and investing in technology and skills development.
The role of the government - The government will also play a role in shaping the future of manufacturing in South Africa. For example, policies that support the development of new technologies and improve the business environment could help to create a more favorable climate for manufacturing.
What can we do to grow manufacturing in South Africa?
There are several steps that could be taken to support the growth of manufacturing in South Africa:
| 2023-01-06T00:00:00 |
https://www.stdmt.com/blogs/news/what-is-the-future-of-manufacturing-look-like-in-south-africa?srsltid=AfmBOoqdEY8UjGEtoFhTushENtvieaVskGnhKzisUtUaENMJBpzUrjPY
|
[
{
"date": "2023/01/06",
"position": 78,
"query": "automation job displacement"
}
] |
|
How Automation is Transforming Healthcare and Saving ...
|
How Automation is Transforming Healthcare and Saving Lives
|
https://blog.timetap.com
|
[
"Nahla Davies",
"Starr Campbell",
"Charlie Bedell"
] |
For instance, AI may replace certain tasks and thus lead to making certain jobs superfluous. Since decision-makers in healthcare tend to be practitioners ...
|
Automation anxiety describes the fear of losing your job to machines. Major discoveries and advancements in artificial intelligence have become more frequent recently, and AI-powered automation can rapidly improve productivity and accuracy on routine tasks. Yet the concept of automation can arouse a sense of uncertainty and fear. Concerns generally center on accuracy, privacy, and job loss.
The pandemic illustrated the massive importance of modernizing healthcare through automation. Sophisticated scheduling software streamlining vaccination programs and AI assisting in contact tracing are only a few examples of the positive impact of automation on healthcare. The following guide will examine other ways that automation has revolutionized medicine, saved lives and why it should be embraced.
Levels of Automation in Medicine: An Overview of Current and Future Automation in Healthcare
Before we can understand automation’s impact on healthcare, we must understand all facets of its current and future implementations. According to Dr. Bertalan Mesko (MD, Ph.D.), there are at least five levels of automation in medicine: human only, shadowing, AI assistance, partial automation and full automation.
Level 1: Human Only
There is virtually no artificial intelligence involved in the first level of automation. It is essentially traditional medicine where most procedures are performed manually by a licensed practitioner. The practitioner may consult AI or algorithm-driven tools such as medical search engines. However, diagnostics and treatments are mainly carried out by the physician. This is the automation paradigm of the past, where medical professionals are at the forefront of healthcare with very little assistance from automated systems.
Level 2: Shadowing
Medical students are often assigned to an experienced physician in medical school. This practice of “physician shadowing” gives students the opportunity to observe the physician-patient interaction while gaining practical experience in the medical field.
Artificial intelligence is also capable of performing a variety of shadowing where it observes a medical professional conducting a diagnosis or a procedure. The AI will shadow the physician without influencing the procedures.
The AI’s primary objective is to accumulate and log as much data as possible. This data can then be used in the future to determine if and why certain treatments are more effective than others. Furthermore, it allows physicians to analyze the data and identify any mistakes in how they conduct procedures. This will help minimize malpractice.
Level 3: AI Assistance
At this level, the AI provides guidance and supports the medical practitioner in diagnostics and clinical decision-making. The AI may make suggestions by using past data and evidence to assist the physician during the diagnostic process.
IBM’s Watson Health is a good example of this. It aids oncologists by providing faster and more accurate cancer detection and treatment. Watson Health does this mainly by amassing a large collection of data on the latest research. Additionally, it scans through the patient’s history to identify the most viable treatment solutions.
There are challenges in ensuring that the recommendations are accurate and easy for practitioners to use and interpret. Yet the data shows large improvements in accuracy when AI is used.
Level 4: Partial Automation
During partial automation, the AI will produce its own diagnosis. However, it may flag cases and consult physicians if it encounters any uncertainties. This can free physicians for other tasks while still ensuring that complex cases are addressed by them.
In 2019, a team of medical investigators from the Massachusetts General Hospital’s Department of Radiology developed and tested an AI system that could rapidly detect and categorize brain hemorrhages.
The system produced many promising results, such as 93.0% diagnostic accuracy for the main types of intracranial hemorrhage (ICH). However, the results were less desirable for specific subtypes of ICH. Nevertheless, a system like this (after some refinement) can be used by radiologists to detect ICH quickly and potentially save lives.
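The escalation logic behind partial automation can be sketched as a simple confidence threshold: the model reports its own diagnosis above the threshold and flags the case for physician review below it. This is an illustrative sketch only; the threshold value, function names, and case data are invented, not taken from the MGH system.

```python
# Sketch of a Level 4 (partial automation) triage loop: the model handles
# high-confidence cases and escalates uncertain ones to a human physician.
# The 0.90 cutoff and the case data below are made up for illustration.

CONFIDENCE_THRESHOLD = 0.90  # assumed cutoff for autonomous reporting

def route_case(case_id, label, confidence):
    """Return (handler, label) for a single model prediction."""
    if confidence >= CONFIDENCE_THRESHOLD:
        return ("auto", label)           # AI reports its own diagnosis
    return ("physician_review", label)   # uncertain case flagged for a human

predictions = [
    ("scan-001", "no ICH", 0.97),
    ("scan-002", "subdural ICH", 0.62),  # low confidence, gets escalated
]

for case_id, label, conf in predictions:
    handler, routed_label = route_case(case_id, label, conf)
    print(case_id, handler)
```

The point of the pattern is that the threshold, not the model, decides which cases a radiologist must still see.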
Level 5: Full Automation
This level of automation is what most AI and machine learning scientists in healthcare envision for the future. This is where healthcare processes are performed by an AI without any human input. For instance, a fully automated diagnostic system could analyze a CT scan and initiate subsequent testing. It would perform the entire procedure without consulting a human physician.
While attaining this level of automation is ideal, many experts believe it is unlikely to be achieved in the near future.
The Holistic Potential of AI-Assisted Medicine
The proposed five levels of automation provide a succinct overview of automation in the medical industry. From this perspective, focus is mostly centered on making the lives of professionals easier. This will no doubt improve patient care too. However, it is still imperative that we address all the tangible ways automation can assist in patient care and ensure consistent treatment.
AI and Automation in Mental Health Care
The mental health implications of the Covid-19 pandemic and its subsequent fallout have been well documented. According to a study conducted and published by the Kaiser Family Foundation, 4 out of 10 American adults reported symptoms of anxiety and/or depressive disorder.
Shutdowns, curfews and other restrictions made maintaining treatment for mental health a challenge. There has always been a seemingly underreported mental healthcare crisis looming. How can AI and automation help?
One of the biggest challenges that has always been present in mental health diagnosis is establishing objective metrics to identify disorders and illnesses. Historically, practitioners have relied on subjective approaches to discover what ails the patient and determine which treatments are the most suitable. This is more of a troubleshooting approach to treatment and could be improved with the assistance of AI.
Mental health professionals can now use breathing, speech patterns, typing patterns on their devices, physical activity, and social interactions to diagnose their patients and recommend treatments. All this data can be collected from IoT devices and AI-driven diagnostic tools. See a few ways that IoT is changing healthcare.
However, AI technologies' contributions to mental healthcare aren’t limited to diagnoses. They can also assist in treatment. Edith Cowan University found that 30% of people are more comfortable sharing negative experiences with virtual reality avatars than they are with actual people.
Therapy has long been inaccessible for many patients. VR, in conjunction with AI, can help people overcome this barrier while providing more accurate diagnoses.
AI and Automation in The Treatment of Diabetes
There are many challenges related to diagnosing and treating diabetes globally. According to the CDC, 1 in 10 Americans (over 37 million) have diabetes. However, 1 in 5 of them may not know that they have it, and may never get diagnosed or treated.
Diabetes is a massive public health problem and one of the leading causes of blindness (through diabetic retinopathy) in the US for people aged under 75. IDx-DR is a Level 4 autonomous system that can analyze retinal images and make diagnostic decisions based on them. Paired with the latest optometry software, this can provide early preventative treatment against diabetes-related blindness.
However, this is only one way that AI-powered technology is revolutionizing diabetes treatment. Continuous glucose monitoring (CGM) systems are another way. These are medical devices that sit semi-permanently under a patient’s skin. The patient can then use an external wireless device such as their smartphone or smartwatch to check their glucose levels before administering insulin.
Telemetry and health data gathered from these devices have the potential to be used by insurance companies to form life insurance policy quotes. This could allow them to streamline costs. Additionally, the CGM can send reminders or alerts to the user regarding insulin (as well as other medications such as metformin) intake.
Diabetics are more at risk of passing out from hyperglycemia and hypoglycemia. The CGM system can automatically signal emergency healthcare services if it detects an abnormal loss of consciousness.
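The alerting behavior described above can be sketched as simple threshold checks on each reading. The 70 and 180 mg/dL cutoffs are common clinical reference points used here purely for illustration; the function and data are invented and this is not how any particular CGM product works.

```python
# Minimal sketch of CGM-style alert logic: classify each glucose reading
# and surface the ones that should notify the patient or an emergency
# contact. Thresholds are typical reference points, not medical advice.

HYPO_MG_DL = 70    # below this: hypoglycemia alert
HYPER_MG_DL = 180  # above this: hyperglycemia alert

def classify_reading(mg_dl):
    """Label one glucose reading (mg/dL) as in range or an alert."""
    if mg_dl < HYPO_MG_DL:
        return "alert_hypoglycemia"
    if mg_dl > HYPER_MG_DL:
        return "alert_hyperglycemia"
    return "in_range"

# A run of readings from the wearable sensor, e.g. one every 5 minutes.
readings = [110, 95, 68, 182, 140]
alerts = [(r, classify_reading(r)) for r in readings
          if classify_reading(r) != "in_range"]
print(alerts)  # the 68 and 182 readings trigger alerts
```

A real device would add smoothing and rate-of-change checks, but the core decision is this kind of threshold comparison.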
The most popular and pragmatic applications of machine learning and AI have always been in data collection and sorting. The more people embrace IoT and health-related smart devices, such as fitness trackers, the greater the potential for early detection. This allows healthcare professionals to collect more accurate information related to patient behavior, improving the early detection of diabetes and other diseases.
However, human operators cannot effectively sort through this information manually. That’s why we employ machine learning to capture and find relevant data to assist medical professionals in making informed decisions.
AI and Automation in Pharmaceuticals
Many pharma companies had already begun employing Artificial Intelligence and machine learning to improve their drug and vaccine production protocols even before the COVID-19 pandemic. However, AI’s potential was magnified during the pandemic.
Vaccine discovery that typically takes over a decade was accelerated thanks to AI-powered technology. A fine example of this is AstraZeneca.
AstraZeneca is one of the largest pharmaceutical companies in the world. Incidentally, it has begun implementing AI and data science-driven tools across its research and development fronts.
The company is using machine learning-powered image analysis and knowledge graphs to collect important insights about diseases. AI has been shown to identify markers 30% faster than human pathologists.
Additionally, AstraZeneca has recently released a few AI-driven tools for chemistry. This is part of their initiative to shorten the drug discovery process and substantially reduce drug production costs.
Pfizer is another great example of AI powering pharmaceutical innovation during the COVID-19 pandemic. The company employed the help of artificial intelligence throughout its vaccine development process, most notably during its vaccine trials.
Pfizer used its AI tech and data retrieval tools to parse through millions of data points to find signals in its 44,000-person COVID study. Even before the pandemic, Pfizer began digitizing its research and development operations.
It began a partnership with IBM’s Watson for drug discovery in 2016. Pfizer has also allied itself with Iktos, a virtual drug design firm, to leverage some of its Artificial Intelligence capabilities for improving drug discovery and production.
These examples are only the beginning. However, they provide enough evidence to indicate that the future of medicine and drug production will be heavily influenced by AI. This offers optimism for more effective and cheaper treatments and medication.
Other Ways Automation is Benefiting Healthcare
The risks and challenges of implementing AI and other automations in healthcare are far outweighed by the benefits. Not only can it reduce the cost of medications and treatments – it can also result in faster discovery of new medicines. Studies show that it leads to greater accuracy in imaging and helps prevent unnecessary biopsies.
5 Ways AI is Benefiting Healthcare:
Early diagnosis – Algorithms and machine learning speed up the processing of information and can help make accurate diagnoses.
Cost reductions – Automated processes can access large volumes of data and lab results and make predictive results, which means fewer appointments for patients.
Surgery assistance – AI surgical systems can now perform the smallest movements with great precision, reducing the risk of human error.
Enhanced patient care – AI can automate scans of patient data and reports, and provide direction for patients and doctors.
Improved information sharing – AI can track specific patient data, such as readings from health monitors.
Automations in healthcare are not without challenges, but each day AI implementation is helping to provide better and more affordable healthcare to larger and more diverse populations.
Areas That Would Benefit Most From AI in HealthCare
Even with all the benefits of AI automation in healthcare, there are still many additional areas that could benefit from AI. Technology costs for automation tools are becoming increasingly affordable, making the adoption of more AI tools increasingly feasible.
A 2020 study conducted by Statista’s official research department surveyed over 1,000 pharmaceutical and healthcare workers. The purpose of the study was to find which areas in healthcare would benefit from AI the most. 60% of respondents believed that AI had the potential to benefit quality control the most. Other areas included:
Customer care (44%)
Monitoring and diagnostics (42%)
Inventory management (31%)
Personalization of products and services (25%)
Cybersecurity (24%)
Once again, it’s important to understand the healthcare worker’s perspective. Artificial intelligence has the potential to make jobs easier and more accurate. Understanding where healthcare workers believe AI is most beneficial will be crucial to improving adoption.
Nevertheless, healthcare workers and employees are only a part of the equation. Directors, executives and leaders in the industry may be able to provide a more nuanced view. A 2020 IDC report surveyed executives from 210 hospitals in the US (105), Germany (54) and the UK (51). They found that the use cases they were most concerned with were:
Inferencing to improve data quality (35%)
Reading images to assist in diagnosis (30%)
Early identification of hospital-acquired infection (30%)
Patient risk stratification (27%)
Improving back-office productivity (25%)
Predicting adverse events (25%)
Forecasting hospital patient admission (23%)
Supply chain (23%)
Early identification of sepsis (22%)
Chatbots for patient education and coaching (21%)
The Risks and Challenges of AI in Healthcare
Adoption of AI is not without concerns and challenges. A recent study conducted by researchers at the Royal Free Hospital found that 80% of respondents (medical staff at the NHS foundation trust) had privacy concerns regarding the implementation of AI in healthcare.
This highlights one of the challenges of integrating technology and healthcare – the attitudes towards it. Some medical staff may be reluctant to use AI. Furthermore, it may require them to retool and abandon past approaches. However, some may argue that their fears are somewhat justified. AI often requires more robust network infrastructures, which may force hospitals to restructure their network security.
Data leaks or breaches may result in hefty fines for medical institutions. For instance, New York and Presbyterian Hospital (NYP) was forced to pay a $3.3 million settlement fee for HIPAA violations related to a data breach. To prevent situations like these, hospitals and other healthcare institutions will have to nurture a culture of cyber security savviness.
Even so, technology has never been perfect. It can be prone to bugs that may result in false positives/negatives and other issues, which in medicine can mean injury or death. When the failure of an AI leads to dire consequences, who takes the blame? Such failures create unclear lines of accountability, and the blame will most likely fall on the entire medical institution. Hence, many healthcare workers fear overreliance on artificial intelligence.
However, these are just some of the reasons why AI adoption has been slow. Others include:
Algorithmic and functional limitations: While the technology driving AI has advanced over the last ten years, it still has limitations. Experts worry most about the implications of issues such as bias in neural networks. Additionally, it is often difficult for healthcare workers to trust or understand how an AI produces its results; there is very little transparency into the inner workings of AI.
Regulatory limitations and barriers: While regulations that focus on maintaining privacy are extremely important, they can slow development in AI. Again, lawmakers and those in charge of implementing AI for healthcare are most concerned about liability. Thus, it can take a long time for certain technologies to be approved, especially in an industry as sensitive as healthcare.
Ethics and incentives: Decision-makers may shy away from AI adoption because of its implications for the medical profession. For instance, AI may replace certain tasks and thus make certain jobs superfluous. Since decision-makers in healthcare tend to be practitioners themselves, they may not be enthusiastic about replacing their peers and subordinates with AI.
Implications for policy: In the long run, lawmakers will have to rewrite certain policies and laws to accommodate AI's eventual mass adoption in healthcare. Again, it is important to address liability. Of course, this will impact insurance companies and not just medical institutions.
Preparing for AI in Healthcare
Physicians and all medical staff members must first seek thorough training in the use of AI to curb the risks associated with AI in medicine. Furthermore, they must apply strict adherence to the rules and standards established by medical device and software companies.
Not only will this teach them the best practices related to using AI, but it will also give them the ability to articulate the potential risk to patients to obtain full informed consent.
The American Medical Association (AMA) has also suggested that AI training should be incorporated as a standard component of medical education.
Additionally, hospitals and other medical practices are fundamental to ensuring the proper development, implementation and monitoring of the best practices and standards in the use of AI systems in healthcare. Healthcare workers can only benefit from AI-related tools if they understand how to use them. Technology that is too complicated or misunderstood will be under-utilized.
Directors and healthcare executives must take certain steps to ensure that the transition to and implementation of AI are smooth.
Steps for efficiently implementing AI
1. Set a goal or machine learning statement for what you would like to achieve by implementing AI.
2. Define data collection policies that consider ethical and demographic considerations.
3. Research and focus on human-centric AI tools: this will make the technology less esoteric. If healthcare practitioners can understand how the technology fits into their daily routines, they’ll be more open to learning and working with those tools. Human-centric AI should feature:
Human override: the ability to override AI processes or tasks, so workers still feel that they are in control.
Human integration, not replacement: workers should not feel that they are being replaced by AI. Thus, medical institutions should first assess tools that assist human workers. You should first aim for partial automation.
4. Train your workforce that will use the data. This should include:
Setting clear guidelines on the level of transparency required for training.
Understanding how data is gathered and recorded to make tools work.
Teaching the semantics and terms of AI technology.
Understanding where the AI originated.
The composition of AI tools - how they function, from the basics to some of the most complex processes.
The relationship between end-users, regulators and vendors.
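The "human override" feature listed in the steps above can be made concrete as a wrapper in which the AI only proposes an action and a clinician can veto or replace it before anything executes. A minimal sketch with invented names, not a real clinical API:

```python
# Sketch of a human-override wrapper: the AI proposes an action, but a
# clinician decision, when present, always wins. Names are illustrative.

def run_with_override(ai_proposal, clinician_decision=None):
    """Return the action that is actually carried out.

    ai_proposal        -- what the automated system wants to do
    clinician_decision -- None to accept the proposal, or a replacement
    """
    if clinician_decision is not None:
        return {"action": clinician_decision, "source": "human_override"}
    return {"action": ai_proposal, "source": "ai"}

# The AI's suggestion stands unless the clinician intervenes.
accepted = run_with_override("order follow-up MRI")
overridden = run_with_override("order follow-up MRI",
                               clinician_decision="refer to specialist")
print(accepted["source"], overridden["source"])
```

Keeping the override path in the control flow, rather than as an afterthought, is what lets workers feel they remain in control.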
Conclusion
AI is unlikely to completely replace medical professionals anytime soon. After all, the algorithms and technology that drive AI are just tools. And tools are utilized best when they are placed in the talented hands of professionals. Thus, physicians and medical staff who take advantage of the capabilities of AI will most likely replace those who do not – at least in the next 50 years.
We can expect AI in healthcare to make subtle but noticeable differences in the next few years. For instance, we’ll see more assistance with case triage, enhanced image scanning and segmentation, AI-supported diagnosis and improved workflows for medical professionals. Thanks to AI, the future of healthcare is encouraging.
Looking to automate your scheduling related tasks? See these six TimeTap features for healthcare facilities.
| 2023-01-06T00:00:00 |
https://blog.timetap.com/blog/posts/how-automation-transforms-healthcare
|
[
{
"date": "2023/01/06",
"position": 47,
"query": "AI replacing workers"
}
] |
|
What the latest employment surge means for technology jobs
|
What the latest employment surge means for technology jobs
|
https://www.zdnet.com
|
[
"Joe Mckendrick",
"Contributing Writer"
] |
Total US employment increased by 223,000 in the month of December 2022, edging the unemployment rate down to 3.5 percent, the US Bureau of Labor Statistics ...
|
Some tech skills remain extraordinarily high-paying.
Total US employment increased by 223,000 in the month of December 2022, edging the unemployment rate down to 3.5 percent, the US Bureau of Labor Statistics reported (PDF) Friday. At a time when the news is filled with stories of layoffs at technology companies such as Amazon and Salesforce, the bigger picture still points to massive technology skills shortages for mainstream organizations in the months ahead.
Industries seeing significant jumps in employment include leisure and hospitality (likely as a continuing response to pent-up demand from the early COVID era), health care (also echoes of COVID), and construction.
These numbers, of course, reflect broad and general employment, from knowledge workers to bricklayers. There are implications for technology professionals, suggesting there will be no letup in skills shortages in specialized areas.
Even if companies opt to either cut back or compensate for their skills shortages with artificial intelligence, robots, and digital business flows, this means even more work for the people needed to design, build, program, and maintain such systems.
"We are constantly looking out for good-quality technology resources and we are recruiting to support our growth plans," relates Raju Seetharaman, senior vice president of IT and transformation at Legal and General America. "The technology layoffs we are seeing across the industry so far are a result of the additional demand created during pandemic with additional capital availability and unique business opportunities. This is partly due to return to pre-pandemic model and also due to global slowdown," he says.
"The skilled labor shortage will only get worse in the near future, as the need for tech talent continues to grow and the gaps between the available supply and demand for these individuals are exacerbated," predicts Laura Baldwin, president of O'Reilly Media.
"Leaders need to come up with new solutions for how to manage this shortage now. If we can't hire the talent that we need, we need to invest in learning and development to train for the skills that will help our businesses succeed.
"There really isn't an alternative. Companies that allow themselves to fall behind will come out of the recession in much worse condition than those that don't."
Legal and General America continues to seek "good coding skills, along with evolving technology skills around machine learning, artificial intelligence, data science, and analytics," says Seetharaman. His company also seeks skills in cloud computing and cyber security, "all critical to digital transformation and innovation."
Over the coming year, Baldwin predicts "sustained demand for cloud engineers, data, and machine learning talent. As the world moves toward more virtual offerings, more e-commerce, and more real-time on-demand requirements, there's a greater need for site reliability engineers, mobile engineers, and those with the skills to ensure anytime, anywhere access with virtually no downtime. Demand for these skills will not be impacted by the economy."
Still, while hard technology skills will be in demand, the two most important skills professionals can bring to the table are communication and team management. "Communication skills have always been critical of course, but in our new virtual and hybrid work world they're even more so," says Baldwin. "Tech leaders need to be able to effectively update their teams, bring other teams along on their projects, and collaborate across the organization."
Qualities important to Legal and General America include a "growth mindset," Seetharaman states. "Business-aware technology professionals will be best placed to advance their careers. Learn more about the why rather than the what. Stakeholder management and communication skills continue to be key for increased collaboration and partnership needed to enable business growth."
Organizations, many of which may be facing the headwinds of a rough economy, will be turning to their technology executives and professionals for leadership and guidance. As was the case with Covid in 2020, digital solutions will provide ways to navigate and thrive in the turbulence.
| 2023-01-06T00:00:00 |
https://www.zdnet.com/article/what-the-latest-employment-surge-means-for-technology-jobs/
|
[
{
"date": "2023/01/06",
"position": 9,
"query": "AI unemployment rate"
}
] |
|
How Skills are Disrupting Work: The Transformational ...
|
How Skills are Disrupting Work: The Transformational Power of Fast Growing, In-Demand Skills
|
https://www.bhef.com
|
[] |
This state of skills report analyzed hundreds of millions of job postings from 2015 through 2021 to track critical changes in the job market.
|
How can today's learned skills impact an individual's earnings 20 years down the road? The Burning Glass Institute and BHEF recently investigated today's rapidly evolving job market and its most sought-after skills, offering a blueprint on preparing for future success.
Sponsored by Wiley, this state of skills report analyzed hundreds of millions of job postings from 2015 through 2021 to track critical changes in the job market. It found that one in eight job postings now requires one of the four skills that are growing the fastest and spreading the most rapidly across sectors. Together, these four emerging skills—Artificial Intelligence/Machine Learning, Cloud Computing, Product Management, and Social Media—unlocked 2.6 million job postings last year, with demand growing almost seven times faster than the job market overall.
| 2023-01-06T00:00:00 |
https://www.bhef.com/publications/how-skills-are-disrupting-work-the-transformational-power-of-fast-growing-in-demand
|
[
{
"date": "2023/01/06",
"position": 19,
"query": "machine learning job market"
}
] |
|
Machine Learning Course in Vadodara
|
Machine Learning Course in Vadodara
|
https://www.tops-int.com
|
[] |
The machine learning market is projected to continue its impressive growth experiencing an impressive growth of 38.8% in the next 10 years, with the market ...
|
Machine Learning Course in Vadodara for Students and Professionals
A Machine Learning Course in Vadodara can provide the perfect opportunity to gain the necessary skills and knowledge to stay ahead of the curve, given the high demand in the field. The machine learning market is projected to grow at an impressive 38.8% over the next 10 years, with the market value reaching a staggering USD 209.91 billion. Firstly, the increasing proliferation of data and the need for advanced analytics drive the demand for machine-learning solutions. Additionally, the increasing demand for artificial intelligence and the need for automation are other significant factors fuelling the market growth.
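Reading the quoted 38.8% as a compound annual growth rate (an assumption; the article does not specify), the arithmetic behind the projection can be checked directly:

```python
# Compound growth behind the quoted projection. Assumes 38.8% is a
# compound annual growth rate (CAGR); the implied starting market size
# is derived from the quoted target, not taken from the article.

CAGR = 0.388
YEARS = 10
TARGET_USD_BN = 209.91

growth_multiple = (1 + CAGR) ** YEARS           # roughly 26.5x over a decade
implied_base = TARGET_USD_BN / growth_multiple  # starting size consistent with target

print(f"10-year multiple: {growth_multiple:.1f}x")
print(f"Implied starting market: USD {implied_base:.1f} billion")
```

In other words, sustaining 38.8% yearly growth multiplies a market about 26.5 times in ten years, which implies a starting market near USD 8 billion for the quoted target.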
The Machine Learning Course In Vadodara is becoming increasingly popular due to the immense potential of this technology. Various organizations offer training and certification courses to help professionals gain the technical skills and knowledge necessary to operate and manage machine learning systems. Additionally, the increasing number of research and development activities in this field provides ample opportunities for professionals to gain the necessary expertise and experience to work with machine learning technologies.
If you are looking for the best machine learning course in Vadodara, you have come to the right place. TOPS Technologies provide the best and most comprehensive machine learning course in Vadodara, designed to help you understand the fundamentals of this technology and develop the necessary skills to use it effectively. Our course includes topics such as data analysis, machine learning algorithms, deep learning, big data tools, natural language processing, and more. With our in-depth and comprehensive course, you will gain a comprehensive understanding of machine learning and its applications in real-world scenarios.
Our Machine Learning Classes provide hands-on practice to help you gain the skills and knowledge required to use this technology effectively. Our experienced and expert instructors will help you make the most of this technology, enabling you to take your organization to the next level. So, enroll in our best machine learning course today and gain the skills you need to make the most of this technology.
What is Machine Learning?
Machine learning is a burgeoning field of artificial intelligence that leverages algorithms to analyze and interpret data, ultimately leading to the generation of informed predictions and decisions. This technology enables automated solutions to various intricate issues across sectors such as natural language processing, computer vision, and robotics. By utilizing machine learning, businesses can uncover patterns, trends, and behaviors from massive datasets, automating tasks and optimizing processes for enhanced efficiency and cost-effectiveness. In short, machine learning is a powerful tool for driving progress and streamlining operations in various industries.
Machine learning is revolutionizing various sectors as it grows in sophistication and complexity. This advanced technology allows businesses to stay agile, efficient, and informed by analyzing data and discovering patterns and trends that would otherwise go undetected. The potential of machine learning is vast, as it can drive innovation and transformation in various industries and help businesses gain a competitive edge. With its predictive capabilities, machine learning can improve decision-making and help businesses stay ahead. As technology progresses, more and more industries will be able to harness the power of machine learning to drive progress and success.
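The idea of learning patterns from data to make predictions, described above, can be shown concretely with a tiny nearest-neighbour classifier written from scratch. This is a teaching sketch in plain Python with invented data, not material from the course:

```python
# A from-scratch 1-nearest-neighbour classifier: the simplest concrete
# example of "learning from data to make predictions". Data is invented.

def predict(train, point):
    """Label a new point with the label of its closest training example."""
    def dist_sq(a, b):
        # Squared Euclidean distance between two feature vectors.
        return sum((x - y) ** 2 for x, y in zip(a, b))
    closest = min(train, key=lambda example: dist_sq(example[0], point))
    return closest[1]

# Toy training set: (feature vector, label) pairs.
train = [
    ((1.0, 1.0), "low_risk"),
    ((1.2, 0.8), "low_risk"),
    ((5.0, 5.5), "high_risk"),
    ((5.3, 4.9), "high_risk"),
]

print(predict(train, (1.1, 0.9)))  # near the low_risk cluster
print(predict(train, (5.1, 5.2)))  # near the high_risk cluster
```

Libraries such as Scikit-Learn wrap exactly this fit-then-predict pattern behind richer, optimized implementations.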
The machine learning market is expected to experience significant growth in the years to come, driven by the increasing demand for advanced analytics, automation, and artificial intelligence. Furthermore, the increasing availability of sophisticated hardware, cloud computing, and research and development activities also provide ample opportunities for the further expansion of this market. Therefore, the Machine Learning Course In India is gaining traction as professionals looking to gain the necessary skills and knowledge to manage and operate machine learning systems.
The Machine Learning Course in Vadodara offered by TOPS Technologies is a comprehensive and in-depth program that covers the essential concepts of machine learning. As one of the top machine learning institutes in Vadodara, TOPS Technologies offers students the opportunity to gain practical experience in the field through hands-on training and real-world projects. The course begins with Python essentials and moves on to data preprocessing, two crucial foundations for any machine learning enthusiast. From there, students will delve into the practical application of machine learning, working with libraries such as Scikit-Learn, Keras, and TensorFlow. Whether you are a beginner or an experienced professional, this course is perfect for anyone looking to gain a thorough understanding of machine learning and its applications.
Machine Learning Classes in Vadodara by TOPS Technologies is the perfect opportunity for anyone looking to dive into the exciting world of machine learning. With comprehensive training courses and experienced instructors, you'll gain the practical skills and knowledge you need to succeed in this rapidly growing field. Whether you're just starting out or looking to take your career to the next level, this course is designed to provide the guidance and support you need to excel. Enroll in the Machine Learning Training In Vadodara of TOPS Technologies today and begin your journey to a rewarding career.
Why should you take a Machine Learning Course in Vadodara?
Benefits of Machine Learning
Wide Range of Applications
Machine learning applications are vast and diverse, allowing us to leverage their power across various industries, including medicine, finance, technology, and science. Its impact on customer interactions is particularly noteworthy, and it helps to quickly detect diseases and enhance business operations. In light of these benefits, investing in machine learning technology is a sound and wise decision for any organization.
Automation of Work
The rise of automation through Machine Learning transforms how we work and think. By automating tedious and laborious tasks, Machine Learning allows us to focus on more creative pursuits while still producing reliable results. Additionally, it has enabled the creation of more advanced computers capable of processing and executing Machine Learning models and algorithms with greater efficiency. While the widespread adoption of Machine Learning automation is undeniable, it is also met with caution as it continues to shape the industry at a rapid pace.
Efficient Data Handling
Efficient data handling is a crucial aspect of Machine Learning and vital for successfully implementing any Machine Learning model. Machine Learning algorithms are equipped to handle various data types, including multidimensional or heterogeneous data, and can process and analyze this data in a manner that traditional systems cannot. Data is the lifeblood of Machine Learning, and the proper management of it is a field of study in and of itself. In short, efficient data handling is essential for unlocking the full potential of Machine Learning.
Trends Identification for Business
A machine's ability to understand and draw insights from data is unmatched. As it is exposed to more information, it can identify trends and patterns within that data. For instance, on a social networking site like Facebook, users provide the platform with data about their interests, which can then be used to create a more personalized experience for them. By utilizing machine learning, the platform can detect and understand the patterns in this data and, in turn, display similar trends to the user to maintain engagement. In this way, machine learning can be applied to recognize and discern patterns and trends.
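The pattern detection described above can be sketched in highly simplified form (illustrative only; this is not how any real platform works, and the interest data below is invented): count which interests co-occur with a user's existing ones across other users, then surface the most common as suggestions.

```python
# Toy sketch: suggest topics that co-occur with a user's interests.
# All data is invented for illustration.
from collections import Counter

user_interests = [
    {"football", "cricket", "fitness"},
    {"football", "fitness", "cooking"},
    {"cricket", "football", "travel"},
]

def recommend(current_user, all_users, top_n=2):
    """Suggest interests that co-occur with the user's existing ones."""
    counts = Counter()
    for other in all_users:
        if other & current_user:          # shares at least one interest
            counts.update(other - current_user)
    return [topic for topic, _ in counts.most_common(top_n)]

print(sorted(recommend({"football"}, user_interests)))  # ['cricket', 'fitness']
```

Production recommenders replace the raw co-occurrence count with learned models, but the underlying idea — infer unseen preferences from patterns shared with similar users — is the same.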
Machine Learning Training in Vadodara by TOPS Technologies is designed to help business professionals understand the fundamental principles of Machine Learning and its practical applications in their business. Through this course, professionals will learn how to utilize Machine Learning algorithms to gain valuable insights from data and make informed decisions. By providing the opportunity to apply Machine Learning models to real-world business problems, this training helps professionals develop the skills necessary to automate processes, predict future trends, and improve their organization's performance. Invest in your professional development and join our Machine Learning Training in Vadodara today.
Industries that use Machine Learning
Healthcare: In the healthcare industry, machine learning is utilized to analyze and interpret patient data for improved diagnosis and treatment.
Finance: The finance industry uses machine learning to detect fraudulent transactions and predict market trends.
Retail: Machine learning is used to personalize customer experiences and optimize supply chain management.
Manufacturing: Manufacturing companies utilize machine learning to predict maintenance needs and optimize production processes.
Agriculture: Machine learning optimizes crop yields and predicts weather patterns.
Transportation: The transportation industry employs machine learning to optimize logistics and improve traffic flow.
Machine Learning Job Roles and Salary
There are many different types of Machine Learning job roles. Here is a list of some common ones:
Machine Learning Engineer
As a highly coveted profession in the machine learning industry, Machine Learning Engineers are responsible for designing and deploying machine learning models, optimizing data pipelines, and creating complex datasets. These skilled professionals utilize their models to uncover trends and make predictions that aid businesses in reaching their goals. In addition, Machine Learning Engineers are responsible for developing recommender systems that power various digital platforms. If you aspire to become a Machine Learning Engineer, it is crucial to understand the role's requirements and how to excel in your job search. These sought-after professionals can earn an annual salary of up to 15 LPA in Vadodara.
Machine Learning Scientist
These professionals are responsible for designing and implementing machine learning models and algorithms and conducting research to advance the field. They may work in various industries, including healthcare, finance, and technology, and their expertise is in high demand due to the increasing reliance on machine learning in these sectors. Machine Learning Scientists are responsible for the end-to-end process of building machine learning models, from data preprocessing and feature selection to model training and evaluation. They must also be able to communicate their findings and results effectively to both technical and non-technical audiences. As a result of their specialized skills and expertise, Machine Learning Scientists can command high salaries of up to 52 LPA in their field.
Robotics Engineers
As robots' role in our daily lives continues to expand, the demand for skilled Robotics Engineers only grows. From developing a robot's computer vision to enable it to interpret and understand the visual world around it and make safe decisions to designing algorithms to process large amounts of data produced by machines that assemble vehicle parts, the opportunities for Robotics Engineers are virtually limitless. With a background in machine learning, you possess an even more significant advantage as robots often require the ability to mimic human behavior or optimize efficiency. If you are seeking a fulfilling and innovative career, consider pursuing a role as a Robotics Engineer. In this field, you can create machines that make people's lives easier while earning an average salary of up to 7 LPA.
Data Scientist
As a Data Scientist, it is essential to have a thorough understanding of data analysis and processing techniques and the ability to interpret data and use the appropriate tools to create predictive models. In addition to strong technical and mathematical skills, it is also essential to communicate findings and ideas in a way that can be easily implemented. Data Science is a rapidly growing field with numerous opportunities, and with the right qualifications and skills, you can pursue a lucrative and fulfilling career in this field. The role of a Data Scientist is invaluable and in high demand, making it a highly sought-after position. Data Scientists can earn an average salary of up to 29 LPA annually.
Software Developer
As technology advances, Software Developers are at the forefront of innovation, creating solutions to challenges that were once thought impossible. From developing mobile applications to constructing robust operating systems, these professionals are responsible for creating efficient and effective software solutions in various industries. With the right skills and knowledge, Software Developers can develop tools that make a real impact on people's lives all over the world. Whether creating a mobile application or designing a cutting-edge operating system, these professionals always find new and creative ways to solve complex problems. With an average annual salary of up to 12 lakhs, a career as a Software Developer is both rewarding and lucrative.
Who can enroll in our Machine Learning Course at TOPS Technologies in Vadodara?
At TOPS Technologies, our Machine Learning Course in Vadodara suits learners of all levels, from beginners to professionals. Our comprehensive course materials and experienced instructors ensure that you will gain the knowledge and skills necessary to succeed as a Machine Learning expert by the end of our Machine Learning Training in Vadodara. Take advantage of this opportunity to learn and advance your skills in this rapidly growing field. Enroll in our Machine Learning Classes in Vadodara and increase your knowledge.
Upon completion of TOPS Technologies' Machine Learning Course in Vadodara, students will be equipped with the skills and knowledge necessary to apply concepts learned to real-world problems, swiftly build machine learning models, and understand the fundamental principles of artificial intelligence. Additionally, they will be able to comprehend the complexity of the algorithms utilized and the significance of their applications. If you have questions regarding the Machine Learning Fees in Vadodara, please do not hesitate to contact our team to receive affordable rates. Invest in your professional development and join our esteemed Machine Learning Course in Vadodara.
Machine Learning Course in Vadodara With 100% Job Placement Assistance
At TOPS Technologies in Vadodara, you will learn from faculty who are experts in the machine learning field. Get job-ready after completing the machine learning course as you pick up crucial skills like prediction, classification, information retrieval, and clustering. Get acquainted with machine learning techniques as you create algorithms and work with real-time data. Our balanced machine learning course in Vadodara emphasizes both theory and practicals to provide well-rounded subject knowledge.
We have successfully pioneered many IT courses in Vadodara, with over 12000 careers shaped nationally. Our best-in-class learning infrastructure and learning methods have won us students’ faith over time. If you want to take the next giant career leap, look no further, as we help you reshape your career at our training center in Vadodara.
TOPS Technologies' Complete Machine Learning Training in Vadodara is designed to provide industry-relevant topics and the latest trends and technologies to help students become job-ready. Our experienced instructors guide students through the fundamentals of machine learning, gradually progressing to more advanced concepts.
This course is suitable for anyone looking to get started with machine learning or enhance their skills, learn the fundamentals of data processing, clustering, classification, and more, gain exposure to the latest machine learning tools and techniques, and understand the critical elements of machine learning such as statistical pattern recognition and data mining.
In addition, students will work with machine learning libraries and learn how to evaluate the performance of algorithms to gain insights and make predictions. Enroll in our Machine Learning Classes in Vadodara and take your knowledge and skills to the next level.
Get a Customised Machine Learning course in Vadodara.
Get your machine learning course customized as our experienced professional coaches set it up at your school or workplace. Avail of a machine learning course at your doorstep, at your convenience.
For more details regarding course curriculum, duration and fees, reach out to us by sending us an email at [email protected] or calling us at +91 – 7622011173 for a free demo.
| 2023-01-06T00:00:00 |
https://www.tops-int.com/machine-learning-course-in-vadodara
|
[
{
"date": "2023/01/06",
"position": 64,
"query": "machine learning job market"
}
] |
|
Here's how artificial intelligence can benefit the retail sector
|
Here's how artificial intelligence can benefit the retail sector
|
https://www.weforum.org
|
[] |
AI opens the door for businesses to make "smart" staffing and replenishment decisions that optimize labour and replenishment costs, while also helping to ...
|
Artificial intelligence can support retail operations, increasing profits and optimizing business processes.
Specific benefits include automation, loss prevention and sustainability.
It can also bring down costs, optimize supply chains and increase customer satisfaction.
Retail businesses need to prioritize profit and productivity to remain competitive in today's global market. It’s necessary to act quickly and effectively to ensure success and to stay ahead of competitors. Artificial intelligence (AI) can provide support for retail operations, increasing profits and optimizing business processes. AI services in the retail sector are predicted to increase from $5 billion to above $31 billion by 2028.
There are concerns about the impact of AI on jobs in the retail sector. For example, one report for the UK civil service found that "significant net employment reductions are projected in wholesale and retail, finance and public administration areas in the short to medium term". However, in the longer term, AI is likely to change the role of cashiers rather than eliminate it. By automating certain tasks, such as inventory tracking, AI can free cashiers to focus on more complex tasks requiring human interaction.
AI opens the door for businesses to make "smart" staffing and replenishment decisions that optimize labour and replenishment costs, while also helping to eliminate out-of-stock situations and maximize sales. AI will shift what retail roles look like, making businesses more efficient.
As technology continues to evolve, retail organizations are increasingly seeking to understand how artificial intelligence is reshaping the industry. Here are some of the principal AI applications for the sector:
1. Automation
Artificial intelligence plays a significant role in automating many tasks that were once done by humans. This allows workers to spend less time on mundane tasks and more time devoted to customer service. Overall, this process increases efficiency and helps to improve the customer experience.
The presence of computers in the retail industry has allowed businesses to handle more complex tasks, such as customer problem-solving. These advances improve productivity, overall profit, environmental outcomes, and sales.
2. Loss prevention
AI technology is spurring self-checkout innovation, offering a secure method of scanning that also helps prevent shoplifting. It can run without any human assistance and gives customers more control over the shopping process. Under the new system, AI authentication will be used to log data on suspicious shoplifters.
3. Sustainability
Artificial intelligence has the potential to make retail operations much more sustainable. AI forecasting tools help businesses achieve carbon neutrality by monitoring emission rates and promoting recycling. In addition to many other advantages, artificial intelligence reduces the environmental strain caused by travelling to physical stores and minimizes waste materials sent to landfills.
4. Bringing down costs
Implementing artificial intelligence can improve the organization and streamline the workforce of retail businesses, giving them the power to make informed decisions based on data. It is anticipated that AI will be responsible for automating mundane tasks and optimizing more demanding work, such as delivering, tracking and scheduling. All of these advances have the potential to make jobs much easier and more efficient for employees.
5. Supply chain optimization
Artificial intelligence technology can review consumers' previous purchase patterns and provide an alert when the stock of best-selling products may reach a critically low level. Maintaining a well-stocked inventory is of utmost importance to retailers. AI can also provide insights into the temporal patterns of consumer demand, including identifying seasonal item trends and estimating when these items will be in the highest demand.
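As a rough sketch of the alerting idea (illustrative only; real retail systems use far more sophisticated demand forecasting, and the product data below is invented), a low-stock alert can be as simple as comparing estimated days of remaining cover against the restocking lead time:

```python
# Toy sketch: flag products whose stock may run out before a restock
# arrives, based on average recent daily sales. Data is invented.

def days_of_cover(stock, recent_daily_sales):
    """Estimate how many days the current stock will last."""
    avg_daily = sum(recent_daily_sales) / len(recent_daily_sales)
    return float("inf") if avg_daily == 0 else stock / avg_daily

def low_stock_alerts(inventory, lead_time_days=3):
    """Return product names likely to stock out before restocking arrives."""
    return [name for name, (stock, sales) in inventory.items()
            if days_of_cover(stock, sales) < lead_time_days]

inventory = {
    "oat milk":  (12, [5, 6, 7]),   # ~6/day -> about 2 days of cover
    "batteries": (40, [2, 1, 3]),   # ~2/day -> about 20 days of cover
}
print(low_stock_alerts(inventory))  # -> ['oat milk']
```

An AI-driven system would replace the simple average with a model that accounts for seasonality and trends, but the trigger logic — projected cover versus lead time — is the same.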
6. Customer satisfaction
AI also brings benefits to consumers. For example, chatbots can help customers navigate the store quickly and receive personalized product recommendations. By providing personalized recommendations, AI makes checkout faster and more efficient. By using AI in this way, businesses show customers that they value their time and are willing to go the extra mile to make sure they have the best possible experience.
At SandStar, we have developed services including the Smart Kiosk system, which is designed to alleviate operational challenges faced by retailers through visual analysis and multidimensional data. Our Smart Stores allow businesses to make smart decisions to optimize daily operations.
Artificial intelligence (AI) can provide support for retail operations, leading to an increase in profits. Image: SandStar
| 2023-01-06T00:00:00 |
https://www.weforum.org/stories/2023/01/here-s-how-artificial-intelligence-benefit-retail-sector-davos2023/
|
[
{
"date": "2023/01/06",
"position": 34,
"query": "AI job creation vs elimination"
}
] |
|
Artificial Intelligence and the Future of Occupations
|
Artificial Intelligence and the Future of Occupations: Comparative Perspectives from the US and the UK
|
https://www.kcl.ac.uk
|
[
"King'S College London"
] |
Will robots take over our jobs? This project examines how artificial intelligence has influenced knowledge services sectors.
|
Will robots take over our jobs? This Cornell University and King's College London collaboration examines how artificial intelligence (AI) has influenced major knowledge-intensive services sectors, such as telecommunications and health care -- and how governments, employers and workers have responded to the challenges that smart technologies pose for the world of work.
Taking the United States and the United Kingdom as our case studies, we will explore a wide range of emerging issues and countervailing forces (e.g., public policies, professional associations, vocational training systems, licensing bodies and laws, unions and labor market regulation).
The study aims to be the first to systematically map these issues in the United States and the United Kingdom, with the goal of launching a mixed-methods project that covers a broader set of country cases. In so doing, the collaboration leverages the interdisciplinary expertise of our institutions to inform policy debates at the intersection of AI and work.
| 2023-01-06T00:00:00 |
https://www.kcl.ac.uk/research/artificial-intelligence-and-the-future-of-occupations-comparative-perspectives-from-the-us-and-the-uk
|
[
{
"date": "2023/01/06",
"position": 31,
"query": "future of work AI"
},
{
"date": "2023/01/06",
"position": 13,
"query": "AI labor union"
}
] |
|
Washington and the Future of Work
|
Washington Workforce Training & Education Coordinating Board
|
https://wtb.wa.gov
|
[] |
Prior to the pandemic, future of work discussions were often framed in terms of advancing technologies, such as Artificial Intelligence (AI), automation ...
|
Washington and the Future of Work
What is the future of work? It’s a pressing question, as the state recovers from a world-changing pandemic. Automation is advancing rapidly. Remote work has become the new normal. Frontline workers, many of whom risked their health to keep their jobs, face choices about what to do now that the economy has begun to recover. For others, whose jobs disappeared during economic shutdowns, starting something new isn’t a choice, but a necessity.
What we know…so far
Washington was the first state in the nation to begin to answer the question of “what does the future of work look like?” The Legislature funded a Future of Work project and Task Force in 2018. The Task Force crafted 17 recommendations across five key topic areas and delivered a robust report. Several of these recommendations were adopted by the state Legislature and are now in law or budget. Since then, funding for the project, and task force, have ended. But the highlighted issues have only grown more prominent, with COVID-19 putting many of those changes in overdrive.
What’s next?
A U.S. Department of Labor Dislocated Worker Grant is focused on helping local workforce development boards advance “future of work” strategies for dislocated workers (see sidebar).
Our job now is to help workers, job seekers, businesses across industry and sector, along with workforce system practitioners, adapt to the future of work with new thinking and new resources. These tools must be relevant to an increasingly digital, data driven, and more inclusive economy. We intend to keep these web pages fresh with the latest information to help drive economic recovery—from an organized library of research and reports, to top tips from across the nation and world.
Focus on equity key
Prior to the pandemic, future of work discussions were often framed in terms of advancing technologies, such as Artificial Intelligence (AI), automation, big data, facial recognition, machine learning, and robots. However, since the onset of the COVID-19 pandemic and resulting economic and social upheavals, these discussions are now more focused on equity, inclusion and economic inequality issues.
This does not mean that technology isn’t advancing. It is, in fact, accelerating. But policy efforts are taking into account many more factors related to income inequality and economic disparity experienced by Blacks, Hispanics, women, residents of rural communities, and other disadvantaged populations. These inequities were already present pre-pandemic, but the pandemic made them visible in a big way. Tackling these inequities and providing our workforce and employers with the tools to succeed is our challenge.
| 2023-01-06T00:00:00 |
https://wtb.wa.gov/planning-programs/future-of-work/
|
[
{
"date": "2023/01/06",
"position": 45,
"query": "future of work AI"
},
{
"date": "2023/01/06",
"position": 45,
"query": "government AI workforce policy"
}
] |
|
How AI will change the future of UI design
|
How AI will change the future of UI design
|
https://indulge.digital
|
[
"Russell Isabelle"
] |
I wouldn't panic just yet, we're not all hanging up our design tools tomorrow. Actually, I think the opportunity is presenting itself to work together with AI ...
|
AI is a hot topic at the moment with OpenAI training ChatGPT and causing shockwaves in the industry. My business partner Patrick Cunningham has covered ChatGPT in his latest blog.
It shouldn't be a surprise that an AI chatbot has been able to become highly intelligent and mimic human responses. 20 years ago I wrote my degree dissertation on the blurred boundaries between virtual reality and reality. The human race has been on a mission to recreate and improve our own world and minds using machines for a long time. Forbes dates mechanical learning all the way back to 1308. Digital computers have been around since the 1940s. As technology evolves, the output improves and there are more ways it can speed up or replace organic processes.
With my interface designer hat on, I’m still making my own design decisions today. This includes creating design systems, layouts or conducting UX research. How long before it’s normal for these decisions to be guided by AI or, at the extreme, owned by AI, with the code flawlessly published? I wouldn’t panic just yet, we’re not all hanging up our design tools tomorrow. Actually, I think the opportunity is presenting itself to work together with AI and produce unimaginably good websites and apps. Our best work yet will be AI assisted.
To gather my thoughts on the impact it might have I’ve listed out some pros and cons.
Pros
Processing complex datasets and displaying the data
Layout decisions to improve conversions
Ideas for new app features
Transcribing speech (AWS Amazon Transcribe)
Designing advertising banners
Creation of persona profiles
Compelling content strategies with perfect grammar
Image sourcing and editing
... it's more difficult for me to think of an aspect of interface design that wouldn't benefit. The amount of time saved would be incredible and the implementations are endless. AI would quickly become an indispensable cog in the production cycle.
Cons
Communicating the AI output and justifying the reasoning behind any decisions to clients and the team
Integrating AI seamlessly into existing processes
Human error in the initial input to the AI could produce the wrong results
Biased output due to problems in the way the AI was initially built
Managing underperforming AI generated work
These are all big potential challenges that will exist with humans operating AI to offer digital services to other humans.
One final thing, I’ve also asked ChatGPT what it thinks.
| 2023-01-06T00:00:00 |
2023/01/06
|
https://indulge.digital/blog/impact-ai-ui
|
[
{
"date": "2023/01/06",
"position": 85,
"query": "future of work AI"
}
] |
The Economics of AI | Darden Ideas to Action
|
The Economics of AI
|
https://ideas.darden.virginia.edu
|
[
"Insights From"
] |
The past year has seen a dramatic shift in the landscape for the economics of AI. Artificial intelligence has made remarkable progress, and this progress ...
|
Cognitive automation: This is set to be a major trend in 2023 and beyond. As the capabilities of large language models continue to expand, cognitive workers (such as myself) are increasingly at risk of being automated. This means that economists must abandon the notion that automation only affects routine jobs, and that human creativity is somehow — miraculously — immune to automation. To keep up with this new reality, economic models must be adapted accordingly.
| 2023-01-06T00:00:00 |
https://ideas.darden.virginia.edu/economics-of-ai-roundup
|
[
{
"date": "2023/01/06",
"position": 4,
"query": "AI economic disruption"
}
] |
|
How COVID-19 impacted supply chains and what comes next
|
How COVID-19 impacted supply chains and what comes next
|
https://www.ey.com
|
[
"Sean Harapko",
"Authorsalutation",
"Authorfirstname Sean Authorlastname Harapko Authorjobtitle Ey Americas Consumer Products Growth",
"Beverage Sector Leader Authorurl Https",
"Www.Ey.Com En_Gl People Sean-Harapko",
"Content Dam Content-Fragments Ey-Unified-Site Ey-Com People Global En S Sean-Harapko",
"Ey Americas Consumer Products Growth",
"Beverage Sector Leader",
"About This Article",
"Passionate About Friends"
] |
We can think about autonomous operations in terms of “lights-out,” “hands-free” and “self-driving,” where organizations use AI technologies across the end-to- ...
|
Big changes are on the horizon for supply chains and greater supply chain visibility, efficiency and resilience are top of mind.
The executive supply chain surveys indicate that visibility, efficiency and reskilling supply chain workers are top priorities. These findings are not surprising, as cost optimization in the supply chain will always be a focus, even in the face of building out additional resiliency. Cost reduction has in the past been achieved through lean operations, longer lead times and low-cost labor. But in the future, agility, visibility, automation and upskilled people will be key, which together drive not only cost reductions but also better decision-making, process standardization and excellence across the supply chain and client ecosystem partners.
The surveys show that supply chain visibility was a top-three priority in all years the research surveys were conducted. While increased broad supply chain visibility is a continuous top priority, it remains a work in progress, especially as supply chains continue to increase in complexity. Modernizing the supply chain at scale through technology and digital solutions such as generative artificial intelligence (GenAI) can certainly help improve visibility; however, these are enablers for a bigger challenge — Integrated Business Planning, with the process and organizational discipline to drive integration across the enterprise among commercial, supply chain and finance functions, and then beyond into the supplier network.
With the need for increased visibility across typically hundreds or thousands of suppliers, we are already seeing a shift from linear supply chains to more integrated networks connecting many players. Enabling this sea change are technologies such as Internet of Things (IoT) devices or sensors that provide valuable data on where goods are in the chain and their condition — for example, products for which temperature monitoring may be critical (i.e., frozen foods, vaccines or other medicines). Cloud-based platforms for collaboration among suppliers and supply chain orchestration (i.e., control towers) have also seen increased piloting and adoption.
Our 2022 survey found that 61% of respondents said they will retrain and reskill their workforce. And going forward there will be efforts to help workers use digital technologies, adapt to changing company strategies and ways of working like increased virtual collaboration, and assist people in operating equipment with health and safety in mind. Top workforce measures identified in the 2022 survey include increased automation (63%) and investments in AI and machine learning, with 37% of respondents already deploying these technologies and another 36% planning to use them soon.
It may be safe to assume that because of the COVID-19 pandemic, companies put their sustainability goals on hold in order to manage through the pandemic. The 2022 survey found just the opposite — 80% were more focused on environmental, social and governance (ESG) goals. Cost savings, compliance and pressure from the workforce and suppliers were the top motivators for improving supply chain sustainability. With investors seeking information on a company’s ESG performance, employees wanting to work for companies with sustainability built into their mission statements, increased customer expectations for sustainability and increasing regulation from various countries, sustainable supply chain practices no doubt are here to stay.
| 2024-01-23T00:00:00 |
2024/01/23
|
https://www.ey.com/en_gl/insights/supply-chain/how-covid-19-impacted-supply-chains-and-what-comes-next
|
[
{
"date": "2023/01/06",
"position": 43,
"query": "AI economic disruption"
}
] |
2022 Labor & Employment Year in Review … and Looking ...
|
2022 Labor & Employment Year in Review … and Looking Ahead to 2023
|
https://www.lawandtheworkplace.com
|
[
"Evandro Gigante",
"Laura M. Fant",
"Arielle E. Kobetz",
"Olympia Karageorgiou",
"David Gobel",
"January",
".Wp-Block-Co-Authors-Plus-Coauthors.Is-Layout-Flow",
"Class",
"Wp-Block-Co-Authors-Plus",
"Display Inline"
] |
Our analysis of the proposed changes can be found here. Posted in Artificial Intelligence, Discrimination, Harassment and Retaliation, Workplace Policies and ...
|
There is no doubt that 2022 was an eventful year in employment law. In this post, we review some key developments from the prior year that employers should be aware of and hot topics to watch out for as we move forward into 2023.
Salary and Pay Transparency
The trend of enacting salary and pay transparency laws continued in 2022 and shows no signs of slowing down. As discussed in more detail in our previous blog post, several jurisdictions passed or enacted salary transparency legislation last year, including New York City (effective November 1, 2022), Westchester County, New York (effective November 6, 2022), California (effective January 1, 2023), and Washington State (effective January 1, 2023). Though employers’ specific obligations under pay transparency laws vary among jurisdictions, these laws generally require employers to disclose a prospective salary or salary range when advertising an open employment position. For example, New York City’s pay transparency law (discussed in more detail here) provides that “it shall be an unlawful discriminatory practice for an employment agency, employer, or employee or agent thereof to advertise a job, promotion or transfer opportunity without stating… the lowest to the highest annual salary or hourly wage the employer in good faith believes at the time of the posting it would pay for the advertised job, promotion or transfer opportunity.”
Limits on NDAs and Mandatory Arbitration
The “Speak Out Act,” which was signed into law by President Biden on December 7, 2022, renders pre-dispute nondisclosure and non-disparagement clauses judicially unenforceable with respect to sexual assault or sexual harassment disputes.
Earlier in 2022, President Biden enacted the “Ending Forced Arbitration of Sexual Assault and Sexual Harassment Act,” which similarly prohibits enforcement of mandatory pre-dispute arbitration agreements, as well as agreements prohibiting participation in a joint, class or collective action in any forum, “at the election of the person alleging conduct constituting a sexual harassment dispute or sexual assault dispute, or the named representative of a class or in a collective action alleging such conduct.”
In 2023, employers should be on the lookout for the potential expansion of similar laws at the more local level, as several states (including California, Hawaii, Maine, New York, Oregon, and Washington State) have recently enacted more expansive laws regarding non-disclosure and arbitration agreements in the wake of the #MeToo movement.
Expanding Leave Benefits
While there is still no paid leave benefit requirement for employers under federal law, states and localities are continuing the trend of enacting (or otherwise expanding existing) paid sick, paid medical and/or paid family leave laws (including New York, California, Massachusetts, the District of Columbia, and Chicago, Illinois). In 2022, Delaware and Maryland became the latest states to enact new paid family and medical leave legislation. Payroll deductions under the laws are scheduled to begin on October 1, 2023 in Maryland and on January 1, 2025 in Delaware. Employees will be eligible to take statutory paid leave in Maryland beginning in 2025 and in Delaware starting in 2026. Our analysis of the new paid leave laws can be found here.
Employee Monitoring and Data Collection
With remote and hybrid work looking like it’s here to stay – and with the recent emergence of “quiet quitting,” referring to employees doing the bare minimum required of their job – some employers may consider expanding electronic monitoring of employees in response. These employers should be mindful of certain laws that limit the ability of employers to monitor employee activity.
Effective May 7, 2022, New York law requires all private employers with a place of business in New York state to provide written notice upon hire to new employees if they monitor, or plan to monitor, their employees through telephone, email, or internet communications. New York City is also currently considering an ordinance that would place significant limitations on employers’ ability to utilize electronic monitoring for purposes of discipline or discharge. And effective January 1, 2023, the California Privacy Rights Act (CPRA) expanded to apply to employer/employee data, and requires employers to let workers know what personal information they collect about them, among other provisions. Enforcement of the CPRA, however, will not begin until July 1, 2023.
In addition to state law action, the proposed bipartisan federal Data Privacy and Protection Act seeks to regulate the kind of personal data companies can collect on employees, and could preempt existing state law if passed. However, the law faces opposition from Democratic leadership at the federal and state levels, as well as from business interests. The National Labor Relations Board (NLRB) may also become more active in this area, as NLRB General Counsel Jennifer Abruzzo recently issued a memo advocating for “zealous enforcement” to protect employees from intrusive or abusive forms of electronic surveillance.
New California Laws to Watch Out For In 2023
A new year in California brings the arrival of many new labor and employment laws, and 2023 is no exception. Here, we highlight some of these recently enacted laws:
Fast Food Accountability and Standards (FAST) Recovery Act: On September 6, 2022, California Governor Gavin Newsom signed this law that reshapes standards for employers in the “fast food” industry. The bill applies to non-unionized fast food chains with more than 100 locations, and establishes a council that will determine a minimum wage and working conditions for covered employees. The law was set to come into effect on January 1, 2023, but enforcement of the FAST Act was temporarily enjoined in December 2022, pending a challenge to the statute.
State of Emergency: This law, effective January 1, 2023, prohibits employers from taking adverse employment action against employees who have a reasonable belief that a worksite is unsafe.
Leave Law Developments: California passed two leave expansion laws, both of which took effect on January 1, 2023. AB 1041 expands the definition of a "designated person" when an employee takes medical leave to care for others. "Designated person" now includes "any individual related by blood or whose association with the employee is the equivalent of a family relationship." AB 1949 provides employees with up to five days of bereavement leave upon the death of a qualifying family member.
Additionally, California enacted pay transparency requirements, discussed further in our previous blogs here and here.
PAGA and Viking River Cruises
The California Labor Code Private Attorneys General Act of 2004 (“PAGA”) also will remain a hot-button issue for California employers in the aftermath of the Supreme Court’s ruling in Viking River Cruises, Inc. v. Moriana, Case No. 20-1573, 142 S.Ct. 1906 (June 15, 2022). The opinion reversed previous California Supreme Court precedent by holding that PAGA claims can be separated into “individual” and “representative” claims, and that “individual” claims can be subject to mandatory arbitration. Our full summary of the ruling can be found here.
However, it remains uncertain whether “representative” claims can be subject to mandatory arbitration. The California Supreme Court is poised to determine in Adolph v. Uber Technologies, Inc., whether an employee who has had “individual PAGA claims” forcibly arbitrated maintains statutory standing for representative claims. Alternatively, the California legislature could follow advice from Justice Sotomayor’s concurrence in Viking River Cruises and modify PAGA to explicitly allow an employee to litigate representative PAGA claims on behalf of other employees, even if the employee loses individual standing because the employee-plaintiff’s claims have been compelled to arbitration.
Finally, in 2024, California voters will have an opportunity to vote on the future of PAGA directly, by choosing whether to replace it entirely with the California Fair Pay and Employer Accountability Act (“FPEAA”). We will continue to report evolving developments in the PAGA space in 2023 on California Employment Law Update.
DEI (Diversity, Equity, and Inclusion) Initiatives
Many employers continued to focus in 2022 on improving DEI (diversity, equity, and inclusion) in their workplaces. However, those initiatives have not been without challenge. Some state legislatures have passed laws restricting activities that have long been associated with DEI efforts. For example, Idaho passed legislation specifically targeted to stop “preferential treatment” when hiring employees based on race, while Florida’s Individual Freedom Act, which amended Florida’s Civil Rights Act, prohibits employers from endorsing various race-, sex-, and national origin-based concepts during mandatory trainings. A federal judge enjoined much of the Florida law, but that decision is currently on appeal.
In 2023, the Supreme Court is also poised to decide what could be a landmark decision on race conscious admissions in higher education. On October 31, 2022, the Court heard oral arguments in two cases related to affirmative action programs at universities. Students for Fair Admissions, Inc. v. Pres. and Fellows of Harvard College, 142 S. Ct. 2810 (2022); Students for Fair Admissions, Inc. v. U. of N. Carolina, 142 S. Ct. 2809 (2022). A ruling on the lawfulness of these programs, even if narrowly tailored to college affirmative action programs, could have implications down the road for employer-sponsored DEI initiatives.
Employee v. Independent Contractor Tests
Uncertainty surrounding how various federal and state laws classify workers will keep challenging employers in 2023.
On October 13, 2022, the Department of Labor ("DOL") issued a proposed new rule for classifying workers. The proposed rule largely mirrors the Obama-era test and aims to fully replace guidance issued in the waning days of the Trump administration. The proposed rule seeks to focus "on the economic realities of the workers' relationship with the employer," and frames a worker's economic dependence on their employer as the "ultimate inquiry" for the test.
Ultimately, courts have the power to apply the new multi-factor test as they see fit. In addition, certain states, including California, Massachusetts, and New Jersey, have adopted the "ABC test" in recent years, an even more stringent test used to determine whether workers should be classified as employees rather than as independent contractors. The test's stringency has made it a target for employers and certain workers who desire to be classified as independent contractors, including truckers in California who protested earlier this year. This year, a California appellate court held that the "ABC test" for determining employee vs. independent contractor status can be applied even if workers do not first establish that they were actually hired by the defendant-employer or its agent. No sector is more in flux under these laws than the "gig economy."
The independent contractor rules are another area of law that employers should monitor, particularly for developments throughout the various states that adopt different standards.
Artificial Intelligence (AI) Tools in Employment
In recent years, an increasing number of employers have turned to artificial intelligence (AI) tools to help more effectively evaluate and select job candidates. Federal, state and local governments have begun scrutinizing the use of these tools in response to research suggesting that the widespread use of AI tools may increase the possibility of bias or discrimination on the basis of protected characteristics.
For example, in May 2022 the DOJ and EEOC issued guidance for employers concerning the use of AI tools. On the local level, beginning in April 2023, New York City employers will be prohibited from using automated employment decision tools to screen applicants and employees, unless the tool has been subject to a bias audit and the employer satisfies a series of potentially burdensome notice requirements. While the law was initially set to take effect on January 1, 2023, the NYC Dept. of Consumer and Worker Protection has announced that due to the pendency of proposed rules, the law will not be enforced until April 15, 2023.
Further, two measures were proposed in 2022 to regulate the use of AI tools in employment in the state of California. In January 2022, AB 1651— Worker Rights: Workplace Technology Accountability Act— was introduced into the California State Legislature. This bill would (1) grant California employees the right to “know, review, correct, and secure data collected from them by their employer”; (2) “impose various limitations on the collection and use of data via electronic monitoring”; (3) limit the use of “machine learning, statistics, or other data processing or artificial intelligence techniques, that makes or assists an employment-related decision”; and (4) “require employers to prepare and publish impact assessments for the use of various technology.” In March 2022, the California Fair Employment and Housing Council published a Draft Modification to Employment Regulations Regarding Automated-Decision Systems, which seeks to further regulate California employers’ use of AI tools.
OFCCP Enforcement
The U.S. Department of Labor’s Office of Federal Contract Compliance Programs (OFCCP) has increased its enforcement efforts under the Biden administration, as indicated in Directive 2022-02, Effective Compliance Evaluations and Enforcement, and through OFCCP’s subsequent adoption of new directives and regulatory procedures. Among other things, OFCCP has indicated that contractors will no longer be guaranteed advance notice of audits, and can expect more requests for additional data, witness information and witness interviews.
Moving into 2023, OFCCP is also considering adopting a revised Compliance Review Scheduling Letter and Itemized Listing for OFCCP audits, which would impose significant new initial audit submission requirements on federal contractors. The public has until January 20, 2023 to submit comments on proposed changes. Our analysis of the proposed changes can be found here.
| 2023-01-20T00:00:00 |
2023/01/20
|
https://www.lawandtheworkplace.com/2023/01/2022-labor-employment-year-in-review-and-looking-ahead-to-2023/
|
[
{
"date": "2023/01/06",
"position": 68,
"query": "government AI workforce policy"
}
] |
Consulting jobs: Lead change
|
https://www.accenture.com
|
[] |
Emerging technologies like generative AI are changing everything from how our clients compete to how they turn ideas into products. You'll work with clients ...
|
Use VR for better public service
Frontline public sector workers go into unpredictable situations every day—and the stress of the unknown can lead to burnout.
We created AVENUES to transform hiring, training, and ongoing skill development for frontline staff across the public sector.
This is an immersive, VR-based learning experience that creates in-the-field scenarios without the risk.
See how we did it.
| 2023-01-06T00:00:00 |
https://www.accenture.com/us-en/careers/explore-careers/area-of-interest/consulting-careers/_jcr_content/root/container_main?jk=Technology&sk=Consulting%7CManagement+Consulting&ba=Strategy+%26+Consulting
|
[
{
"date": "2023/01/06",
"position": 66,
"query": "generative AI jobs"
}
] |
|
Generative AI Beginner's Guide - Lore
|
Generative AI Beginner's Guide
|
https://lore.com
|
[] |
From generating realistic images and composing music to writing code and creating 3D models, Generative AI is transforming how we interact with technology. This ...
|
Generative AI
A Beginner's Guide from Lore
Generative AI represents a revolutionary leap in artificial intelligence, enabling machines to create new, original content that mimics human creativity. From generating realistic images and composing music to writing code and creating 3D models, Generative AI is transforming how we interact with technology. This guide will walk you through everything you need to know about this exciting field.
Introduction
The field of Generative AI has exploded in recent years, with breakthroughs like DALL-E 3, Midjourney, and Stable Diffusion revolutionizing image generation, while GPT-4 and Claude are transforming text generation. These advancements are not just academic curiosities - they're being actively used in industries ranging from entertainment and healthcare to finance and software development.
Whether you're a developer looking to integrate AI into your applications, a business leader exploring AI solutions, or simply curious about this transformative technology, this guide will provide you with a solid foundation in Generative AI concepts, tools, and applications.
What is Generative AI?
Generative AI refers to artificial intelligence systems that can create new content - whether it's text, images, music, or code - that resembles but is distinct from their training data. These systems use sophisticated neural network architectures like Transformers, GANs (Generative Adversarial Networks), and Diffusion Models to learn patterns from vast datasets and generate novel outputs.
What makes Generative AI unique is its ability to create original content rather than just analyze or classify existing data. This capability has opened up new possibilities in creative fields, problem-solving, and automation that were previously thought to be exclusively human domains.
Read more: Generative AI Explained
The Beginner's Guide to Generative AI
Dive deep into the technical foundations of Generative AI, including transformer architectures, attention mechanisms, and the latest advances in diffusion models. We'll explain how these systems learn from data and generate new content, with practical examples and visualizations to help you understand the underlying concepts.
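The core of those attention mechanisms can be illustrated with a minimal NumPy sketch. This is an illustrative toy, not code from the guide; the function name and example matrices are our own:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Return attention output and weights for query/key/value matrices."""
    d_k = K.shape[-1]
    # Similarity of each query to each key, scaled to stabilize the softmax
    scores = Q @ K.T / np.sqrt(d_k)
    # Softmax: each row of weights is non-negative and sums to 1
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    # Output is a weighted average of the value vectors
    return weights @ V, weights

# Toy example: one query that resembles the first of two keys
Q = np.array([[1.0, 0.0]])
K = np.array([[1.0, 0.0], [0.0, 1.0]])
V = np.array([[10.0, 0.0], [0.0, 10.0]])
out, w = scaled_dot_product_attention(Q, K, V)
# The query attends mostly to the first key, so the output leans toward V[0]
```

Real transformer layers add learned projections, multiple heads, and masking on top of this core operation, but the weighted-average idea is the same.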
From the early days of Markov chains to the latest breakthroughs in multimodal models, trace the evolution of Generative AI. Learn about key milestones, influential research papers, and the technological advances that have shaped the field into what it is today.
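As a concrete illustration of those early Markov-chain techniques, a word-level text generator fits in a few lines of Python. This is a toy sketch (corpus and names are our own), not a production generator:

```python
import random
from collections import defaultdict

def build_chain(text):
    """Map each word to the list of words observed to follow it."""
    chain = defaultdict(list)
    words = text.split()
    for current, nxt in zip(words, words[1:]):
        chain[current].append(nxt)
    return chain

def generate(chain, start, length, seed=0):
    """Walk the chain from a start word, sampling a successor at each step."""
    random.seed(seed)
    out = [start]
    for _ in range(length - 1):
        successors = chain.get(out[-1])
        if not successors:
            break  # dead end: the last word was never followed by anything
        out.append(random.choice(successors))
    return " ".join(out)

corpus = "the cat sat on the mat and the cat ran"
chain = build_chain(corpus)
sample = generate(chain, "the", 6)
```

Modern large language models replace this lookup table with a neural network conditioned on the entire preceding context, but the sample-one-token-at-a-time loop is conceptually the same.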
Explore the critical ethical considerations surrounding Generative AI, including bias mitigation, copyright issues, deepfakes, and responsible AI development. We'll discuss real-world examples and provide frameworks for ethical AI implementation.
Get hands-on with the latest Generative AI tools and platforms. From OpenAI's GPT-4 and DALL-E to Stability AI's Stable Diffusion, we'll cover the most powerful tools available today, their features, use cases, and how to get started with each.
Discover the leading companies shaping the future of Generative AI. From established tech giants to innovative startups, learn about their contributions, business models, and the unique value they bring to the field.
Explore real-world applications and success stories of Generative AI across industries. From AI-generated art winning competitions to automated code generation revolutionizing software development, see how this technology is being applied in practice.
Glossary
Master the key terms and concepts in Generative AI:
LLMs: Large Language Models, such as GPT-4, Claude, and Gemini, are advanced neural networks trained on vast amounts of text data to understand and generate human-like language.
GPT: Generative Pre-trained Transformer, a family of language models that have revolutionized natural language processing with their ability to generate coherent, contextually relevant text.
GANs: Generative Adversarial Networks, a powerful architecture where two neural networks (generator and discriminator) compete to create increasingly realistic synthetic data.
VAEs: Variational Autoencoders, neural networks that learn to compress and reconstruct data, enabling efficient generation of new samples while maintaining data distribution characteristics.
StyleGAN: A sophisticated GAN architecture that enables precise control over image generation, particularly effective for creating photorealistic faces and artistic styles.
Neural Style Transfer: A technique that combines the content of one image with the artistic style of another, creating unique visual compositions.
DeepDream: A visualization technique that reveals the patterns and features that neural networks learn, creating dream-like, psychedelic images from ordinary photos.
Attention Mechanism: A crucial component in modern AI that allows models to focus on relevant parts of input data, enabling better understanding of context and relationships.
Transfer Learning: A powerful technique that leverages pre-trained models to solve new tasks with less data and computational resources, accelerating AI development.
Image-to-Image Translation: A family of techniques that transform images from one domain to another, such as converting sketches to photorealistic images or changing seasons in landscape photos.
Learn more industry terms in our Generative AI Glossary
Generative AI for Industries & Applications
Discover how Generative AI is transforming various industries and creating new opportunities for innovation and efficiency. From automating routine tasks to enabling creative breakthroughs, these applications demonstrate the technology's versatility and impact.
| 2023-01-06T00:00:00 |
https://lore.com/generative-ai
|
[
{
"date": "2023/01/06",
"position": 77,
"query": "generative AI jobs"
}
] |
|
Algorithms Need Management Training, Too
|
Algorithms Need Management Training, Too
|
https://www.wired.com
|
[
"Wired Ideas",
"Will Knight",
"Reece Rogers",
"Steven Levy",
"Kylie Robison",
"Zoë Schiffer",
"Kate Knibbs"
] |
The European Union is expected to finalize the Platform Work Directive, its new legislation to regulate digital labor platforms, this month.
|
The European Union is expected to finalize the Platform Work Directive, its new legislation to regulate digital labor platforms, this month. This is the first law proposed at the European Union level to explicitly regulate “algorithmic management”: the use of automated monitoring, evaluation, and decision-making systems to make or inform decisions including recruitment, hiring, assigning tasks, and termination.
WIRED OPINION ABOUT Aislinn Kelly-Lyth, formerly a researcher at the Bonavero Institute of Human Rights, University of Oxford, works at Blackstone Chambers in London. M. Six Silberman and Halefom Abraha are postdoctoral researchers at the Bonavero Institute of Human Rights, University of Oxford. Jeremias Adams-Prassl is Professor of Law at Magdalen College, University of Oxford, and the Principal Investigator of the ‘iManage’ Project on regulating algorithmic management, funded by the European Research Council (ERC).
However, the scope of the Platform Work Directive is limited to digital labor platforms—that is, to “platform work.” And while algorithmic management first became widespread in the labor platforms of the gig economy, the past few years—amid the pandemic—have also seen a rapid uptake of algorithmic management technologies and practices within traditional employment relationships.
Some of the most minutely controlling, harmful, and well-publicized uses have been in warehouse work and call centers. Warehouse workers, for example, have reported quotas so stringent that they don’t have time to use the bathroom and say they’ve been fired—by algorithm—for not meeting them. Algorithmic management has also been documented in retail and manufacturing; in software engineering, marketing, and consulting; and in public-sector work, including health care and policing.
Human resource professionals often refer to these algorithmic management practices as “people analytics.” But some observers and researchers have developed a more pointed name for the monitoring software—installed on employees’ computers and phones—that it often relies on: “bossware.” It has added a new level of surveillance to work life: location tracking; keystroke logging; screenshots of workers’ screens; and even, in some cases, video and photos taken through the webcams on workers’ computers.
As a result, there is an emerging position among researchers and policy makers that the Platform Work Directive is not enough, and that the European Union should also develop a directive specifically regulating algorithmic management in the context of traditional employment.
It’s not hard to see why traditional organizations are using algorithmic management. The most obvious benefits have to do with improving the speed and scale of information processing. In recruiting and hiring, for example, companies can receive thousands of applications for a single open position. Résumé screening software and other automated tools can help sort through this huge quantity of information. In some cases, algorithmic management might help improve organizational performance, for example by more smartly pairing workers with work. And there are some potential, if so far mostly unrealized, benefits. Designed carefully, algorithmic management could reduce bias in hiring, evaluation, and promotion or improve employee well-being by detecting needs for training or support.
But there are clear harms and risks as well—to workers and to organizations. The systems aren’t always very good and sometimes make decisions that are obviously erroneous or discriminatory. They require lots of data, which means they often occasion newly pervasive and intimate surveillance of workers, and they are often designed and deployed with relatively little worker input. The result is that sometimes they make biased or otherwise bad management decisions; they cause privacy harms; they expose organizations to regulatory and public relations risks; and they can erode trust between workers and leadership.
The current regulatory situation regarding algorithmic management in the EU is complex. Many bodies of law already apply. Data protection law, for example, provides some rights to workers and job candidates, as do national systems of labor and employment law, discrimination law, and occupational health and safety law. But there are still some missing pieces. For example, while data protection law creates an obligation for employers to ensure that data they store about employees and applicants is “accurate,” it’s not clear that there is an obligation for decision-making systems to make reasonable inferences or decisions based on that data. If a service worker is fired because of a bad customer review but that review was motivated by factors beyond the worker’s control, the data may be “accurate” in the sense of reflecting the customer’s unsatisfactory experience. The decision based on it may therefore be lawful—but still unreasonable and inappropriate.
This leads to a curious paradox. On the one hand, more protection is needed. On the other hand, the welter of already existing law creates unnecessary complexity for organizations trying to use algorithmic management responsibly. Confusing matters further, the algorithmic management provisions of the new Platform Work Directive mean that platform workers, long underprotected by law, are likely to have more protections against intrusive monitoring and error-prone algorithmic management than traditional employees.
| 2023-01-06T00:00:00 |
2023/01/06
|
https://www.wired.com/story/platform-work-labor-economy-ai/
|
[
{
"date": "2023/01/06",
"position": 8,
"query": "AI labor union"
}
] |
News/Events | TransFormWork
|
TransFormWork
|
https://transformwork.eu
|
[] |
... Artificial Intelligence (AI) and its impact on work. He said that AI experts ... Workers Union. More than 60 participants took part in the Round Table ...
|
The third European Roundtable was held in Ireland on 11-12 October 2022. The opening day of this event focused on the Europe Social Partners Framework Agreement on Digitalisation and in particular on its relevance to new technology developments within the EU. The second day debated the relevance of the Framework Agreement to the Irish context and the views of the national social partners. It was attended by over forty participants, of which seventeen were from the project partner Member States.
11 October 2022
He was pleased to see a number of participants from SIPTU, from other Irish trade unions and from business and workers’ organisations, as this project is an important learning experience for SIPTU and the whole Irish trade union movement. As with other EU Member States, the Irish trade unions face the difficult challenges of dealing with organisational change resulting from the increased use of new technologies, robotics and Artificial Intelligence (AI).
Kevin Callinan, President, Irish Congress of Trade Unions (ICTU) and General Secretary, FÓRSA (the public sector trade union), gave the opening address to the Roundtable. In doing so, he said that digitalisation and climate change are two of the biggest issues facing society today and for future generations. Ireland is no stranger to embracing technological change and was an early participant in the technology age. From the emergence of digitalisation, Ireland has been the location of choice for computer manufacturing, providing high levels of employment. He said that Ireland has shown it can adapt and embrace the changes that society faces and that he is sure it will continue to do so with creativity and energy.
ICTU has played an active part in working with Government, Employers and other Civil Society Organisations in addressing these challenges, in particular those faced by workers across public and private sectors organisations.
He went on to discuss the issue of remote working, saying that the ICTU has been very vocal in supporting its development, something now made possible because of technological advances which were accelerated during the COVID-19 pandemic. He said that remote working makes good business and economic sense. It can open up employment opportunities for people with a disability, carers and older workers, widening the talent pool, improving workforce diversity, narrowing the gender pay gap and income inequality between households. It can also benefit the environment and rural regeneration. That is why trade unions were the first to call for legislation on the right to remote working, bringing Ireland into line with long-established employment law in most other EU Member States.
While the Government is committed to introduce remote working legislation, the priority of the ICTU now is to ensure that the proposed legislation is fit for purpose and that remote working does not come at the expense of hard-won worker’s rights or lead to greater casualisation and outsourcing of jobs. Remote working also presents challenges and dangers for workers, particularly in terms of isolation and, in the long-term, how remote working impacts on women workers in areas such as pay equality and access to promotion opportunities.
One of the earliest and most far-reaching achievements of the trade union movement is the shorter working day. The ICTU has long advocated for concrete actions to achieve work-life balance, including the regulation of hours of employment. In this context he drew attention to another related issue: the right to disconnect. With so much technology and an 'always on' culture, workers must be protected to ensure that burnout, fatigue and stress do not become greater dangers in terms of workplace safety and health. He also expressed the ICTU's support for the global campaign for a 4-day week.
Hoping that this important Roundtable would be a success, Mr Callinan concluded by saying that the Framework Agreement on Digitalisation represents an important opportunity to develop an agreed approach to implementing strategies around digital skills and securing employment, as well as modalities of connecting and disconnecting.
– Study of the national contexts and the challenges to social dialogue of digitalisation in each participating Member State
– To exchange experiences and good practice in the implementation of the Framework Agreement
– To stimulate debate and raise awareness so as to improve the understanding of employers, workers and their representative organisations of both the opportunities and challenges of digitalisation
– To promote ‘good practices’ for the digital transformation of workplaces and to publish a Catalogue of Good Practices to assist employers’ and workers’ organisations
– To make recommendations on legislative changes in order to accelerate the implementation of the various aspects of the Framework Agreement.
The methodology adopted by the project is in four stages:
– ‘Desk’ analysis at the national level of the challenges from digitalisation to employment and the organisation of work
– Undertake an online survey of a) employers; b) worker representatives; c) HR managers, followed up by face-to-face interviews with selected survey respondents
– Five national reports based on the ‘desk’ analysis, survey responses and interviews
– Publication of a Comparative Study of the five national reports.
It is proposed that the Framework Agreement will be presented in each participating country at two Information Days (a total of ten!) and it was intended that three European Roundtables would be held to present the research data and good practice examples. This Roundtable is the third of these Roundtables.
To complete the work of the project, a final European Conference will be held in Sofia during February, 2023. To follow the work of the project, participants in this Roundtable were invited to log on to the Project website: www.transformwork.ie
This presentation was followed by a joint online contribution from Ruairí Fitzgerald, European Trade Union Confederation (ETUC), and Isaline Ossieur, BusinessEurope.
Their joint presentation outlined the Framework Agreement on Digitalisation, which covers four key issues and measures to deal with the challenges of digitalisation:
– Digital skills and securing employment
– Modalities of connecting and disconnecting
– Artificial Intelligence (AI) and guaranteeing the ‘human in control’ principle
– Respect for human dignity and surveillance
The signatory organisations have invited their national member organisations to implement and promote the Agreement within three years and have asked each national-level group of organisations to report back each year to the EU-level Social Dialogue Committee.
So far, organisations in twelve Member States have reported having successfully started the process of implementation, and a number of ‘best practices’ were highlighted from:
– The Netherlands: The Social and Economic Council, in collaboration with the Social Partners, have set up a Digital Transition Working Group to:
o Monitor the impact of the digital transition
o Organise meetings and issue exploratory reports
o Ensure the internal coherence of ongoing Council activities with a digital component
– Austria: New legislation was enacted regulating ‘home office’ which includes provisions on working time and working conditions, such as safety and health. These new provisions were agreed between the social partners (AK; WKÖ; and ÖGB) and the Government and, with regard to implementing this new legislation, no difficulties have been reported from either the employers or from worker representatives
– Luxembourg: The Economic and Social Council and the social partners signed an inter-professional agreement on telework in October, 2020. This new agreement maintains the voluntary nature of teleworking for the worker and the employer. It regulates both regular telework and occasional telework by setting thresholds, which were absent from the previous legislation. Provisions on the right to disconnect have also been negotiated between the social partners and these have been incorporated into the Labour Code.
– Denmark: In October, 2017, a tripartite agreement on access to upskilling and improving the quality and flexibility in adult education and training was concluded. This agreement provided for €50 million to expand the programmes for adult education to include programmes on digital technologies. It was agreed, as part of the implementation of the Framework Agreement, to continue this agreement in October, 2021, for a further year and to re-negotiate it in 2022.
Following this presentation there was a range of questions from the participants on various issues, including clarification on some aspects of the national implementation examples.
Session 2 was chaired by Mariya Mincheva, Vice-President, Bulgarian Industrial Association (BIA). The session looked at recent research into digitalisation in the EU and the possible impact of artificial intelligence (AI) on the workplace and employment.
The first speaker was Sara Riso, Research Manager in the Employment Unit, Eurofound, who is involved in research projects in the areas of digitalisation, employment change and restructuring.
Her presentation covered the findings of the research undertaken as part of the Foundation’s input to the European Commission’s work programme – Priority 2: A Europe fit for the digital age: Empowering people with a new generation of technologies – and focused on the differing national debates on digitalisation across the Member States.
For example, in the Nordic and continental Member States there has been a long-standing debate focused on the human and ethical implications of digitalisation, in particular AI. In the Eastern Member States, by contrast, this is a relatively new debate, with a focus mainly on investment, infrastructure and skills related to the spread of digital technologies.
(It was noted that Ireland falls within the latter group of Member States and there is a need to start the national debate on the impact of digitalisation on the Irish labour market)
The research shows that problem solving and partnerships work well, while there is need for improvement in such areas as:
– Formal skills training
– Greater employee involvement
– Systematic monitoring.
She demonstrated how digital technologies impact on the organisation of work. The findings show a negative impact through increased employee monitoring and control in most areas (although not with 3D printing!). Also, with regard to ‘job quality’ there were negative effects on ‘work intensity’ and on future ‘prospects and earnings’.
With regard to advanced robotics, the impacts were:
– Lower physical strains and reduced risk of injury, but a greater risk of negative psychosocial effects
– There is a move from manual work to ‘intellectual’ skills
– There is increased complexity in the interaction between the ‘human input’ and robots
– There is a potential for shorter working weeks and more part-time work, but a greater risk of workers being ‘on-call’ 24/7
– There is also less worker ‘autonomy’ because of the potential for increased monitoring through the use of technology.
This last point, regarding increased monitoring, can lead to threats to individual privacy, with implications for data protection issues, for negative psychological impact and an increased emphasis on the ‘measurement of performances’.
Also related to this, and to the debate around ‘surveillance’, is the design of the technology to monitor worker performance. Regarding new technology trends that allow for monitoring practices:
– Technological advances have increased the possibilities of employee monitoring beyond work performance
– The move to remote work during the COVID-19 pandemic has created a new market for a range of monitoring tools, including software to track computer-based activity
– New organisational and management practices have emerged to facilitate the greater visibility of employees
– The rise of the ‘always-on’ culture is blurring the boundaries between workers’ private and work lives (work/life balance) – Six EU Member States have already addressed the issue of the ‘right to disconnect’!
‘There are no good or bad technologies, just management choices!!’
The second input for this session was an online presentation by Prof Alexiei Dingli, University of Malta, on Artificial Intelligence (AI) and its impact on work. He said that AI experts are rare and in a recent survey in Malta 10% of employers said that they were ‘not sure’ about AI. However, we are all involved in the use of AI on a daily basis.
The ‘down-sides’, however, are a) that the initial capital investment is high, in particular, for small businesses and b) it might replace existing jobs, but some jobs would continue to be carried out better by humans, such as caring roles in health care and social services.
Research carried out by Oxford University estimated that up to 47% of jobs ‘will vanish’, while other estimates in the UK speculate on the loss of 10 million jobs in 15 years. In contrast, the Davos Economic Forum estimates that 85 million jobs will be replaced by machines, while 97 million new jobs will emerge. Furthermore, the Economic Forum estimates that 52% of working hours will be automated by 2025.
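The Davos figures cited above can be combined into a rough net estimate. A minimal arithmetic sketch (the job counts are as reported in the presentation; the calculation itself is purely illustrative):

```python
# World Economic Forum estimates cited in the presentation
jobs_displaced = 85_000_000   # jobs expected to be replaced by machines
jobs_created = 97_000_000     # new jobs expected to emerge


def net_job_change(created: int, displaced: int) -> int:
    """Net change in employment implied by the two estimates."""
    return created - displaced


net = net_job_change(jobs_created, jobs_displaced)
print(f"Net change: {net:+,} jobs")  # a net gain of 12 million jobs
```

On these figures, the forecast is a net gain, even though a very large number of existing roles are displaced.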
The future of work has already arrived for a large majority of the online white-collar workforce. Eighty-four per cent of employers are set to rapidly digitalise working processes, including a significant expansion of remote work, with the potential to move 44% of their workforce to operate remotely.
In his presentation, Prof Dingli listed a ‘top-twenty’ job roles that will be created or increase in demand and another ‘top-twenty’ job roles that will decrease or disappear. He also listed the ‘top skills’ in the workforce by 2025, all related to the increased use of AI.
Other potentials for new working are through, for example:
a) The Gig Economy, where enterprises, rather than employ full-time staff, hire part-time, temporary, independent contractors, or ‘freelancers’ – a process which is already well underway
b) The Platform Economy, such as Amazon, Uber and Airbnb
c) Digital Nomads, where workers can work from anywhere on the planet, so long as there is a digital connection!
So there is no possibility of slowing or stopping the ‘march’ of AI and robotics. Indeed, the European Commission has already called on Member States to prepare for the digital age:
We must make this Europe’s Digital Decade … There are three areas we need to focus – first, data; second, technology, in particular AI; and third, infrastructure.
12 October, 2022
The third and fourth sessions on the morning of 12 October focussed on the implications of the Framework Agreement for the Irish labour market.
L to R: Ger Gibbons, SIPTU, Tracey Donnery, Skillnet Ireland, Mary Connaughton (chair), CIPD
and Caroline Murphy, UL
Session 3 was chaired by Mary Connaughton, Director, CIPD Ireland, and the first contribution was by Ger Gibbons, Social and Legislative Officer, ICTU, who is involved in the national ‘reporting back’ process of the Framework Agreement for Ireland.
He explained that, under the Agreement the social partners in each EU Member State are required to submit reports on the progress made in its implementation in the first three years. He said this was a new experience for both ICTU and IBEC. They agreed to submit factual information on the four key areas of the Agreement – Digital Skills; the Right to Connect and Disconnect; Human Control of Artificial Intelligence; and Respect for Human Dignity and Surveillance.
In the Irish reports, the ICTU and IBEC drew attention to their submissions to relevant Government consultations, including on support for post-COVID employment, the right to request remote working and the national digital strategy.
He went on to outline other issues involving the Irish social partners in the EU Social Dialogue framework, such as the European Commission proposals for a Directive on an ‘adequate living wage’. The EU Social Dialogue Work Programme, 2022-24, consists of six joint actions:
– Telework and right to disconnect
– Green Transition
– Youth employment
– Work related privacy and surveillance
– Improving skills matching in Europe
– Capacity building.
Tracey Donnery, Director, Policy and Communications, Skillnet Ireland, presented the role that her organisation plays in skills training for the digital workplace. Skillnet is an Irish Government agency that works to identify and address the skill needs of businesses and enterprises, so that they will have a highly skilled digital workforce into the future.
The agency reports to the Department (Ministry) of Further and Higher Education, Research, Innovation and Science and it is supported by the employer, trade union and other business organisations and is partnered by some fifty-plus sectoral organisations.
In 2021 it was responsible for over 9,800 training programmes for 86,500 trainees in 22,500 enterprises. These programmes cost in total €60.2 million. There are seventy-three sectoral and regional business networks contributing €22.5 million to the costs of these training programmes. The budget for 2022 is set at €80 million.
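The 2021 figures above imply a rough unit cost. A quick illustrative sketch, assuming the €60.2 million total covers all 86,500 trainees (the report does not break the figure down further):

```python
# Skillnet Ireland 2021 figures as reported above
total_cost_millions = 60.2   # € million, total cost of programmes
trainees = 86_500            # trainees across 9,800 programmes

# Average cost per trainee, under the stated assumption
cost_per_trainee = total_cost_millions * 1_000_000 / trainees
print(f"Roughly €{cost_per_trainee:,.0f} per trainee")
```

This is an average only; actual programme costs will vary widely by sector and course length.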
The work of Skillnet has the involvement of all the relevant State agencies and is carried out through eleven pillars, one of which is A Future in Technology. To date, 70% of the graduates from this pillar are now in technology jobs.
Skillnet also partners with the universities and third-level colleges, in particular the new Technology Universities, to provide advanced technical and leadership education programmes, through a wide range of MSc post graduate degrees in every aspect of digitalisation and new technologies. This initiative is also backed by the enterprise State agencies, such as the Industrial Development Authority (IDA) and Technology Ireland (an affiliated organisation of IBEC). Dell Technologies also partners with Skillnet on 167 transform initiatives.
Her presentation finished with ‘testimonies’ from a few of the past participants in Skillnet programmes – for example, Marijana, in data analytics and Mike, in software development. Michelle McKinney, Quality Training Specialist, Allergan, also gave positive feedback on the impact of a Skillnet training programme (in association with Technological University Dublin) on the development of VR training for its workforce. All were very positive about their experience and how the training programmes impacted on their careers.
The final speaker in this third session was Dr Caroline Murphy, University of Limerick. Her presentation was on Designing Work for the Digital Age: Critical concerns in the Irish context,
posing the question: Is the future of work utopia or dystopia?
She drew on a 2020 study by the State training agency, Solas, which identified some 370,000 workers in Ireland whose jobs are at high risk of automation and a further 600,000 jobs were considered to be at ‘medium’ risk. The six groups whose jobs are considered to be most ‘at risk’ are:
– Operatives
– Sales and customer service
– Administration and secretarial
– Hospitality
– Agriculture and animal care
– Transport and logistics.
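The Solas counts above can be totalled to show the scale involved. A minimal sketch (the at-risk counts are from the 2020 study; the total-workforce figure is an assumption added here purely for illustration):

```python
# Counts from the 2020 Solas study cited above
high_risk = 370_000    # jobs at high risk of automation
medium_risk = 600_000  # jobs at medium risk

total_at_risk = high_risk + medium_risk
print(f"{total_at_risk:,} jobs at high or medium risk of automation")

# Purely illustrative: express this against a hypothetical workforce size
# (the 2.5 million figure is an assumption, not from the report)
assumed_workforce = 2_500_000
share = total_at_risk / assumed_workforce
print(f"about {share:.0%} of an assumed workforce of {assumed_workforce:,}")
```

Whatever the exact denominator, close to a million jobs carry some automation risk on these figures.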
While Industry 4.0 focused on technical drivers, Industry 5.0 is about a knowledge based economy and this Solas report identified re-skilling and job redesign as crucial for the transition to Industry 5.0. To emphasise this point, Dr Murphy used the following definition:
Industry 5.0 recognises the power of industry to achieve societal goals beyond jobs and growth to become a provider of prosperity, by making production respect the boundaries of our planet and placing the wellbeing of the industry worker at the centre
of the production process.
Within this concept, such actions as Corporate Social Responsibility (CSR) and the United Nations Environmental, Social and Governance goals (ESG) will be increasingly important.
From her research a number of critical issues are emerging:
– Productivity – critical for economic growth
– Scalability – administrative roles, including support roles, such as finance and IR
– Employee involvement – encouraging key tech interests
– Employee wellbeing, cultural and role changes:
Organisations struggle with remote / hybrid working
Jobs need to have a ‘pull factor’
Ergonomic, health and safety issues
Young cohort of workers want high pay and job security.
Participants in the Roundtable
Brian McGann, Head, Organisational Development & Support Services, SIPTU, and TransFormWork Project Co-ordinator, Ireland, chaired Session 4.
The first contribution in this session, by Tara Coogan, dealt with the Code of Practice on the right to disconnect and the legislative framework underpinning it:
– Organisation of Working Time Act, 1997
– The Safety, Health and Welfare at the Work Act, 2005
– Employee (Miscellaneous Provisions) Act, 2018
– Terms of Employment (Information) Acts, 1994 to 2014.
Under the Workplace Relations Act, 2015, the relevant minister can request the Workplace Relations Commission (WRC) to draft a Code of Practice on an issue of relevance to employment relations. Such Codes are written guidelines agreed after a consultative process, and set out guidance and best practice for employers and employees to comply with employment legislation. Adhering to Codes of Practice is voluntary, but they can be used in legal cases as evidence of a breach of employment rights.
With regard to the Code of Practice dealing with the right to disconnect, it outlines the rights of employees to disengage from work where a worker’s duties are carried out through electronic communications, such as e-mails, ‘phone calls, etc., outside normal working hours. The objective is to protect employees working remotely from working excessive hours.
Employer obligations under the Code are:
– To provide detailed information to employees on their working time
– Ensure that employees take rest periods
– Ensure a safe workplace, including reviewing the risk assessment and, where necessary, the safety statement
– Take into account their obligations to
o …managing and conducting work activities in such a way as to prevent, so far as is reasonably practicable, any improper conduct or behaviour likely to put the safety, health and welfare at work of his or her employees at risk
– Not penalising an employee.
For their part, employees are responsible for:
– Ensuring that they manage their own working time
– Co-operating fully with any appropriate mechanism utilised by an employer to record working time, including when working remotely
– Being mindful of their colleagues, customers/clients and all other people’s right to disconnect
– Notifying the employer in writing of any statutory rest period or break to which they are entitled to and were not able to avail of on a particular occasion and the reason for not availing of such rest period or break.
– Being conscious of their work pattern and aware of their work-related wellbeing and taking remedial action, if necessary.
The Code of Practice recognises that there may be occasional circumstances where communications are sent and received outside normal working hours. If, however, contacts outside normal working hours become the norm, then this should be addressed by the employer.
Tara Coogan was followed by a pre-recorded video presentation by Erik O’Donovan, Head of Digital Economy Policy, IBEC.
The key points from his presentation covered issues such as:
– Why digital transformation matters
– What is the policy / strategy on digitalisation
Leadership
European digital compass – EU targets for 2030
Infrastructure
Skills
Digitising the public service
Trust
Safeguarding rights and safety
The internet is 39 years old, but there have been far more advances in technology in the past 5 years
What next? IBEC publication: Backing our digital future
Digital inclusion important for social inclusion
The final presentation of this session and of the Roundtable was by Aidan Connelly, CEO, Idiro Analytics. He noted that when a website asks if you are a robot, and you click on the photos with traffic lights, you are teaching an algorithm for AI!
In the history of mankind, progress in technological development was slow. From the beginning of the 19th century machine technology began to impact on the workplace and this has accelerated during the past two centuries to the point where there is now a rough equality between human and machine input, as machines ‘take over’ work tasks from humans through robotics and AI. He estimates that by 2035 machines will have overtaken human input and will continue to do so thereafter.
As AI is further developed into the future, it will have the ability to reason like a human, self-learn and ultimately have human cognitive abilities, changing the world of work in unimaginable ways. Mr Connelly listed possible areas where AI has been developed and is already in use, including:
– Language translations
– Document research
– Drug discoveries
– Facial recognition.
Economically, we could see rising unemployment at the same time as economic growth, so there is a need to plan for ‘liveable’ incomes. The societal impact may affect relationships, cause a disconnect from ‘reality’ and social unrest. There is also already the growing application of robotics and AI for military use, the use of autonomous drones and missiles, and the use of cyber-hacking of energy, water and transport infrastructure, as we already see in the Ukraine war.
However, AI can be used for positive societal uses, for example:
– Advances in pharmacology, genetic sciences and customised medication
– Better and more accessible health care
– Automation of transport and manufacturing
– Development of new materials
– The reduction and/or elimination of repetitive tasks
– Reduction of errors
– More efficient public services through the application of AI for e-government.
He stressed that we can’t ignore AI, as it also has significant positive aspects. However, human control is paramount and the following are some of the issues that we need to address to ensure the ‘human in control’ principle:
– EU legislation on AI needs to regulate the more destructive use of AI
– Transparency is vital and should not be left in the hands of technologists alone
– AI auditing should become ‘common place’
– We need an Irish legal framework for AI
– We need better sharing of and access to data, in particular State held data
– There is a need to focus on educational efforts to understand AI
– AI ‘literacy’ is needed at governmental and business levels
– We need greater investment in advanced AI research and regulatory innovations.

The Roundtable was closed by the chair, Brian McGann. He thanked all the speakers for their presentations, the participants from Ireland and from the other partner countries, the staff of the Grand Hotel, Malahide, the technical staff who provided the audio-visual systems and the online links for those speakers who contributed to the Roundtable from Brussels and Valletta. He also thanked the SIPTU team involved in the planning and organisation of the event, which he said had been very interesting and successful.
In conclusion, Kevin P O’Kelly, Project Researcher, outlined the programme for the remainder of this project, which includes a Steering Committee meeting in Liberty Hall, the HQ of SIPTU, immediately after the Roundtable. The final consolidated report will now be prepared and a further Steering Committee meeting is scheduled for Cyprus in January to ‘sign-off’ on this report, which will be published as part of the final conference in Sofia in February, 2023.
| 2023-01-06T00:00:00 |
https://transformwork.eu/category/news-events/
|
[
{
"date": "2023/01/06",
"position": 41,
"query": "AI labor union"
}
] |
|
OpenAI's ChatGPT Has Plenty to Say, Except on Parent's ...
|
OpenAI's ChatGPT Has Plenty to Say, Except on Parent's $29 Billion Valuation
|
https://www.investopedia.com
|
[
"Lyle Niedens",
"Lyle Spent Most Of The Past Two Decades In A Variety Of Product",
"Communication",
"Financial Writing Roles With Large Asset Managers",
"Mutual Fund Distributors",
"Mostly Recently As Vice President",
"Director Of Product Development With Waddell",
"Reed Ivy Distributors Inc. Previously",
"He Spent A Decade In Senior Roles As An Editor",
"Reporter With Business Publications"
] |
Chatbot creator's planned stock sale to venture capital firms would mark bright spot for technology sector struggling with layoffs and falling market ...
|
KEY TAKEAWAYS Deal would double OpenAI's valuation from just two years ago.
Microsoft stands to benefit from the firm's potential.
Questions remain, however, about what artificial intelligence can achieve.
ChatGPT, the bot that upended artificial intelligence (AI) upon its release two months ago, is happy to answer just about any question — except how much it's worth.
"I'm sorry, but I don't have information about the value or worth of ChatGPT or any other specific language model," it responded when posed that very query.
The bot's human creators at parent OpenAI have a pretty specific answer: $29 billion, more than double its valuation two years ago.
OpenAI has entered talks to sell existing shares to venture capital firms Thrive Capital and Founders Fund, The Wall Street Journal reported. The firms would buy shares in a tender offer from existing shareholders, including employees, that would value the company at $29 billion.
The tender deal reportedly could total $300 million. A similar deal in 2021 valued the company at $14 billion.
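The reported figures make the "more than double" claim easy to check. A quick arithmetic sketch, using only the two valuations given in the article:

```python
valuation_2021 = 14  # $ billions, from the 2021 tender deal
valuation_2023 = 29  # $ billions, reported tender-offer valuation

# Growth multiple over roughly two years
multiple = valuation_2023 / valuation_2021
print(f"Valuation multiple: {multiple:.2f}x")

assert multiple > 2  # consistent with "more than double"
```

The implied multiple is just over 2x, matching the article's characterisation.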
If completed, the deal would represent one of the few market bright spots for a technology sector facing myriad challenges. Technology startups have struggled in recent months, with many laying off workers amid plunging valuations in private market transactions. Layoffs have increased among large tech firms as well, as the tech-heavy Nasdaq Composite Index lost a third of its value last year.
Write Me An Essay
OpenAI has quickly captured the public's fascination with ChatGPT, a language processing tool that interacts with users via human-like conversations. The program has won acclaim -- and some derision -- for its ability to answer complex questions.
For example, it has shown the capability to write complete essays based on user requests -- even though the chatbot's main page warns that it "may occasionally generate incorrect information" and "harmful instructions or biased content."
Still, its essays are good enough that New York City public schools have banned student use of ChatGPT.
Microsoft invested $1 billion in OpenAI in a 2019 deal that made it the startup's preferred partner in helping it market new technologies. In addition to ChatGPT, OpenAI last year introduced Dall-E 2, an image-generation system, and the company has said it one day hopes to market programs that fully mirror human intelligence and capabilities.
AI's Elusive Potential
Led by tech investor Sam Altman, the firm has generated tens of millions of dollars in revenue, The Journal reported, through selling its artificial intelligence software to developers. But questions remain about the money-making potential of OpenAI's technology.
A report issued by Deloitte in October found that while 94% of business leaders surveyed called AI "critical" to their organizations' success in the next five years, half of those surveyed reported low achievement from the AI they've employed so far.
Nonetheless, AI's potential remains appealing, and most analysts agree tech firms who supply it ultimately will benefit.
In a research note released this week, Gil Luria, a tech analyst with D.A. Davidson, issued a price target of $270 per share for Microsoft. Luria cited the "unprecedented activity" associated with ChatGPT's release as a key reason for his "buy" rating on the stock, which rose as much as 0.6% to $223.65 per share in Friday trading.
"We believe Microsoft's investment in OpenAI will translate to significant underappreciated upside," Luria stated in his report. "Longer-term, we believe incorporating ChatGPT into Bing (Microsoft's search engine) may provide Microsoft with a once-a-decade opportunity to unseat Google's Search dominance."
| 2023-01-06T00:00:00 |
https://www.investopedia.com/chatgpt-venture-capital-7092317
|
[
{
"date": "2023/01/06",
"position": 98,
"query": "AI layoffs"
}
] |
|
Master of Business Analytics and Intelligence | Graduate
|
Master of Business Analytics and Intelligence
|
https://case.edu
|
[] |
Use artificial intelligence to enhance business outcomes. Big Data Analytics. Analyze vast datasets to drive strategic decisions. Business Leadership. Combine ...
|
The Weatherhead Way
At Case Western Reserve, you’ll study at one of the nation’s leading research universities—in one of the country’s most culturally robust neighborhoods. You’ll live and learn in a city known for industry and healthcare innovation, where nearly 40% of Fortune 500 companies are represented. You’ll learn breakthrough business concepts from the people who literally wrote the book on them.
Plus, you’ll:
Learn more than just the core skills of business management,
Discover yourself better as a person and as a leader, and
Build the skills you need to reach beyond problem solving to solution innovation.
Want to find out how?
Request Information Today
| 2022-06-07T00:00:00 |
2022/06/07
|
https://case.edu/weatherhead/academics/graduate/master-business-analytics-and-intelligence
|
[
{
"date": "2023/01/06",
"position": 54,
"query": "artificial intelligence business leaders"
}
] |
Why Communication Is Crucial In Graphic Design
|
Why Communication Is Crucial In Graphic Design
|
https://penji.co
|
[
"Katrina Pascual"
] |
It's no secret that artificial intelligence is slowly taking over the creative industry. Many business owners are inclined to use these AI-powered software ...
|
It’s no secret that artificial intelligence is slowly taking over the creative industry. Many business owners are inclined to use these AI-powered software applications to create designs instantly. But what they lack is the human mindset, emotion, and touch that graphic designers give when producing these assets. That’s why graphic design communication with actual human designers still matters. And here’s why you should continue hiring graphic designers for your projects.
What is Communication in Graphic Design?
In simple terms, graphic design communication refers to the exchange between you and the graphic designer during a project. This involves the following:
Reaching out to them for the first time
Submitting a design request
Asking questions
Providing feedback
But graphic design communication can also mean the design conveyed in the image prepared by a designer. We’ll present examples of these below.
Why is Communication Important in Graphic Design?
You can fill out a prompt in AI graphic design software applications and let them take care of business. But one thing’s for sure; they may not convey what you envisioned down to a T.
Graphic designers may not get your designs on the first try. Instead of critiquing them immediately, you can provide constructive feedback on how to improve their design.
Aside from that, they humanize your designs and present a story in just one image. Working with them lets you show your brand personality in a still image. In turn, your target audience will connect with you better.
What Should You Remember When Talking to Your Graphic Designer?
Be Clear About Your Goals and Prepare a Design Brief
What is your reason for this design?
You should ask yourself this every time you’re working on a new project. Like in marketing, think of your objectives when creating designs:
Increase brand awareness
Generate leads
Obtain new customers
Boost engagement rates
Drive more traffic to your website
When you have these goals, envision a design. But if that doesn’t come easy, you can look into graphic design inspiration.
With these two elements in mind, you’re on your way to creating a design brief. In creating a design brief, make sure to add these details:
About the business
Target audience/market
Competition
Design examples
Budget
Schedule
Assets
Trust Your Designer
This will be tricky if you’re still looking for a new designer for your project. After all, you don’t have experience working with that designer yet, and it can be challenging to place your trust in the first few days. But to ease your mind, here are ways to trust your designer before working with them:
Make sure to check their portfolio page
Ask for references
Conduct an interview
Once you’ve hired your designer, placing your trust in them requires you to communicate from the get-go. For example, you can set a time when you can talk to them or when you prefer receiving messages from them. Plus, avoid micromanaging, as this could be counterproductive for the designer.
Set Realistic Expectations
When graphic designers work on your requests, one thing is sure: they may not get it right the first time. You could put everything you want into your design brief, but graphic designers may still need a little push in the right direction. When working with a graphic designer, set realistic expectations immediately.
Don’t expect your designs to come out as museum-level art. Most designers will provide a rough draft. Once you review this draft, they will finalize or revise the design based on your notes and feedback.
Finally, if you’re working with freelance designers, don’t expect them to be at your beck and call and available 24/7. They might work on other projects or with other clients. That said, you can already establish working hours with them. Or, you can ask them when they can submit your project.
Ask Questions
Asking questions is a big part of graphic design communication. During the design process, you can ask the designer about the rationale behind their submitted project. You can also inquire about certain elements being used. This further establishes your trust in the freelancer. Plus, it will strengthen your professional relationship.
Provide Feedback
Graphic designers aren’t mind readers. If you don’t like the design, tell them clearly and be straightforward. Leaving feedback can be tricky when your notes apply to only certain parts of a design. You can visualize your feedback by recording a video message, or by downloading the design and annotating it with an image editing tool.
Examples of Poor Graphic Design Communication
Let’s face it, not all graphic design projects work. But you can avoid graphic design communication mishaps by looking at these unfortunate examples.
1. Thomson Reuters
Image source: John Russell from Twitter
This Venn diagram fail from Thomson Reuters misses the mark. Venn diagrams should intersect, but their design shows us that the listed values on the left aren’t exactly what the company stands for. There probably was a miscommunication issue when presenting this image.
2. PlayStation
Image source: The Polygon
PlayStation is no stranger to ad controversies. But this ad for the PSP had people questioning the game company’s stance on racial equality. An ad meant to announce a new console color instead came across as racially charged.
3. The Office of Government Commerce
Image source: Fast Print
At first glance, the logo for The Office of Government Commerce seems harmless. But turn it vertically and it resembles an inappropriate image. Reviewing the mark from every orientation before sign-off would have avoided this debacle.
4. John Fluevog
Miscommunication errors like this one from John Fluevog show why slashes don’t belong in these designs. With the word “average” struck out, the sign discourages passersby from going to their shoe store at all.
5. Where Magazine
Image source: Creative Bloq
Magazine covers are hit-and-miss. This one from Where Magazine looks like your average cover, but look closely: the cover lines crowd the masthead so the title no longer reads as “Where.” Placement matters in design, and this wasn’t adequately reviewed before it hit the shelves.
How Can Penji Help You?
Modern graphic design communication allows clients and designers to talk over email and chat. With Penji, you can talk to designers directly without scrolling through long email threads. You can leave our designers a message anytime regarding your designs. Then, they will get right to it while you’re working on your business.
See anything that needs revisions? Just use our built-in point-and-click feature! In one click, you can leave your feedback on the design directly without downloading it. It’s one of the many reasons why brands love Penji.
And if you want to be part of the club, try Penji for 30 days here!
| 2023-01-06T00:00:00 |
2023/01/06
|
https://penji.co/graphic-design-communication/
|
[
{
"date": "2023/01/06",
"position": 56,
"query": "artificial intelligence graphic design"
}
] |
What Is Artificial Intelligence And How It Will Impact Marketers
|
What Is Artificial Intelligence And How It Will Impact Marketers
|
https://foundationinc.co
|
[
"Ross Simmonds"
] |
... graphic designers and video teams within the industry. The latest technological shift, artificial intelligence, is going to impact everything. In December ...
|
“I am limited by the technology of my time, but one day you’ll figure this out. And when you do, you will change the world.” – Howard Stark
Technology and humanity have always had an interesting relationship.
Technology has given us some of the greatest discoveries of our time. It’s given us cures for diseases. It’s given us the ability to connect with millions around the world. It’s given us the ability to improve longevity. And it’s given us the ability to understand our world in new ways.
All this being true…
Emerging technologies are still something that many people believe threatens our well-being and will lead to our demise. This view isn’t a new one. Luddites in the 19th century destroyed textile machinery because of the threat associated with an increased use of mechanized looms and knitting frames.
The world of marketing has seen tremendous evolution thanks to technology.
I remember my first months working at an ad agency, and chatting with one of the VPs about their work. They explained to me how they ran the print and film arm of the business, using a dark room to produce photos and a printing press for brochures. It was mind-blowing — because by then, Photoshop or iMovie could do exactly what he was describing.
He “retired” a few months later.
Today, we live in a world where beautiful designs can be created with nothing more than a laptop. Just 40 years ago, you needed ink pads, grid paper, scissors, rulers, rubber cement, and so much more. Today, we live in a world where you can run an advertisement that reaches hundreds of people in New York City — from your basement in Halifax, Nova Scotia. Today, you can write an article on a laptop, and that article is distributed through social media channels and reaches thousands.
The evolution of technology in marketing has been amazing to witness.
But more change is coming, and this time, it’s going to shake up more than just graphic designers and video teams within the industry. The latest technological shift, artificial intelligence, is going to impact everything.
In December 2022, it was reported that Alphabet (owner of Google) issued a “code red” over the rise of the AI bot ChatGPT. Since then, speculation has been at an all-time high amongst marketers about how ChatGPT and other AI technologies could impact the way we search for information online — and what it all means for Alphabet’s business.
Billions of people use Google’s search engine every single day, and many of those people spend money every single day to capture clicks from Alphabet’s subsidiaries. If ChatGPT and AI tools have triggered a “code red” for this business — it should be triggering a code red for every other technology-based industry.
In the remainder of this article, I’m going to share some of my thoughts on how artificial intelligence is going to influence the marketing industry in the coming years, and what brands, marketers and advertisers can do to prepare for what is to come.
Let’s get into it.
Will Artificial Intelligence Replace Marketers?
The emergence of artificial intelligence (AI) has already transformed the marketing industry, impacting how and when brands can reach their target audiences, but you may not realize it.
If you’ve run Google ads or Facebook Ads over the last 5 years, it’s likely that an AI has been involved in determining your target audience or budget allocation. In the early days of these ad products, a PPC expert would have used a spreadsheet to manage it all. But now you can simply click a button, and the optimizations are done for you by the platforms.
This hasn’t replaced PPC marketers. And I don’t think it will replace them any time soon.
But it certainly has changed the role they play when it comes to running campaigns.
More and more tasks are going to be eradicated because of artificial intelligence.
With AI, marketers now have access to powerful tools that allow them to better understand consumer needs, create graphics within seconds, create long-form posts within minutes, generate AI scripts, and automate mundane tasks. The role of marketers will never completely evaporate from the ecosystem, but the things that make up their work are certain to change.
Photographs & Illustrations Can Be Created Using AI
At the top of this blog post, you saw an image that resembles Don Draper. Don Draper (portrayed by the actor Jon Hamm) was the main character in the popular TV series Mad Men, which showcased the life of an advertising executive in the ‘60s. In the visual above, he takes on the look of someone who is part human and part robot. This is because I wanted to communicate the idea that artificial intelligence will not replace us as humans, but instead augment us.
Here are a few early versions of the graphic that almost made the cut:
All pretty good, right?
What if I told you that these were all created via artificial intelligence, with a tool called MidJourney?
All I had to do was ask MidJourney to create a realistic version of Don Draper as part robot or cyborg, and within seconds, I had a selection of images to choose from.
No illustrators. No designers. No artists.
Just an AI tool and a prompt.
This is powerful for many reasons. At scale, you could use an AI tool like Midjourney, Freepik AI, or Wepik to develop multiple images for your blog posts without having to pay thousands of dollars to stock photo websites to source a valuable graphic. An AI tool could even play a role in the development of an advertisement and save you money on design, photography, and more.
Let’s take this advertisement from 2020 that was named one of the best:
It’s a clever ad, right? It won tons of awards.
But capturing the photo for this might not have been easy.
Thanks to Midjourney, though, I can create a picture of a moldy burger within seconds:
Not bad eh?
With these images of a moldy burger, I can prompt MidJourney to touch up the one that I like the most.
Given these options, I think the top right burger has the most realistic look and feel, so I can prompt the tool to give me a few alternatives. In seconds, I’m provided with four different options to choose from:
Now, I can take this burger over to Photoshop and create a new version of the Burger King ad in minutes:
To remove the background, I used an AI tool called “Remove BG,” which automatically detects and gets rid of any unwanted background imagery.
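Under the hood, tools like Remove BG rely on trained segmentation models. A toy version of the idea, flagging near-white pixels as transparent, can be sketched in a few lines. This is purely illustrative (the function name is my own, and the threshold trick only works on plain backgrounds):

```python
def remove_near_white(pixels, threshold=240):
    """Toy background removal: mark near-white pixels as fully
    transparent (alpha 0) and keep everything else opaque.
    Real tools like Remove BG use a trained segmentation model;
    this threshold trick only works on a plain light background.

    pixels is a flat list of (r, g, b) tuples; the result is a
    list of (r, g, b, alpha) tuples."""
    out = []
    for r, g, b in pixels:
        alpha = 0 if min(r, g, b) >= threshold else 255
        out.append((r, g, b, alpha))
    return out

# A white background pixel, a "burger" pixel, and an off-white pixel.
pixels = [(255, 255, 255), (200, 120, 40), (250, 248, 245)]
print(remove_near_white(pixels))
```

A production pipeline would instead send the image to a segmentation service and receive back a PNG with a real alpha channel.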
Sure, the result isn’t exactly like the original ad:
But it’s not too bad for 5 minutes of work.
We’re not there yet, but in the future, I envision a world where we will be able to tell an AI to create an ad (using a very descriptive prompt) and in response, get an output that matches exactly what we envision in our minds. For example, I asked for a Pizza Hut ad featuring Serena Williams, and a Tylenol ad featuring Donald Glover.
In seconds, using MidJourney, I got these results:
The AI has the ability to understand what goes into an advertisement, what different layouts of an advertisement might look like — and even determine that a Tylenol ad might be more likely to connect with an audience if the person in the ad is wearing a robe.
Are there issues with these ads?
Of course. For example, the text is Lorem Ipsum. It doesn’t make sense. But imagine for a second that these visuals were accessible inside of a tool like Canva, and the Lorem Ipsum areas were editable.
You could create hundreds of variations within an hour and find the layouts that suit your needs best. You could connect with Donald and Serena’s legal and management teams and cut them a check. You could run these ads, and within less than a day, you could launch a campaign that currently competes with most generic advertisements in the consumer goods category right now.
Apart from image creation, AI can also help with image moderation, automatically checking the quality and appropriateness of images so they better align with your audience.
How Digital Ad Creation Will Be Changed By Artificial Intelligence
Digital advertising is rapidly changing as artificial intelligence is being integrated into processes. AI can create ads at scale and optimize ad creatives in mere moments. There is already AI-driven ad-tracking software that optimizes your campaigns for the best performance. It can run A/B tests with hundreds of different words, all while writing actual ad copy and making adjustments based on data. The future for digital advertising is a lot less manual.
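The automated A/B testing described above is often implemented as a multi-armed bandit: serve the variant with the best observed click-through rate most of the time, while occasionally exploring the others. Here is a minimal epsilon-greedy sketch; the headline names and tallies are hypothetical, not drawn from any real campaign:

```python
import random

def pick_variant(stats, epsilon=0.1, rng=random):
    """Epsilon-greedy choice among ad variants.

    stats maps variant name -> (clicks, impressions).
    With probability epsilon we explore a random variant;
    otherwise we exploit the best observed click-through rate.
    """
    if rng.random() < epsilon:
        return rng.choice(list(stats))

    def ctr(item):
        clicks, impressions = item[1]
        return clicks / impressions if impressions else 0.0

    return max(stats.items(), key=ctr)[0]

# Hypothetical running tallies for three ad headlines.
stats = {
    "headline_a": (30, 1000),   # 3.0% CTR
    "headline_b": (55, 1000),   # 5.5% CTR
    "headline_c": (10, 1000),   # 1.0% CTR
}

rng = random.Random(42)
choices = [pick_variant(stats, epsilon=0.1, rng=rng) for _ in range(1000)]
# The best-performing headline dominates the traffic split.
print(choices.count("headline_b") > choices.count("headline_a"))
```

In a real ad platform the tallies update continuously as impressions come in, and the copy for each variant can itself be generated by an AI writing tool.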
Digital ad creation is a task that takes only seconds for AI, when it often takes days for humans to perform the same work. As another example, I can now log into a tool like AdCreative.ai and tell it a little about Foundation. In less than 10 minutes, I can use this tool to give me 90+ variations of ads that can be used on LinkedIn, Facebook, Google, and more:
The tool even goes so far as to give an ad a “conversion score” based on its own data.
Once I create a series of different graphics, and confirm that they fit my brand’s aesthetic, I can start to work on the copy. One of my favorite AI copywriting tools is Jasper. You can use Jasper to write everything from blog posts to ad copy for use on Facebook.
Here’s how it works:
I visit Jasper and select that I’m looking to create ad headlines for Facebook:
Then, I can provide some details about Foundation that will help Jasper write a headline:
Once this is uploaded, Jasper provides me with a handful of potential headlines in seconds:
Not bad, right?
I can then go to the “Facebook Ad Primary Text” section of Jasper to generate the ad copy:
And there you have it.
Jasper created Facebook ads that can start running immediately.
HEADLINE:
Foundation: Your Content Marketing Engine
BODY:
Are you ready to get your content seen by more people?
If you’ve been struggling to get your content seen by the right people, Foundation can help. We’re experts at distributing content so it reaches its full potential. And we’ve helped hundreds of B2B SaaS companies just like yours.
Click here to learn more about how we can help you get your content seen by more people and attract more leads, sales, and customers.
Not bad eh?
This is what’s possible using an AI copywriting tool like Jasper.
I’m a huge fan of what their team is building, and I’m giving away 10,000 bonus credits to anyone who signs up using this link. If you’re working in content marketing and have been sleeping on the power of AI copywriting tools, I get it. I was in your shoes.
But I’m convinced that this is the future.
Repetitive Tasks Will Be Automated At Scale
As marketers, we are used to doing a lot of the same tasks, day in and day out. But with the rise of artificial intelligence, many of those tasks will soon be replaced by machines.
One task that will soon be fully replaced by AI is meta description writing. Marketers will no longer have to spend time writing short blurbs that describe their content — instead, AI writing tools like Jasper and ChatGPT will be able to do it for them, and better.
This isn’t a prediction.
This is reality today.
You can use an AI tool right now to create a meta description for any blog post that you have on your site. Jasper offers four different types of meta descriptions and titles that it can produce for you, for Blog Posts, Homepages, Product pages and Service Pages.
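Behind a product like this sits little more than a well-crafted prompt sent to a text-generation API. A rough sketch of that step (the prompt wording and function name are my own illustration, not Jasper’s actual internals):

```python
def build_meta_description_prompt(title, intro, max_chars=155):
    """Assemble a prompt asking a text-generation API for an SEO
    meta description. The wording here is illustrative, not any
    product's actual prompt; max_chars reflects the roughly 155
    characters search engines typically display."""
    return (
        f"Write a compelling SEO meta description of at most "
        f"{max_chars} characters for a blog post titled "
        f"'{title}'. Post introduction: {intro}"
    )

prompt = build_meta_description_prompt(
    "What Is Artificial Intelligence And How It Will Impact Marketers",
    "Technology and humanity have always had an interesting relationship.",
)
# The resulting string would then be sent to a completion endpoint
# (e.g. OpenAI's API) and the response used as the meta description.
print(len(prompt) > 0)
```

The same pattern, swap in a different instruction string, covers titles, homepage copy, product pages, and service pages.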
Let’s say I needed a meta description for this piece.
The one you’re reading right now.
I simply go to Jasper, add the title of the piece, and paste the introduction paragraph into the “blog post description” section. In seconds, I have three different meta descriptions:
The process described above is still somewhat manual, but as tools like Jasper and ChatGPT continue to roll out their own APIs, the ability to customize these outputs at scale will become even more interesting. I recently came across a new Google Sheet product from Arielle Phoenix that breaks down how to combine artificial intelligence and programmatic SEO to create content in bulk. I bought the Masterclass and product as soon as I saw it.
Let me give you a rundown of what it can do:
Add keywords to a spreadsheet
The spreadsheet grabs the keywords to create URL slugs
The spreadsheet runs the keywords against a ChatGPT prompt to write SEO-driven meta descriptions and click-friendly blog post titles
The spreadsheet then uses the title of the blog post to create an introduction
Next, the spreadsheet uses the title of the blog post to write an outline
…and so much more.
You can create HUNDREDS of pieces of content within a matter of minutes using the Bulk Publishing Framework + AI-Integrated Google Sheet.
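The steps above can be sketched in plain Python: turn each keyword into a slug, then build the prompts a ChatGPT-style API would receive. The function names and prompt wording are assumptions for illustration, and the actual API call is left out:

```python
import re

def slugify(keyword):
    """Turn a keyword into a URL slug (step 2 of the workflow)."""
    slug = re.sub(r"[^a-z0-9]+", "-", keyword.lower())
    return slug.strip("-")

def build_prompts(keyword):
    """Prompts that would be sent to a ChatGPT-style API (steps 3-5).
    The exact wording is illustrative, not the product's prompts."""
    return {
        "title": f"Write a click-friendly blog post title targeting '{keyword}'.",
        "meta": f"Write an SEO meta description for a post about '{keyword}'.",
        "outline": f"Write an outline for a blog post about '{keyword}'.",
    }

keywords = ["content marketing", "editorial calendar", "digital marketing"]
rows = [{"keyword": k, "slug": slugify(k), **build_prompts(k)} for k in keywords]
print(rows[0]["slug"])  # content-marketing
```

Run each prompt through the API, write the responses back into the row, and you have the bulk pipeline: one spreadsheet row per finished article skeleton.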
As an example, I wanted to see what the Google Sheet could do, so I decided to list a few keywords that are relevant to the marketing industry. Let’s say Foundation wanted to create definition content on topics like content marketing, editorial calendar, and digital marketing for a marketing blog. I simply add these keywords to the Google Sheet that Arielle Phoenix developed. And using an API connected to ChatGPT, it delivers the title, meta description, intro, outline and more:
This was all developed in the matter of seconds.
Brilliant, right?
It does even more than this, but that’s a blog post for another day.
No more writing meta descriptions. No more writing outlines.
The AI can do it for you. But that’s not all an AI can do.
How AI Accelerates The Content Creation Process
As content marketers, we know that the ability to produce quality content quickly and efficiently is crucial to our success. And while some may be hesitant to embrace artificial intelligence solutions, those who do will be left far behind their counterparts who embrace AI.
Artificial intelligence can help speed up the research process, the briefing process, and even the actual content creation process required to develop a piece of content that is likely to rank in Google. Embracing artificial intelligence can bring immense benefits to content marketers, allowing them to become more productive and produce superior content in a fraction of the time it would traditionally take. So, if you’re still on the fence about AI, let’s revisit the earlier example mentioned—the definition blog post called “What Is Content Marketing.”
Using Jasper, I can take each part of the “outline” that was delivered in the AI Integrated Google Sheet and then place it into Jasper to create a more in-depth blog post. In this case, I simply type “Definition of Content Marketing” into Jasper’s One-Shot Blog Post feature — and within seconds, I have an in-depth breakdown of content marketing:
Let me say this — I wouldn’t publish this verbatim.
But I would pass this along to an expert and ask them to update it so that it sounds more human, has an expert’s perspective, and is reviewed for quality assurance. Using this process, I’m able to improve the quality of the blog post and still produce a piece of content faster than a counterpart who is not using artificial intelligence to create content.
Our workflows are going to change as AI technology continues to advance. The speed in which we navigate through the content creation process should accelerate rapidly in the coming years. And the workflow of the future will be built on the back of AI tools that make the writing process 10x more effective, efficient, and data-driven.
The content creation workflow before AI vs. content creation workflows using AI can be summarized in the following graphic:
In both Content Creation Workflow 1.0 and 2.0, human research is needed to ensure that you’re creating the right things. AI will support the research, but research done by humans is still an important part of the puzzle, no matter whether you embrace the traditional method of content creation or the new method.
In Content Creation Workflow 1.0, the second and third steps are both completed manually. The second step is identifying (brainstorming) headlines that should be written for a blog based on a keyword. In the future, this won’t be necessary, because you can ask an AI tool for a list of headlines based on a keyword and have them in seconds:
A copywriter might take 30-60 minutes to come up with these 10 blog post ideas.
It took ChatGPT less than 30 seconds.
In content creation workflow 1.0, the next step would be to identify a title and begin working on the framework that makes up the piece. In this case, if we were to write an outline for the title “The Surprising Health Benefits of Nose Breathing and Mouth Taping” it might take a writer and/or researcher anywhere from 1 – 2 hours to complete the task.
An AI tool like ChatGPT, however, can pull the outline together within a matter of seconds:
Now that the outline is developed, the next step is simple:
Write the rest of the piece.
The traditional content creation process would take hours (maybe 5-10 per piece), even for an advanced writer. Using AI, the key points in the outline of this piece can be submitted to Jasper, and within minutes, you can have a fully written blog post, breaking down everything there is to know about the basic benefits of nose breathing and mouth taping.
And on the subject of blog posts, a common task for content creators is writing HTML to elevate blog posts. Leveraging ChatGPT, you can ask the service to develop HTML on your behalf. For example, you can ask ChatGPT to develop a chart that compares mouth breathing and nose breathing.
In seconds, you’re met with a chart that is ready to be uploaded to your traditional or headless CMS, and that breaks down the differences between the two types of breathing:
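The chart the model returns is ordinary HTML. A small helper shows the shape of such a table; the rows here are illustrative placeholders, not the model’s actual output:

```python
from html import escape

def comparison_table(headers, rows):
    """Render a simple comparison chart as an HTML table, similar
    in shape to what a chat model returns for this kind of request.
    escape() keeps any user-supplied text from breaking the markup."""
    head = "".join(f"<th>{escape(h)}</th>" for h in headers)
    body = "".join(
        "<tr>" + "".join(f"<td>{escape(c)}</td>" for c in row) + "</tr>"
        for row in rows
    )
    return f"<table><tr>{head}</tr>{body}</table>"

# Illustrative rows; the real content would come from the model.
html = comparison_table(
    ["Nose breathing", "Mouth breathing"],
    [["Filters and humidifies air", "Bypasses nasal filtering"],
     ["Supports steady airflow", "Can dry out the airway"]],
)
print(html.startswith("<table>"))
```

Because it is plain markup, the snippet can be pasted straight into a traditional or headless CMS.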
Still not impressed?
One of the most important parts of a blog post is imagery. Don’t forget you can use AI to create all of the imagery for a blog post, using nothing more than the title of each section of the post.
For example, I go to Midjourney and ask for a few illustrations and graphics for the blog article about mouth taping and nasal breathing, and I am quickly armed with images I can use without worrying about copyright infringement:
So, what does the future hold?
Ultimately, AI will become an essential part of the creative process. It will help us to produce better work faster and more efficiently, while freeing up time for us to focus on the things that really matter — creativity and innovation. So the next time you’re working on a project, don’t be afraid to embrace the power of artificial intelligence. It’s here to stay.
At this point, you’re probably in one of two camps:
You’re convinced that AI is going to change everything.
You’re still weirded out by the burger covered in mold.
No matter which camp you’re in…
It’s important to know that as a marketer, artificial intelligence isn’t something that you should be afraid of. Tools like Jasper, Midjourney, and even the AI Google Sheets tool created by Arielle Phoenix are just that… tools. But these tools come with great power, and as Stan Lee once wrote:
“With great power comes great responsibility.”
It’s on us to use AI tools appropriately.
These tools will arm you with the ability to do things that the early professionals of our industry couldn’t even imagine as a possibility. Technologies like this come only a few times in a century.
And in this moment, we all have an opportunity to capture something special. We can take part in what I believe will be the next great technological revolution in our industry. It’s going to be messy. It’s going to be chaotic. And it’s going to come with a lot of pushback from a lot of different actors, ranging from government to industry.
But even then… I’m confident that this is going to change everything.
Agree? Disagree?
I’d love to hear your take.
| 2023-01-02T00:00:00 |
2023/01/02
|
https://foundationinc.co/lab/artificial-intelligence-for-marketers/
|
[
{
"date": "2023/01/06",
"position": 86,
"query": "artificial intelligence graphic design"
}
] |
The Rise of AI Like Chatgpt and Other Chatbots Could Lead to Mass ...
|
The Rise of AI Like Chatgpt and Other Chatbots Could Lead to Mass Unemployment
|
https://hackernoon.com
|
[] |
In this article, I will explore the current state of AI and chatbots and their potential impact on employment. First, let's define what we ...
|
As the world becomes increasingly reliant on technology, artificial intelligence (AI) and chatbots have emerged as key players in many industries.
But with the rise of these advanced technologies comes the potential for mass unemployment, as AI and chatbots are able to automate tasks that were previously performed by humans.
In this article, I will explore the current state of AI and chatbots and their potential impact on employment.
First, let's define what we mean by AI and chatbots. AI refers to the ability of a computer or machine to perform tasks that would normally require human intelligence, such as learning, problem-solving, and decision-making.
Chatbots, on the other hand, are computer programs designed to simulate conversation with human users through artificial intelligence.
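To make that definition concrete, the simplest chatbots are just pattern-matching programs: scan the message for known keywords and return a canned reply. This hypothetical example shows the basic shape; systems like ChatGPT replace the lookup table with a large language model:

```python
def tiny_chatbot(message, rules, fallback="Sorry, I don't understand."):
    """A minimal rule-based chatbot: scan the message for known
    keywords and return a canned reply. AI chatbots like ChatGPT
    replace this lookup with a learned language model, which is
    what lets them handle open-ended conversation."""
    text = message.lower()
    for keyword, reply in rules.items():
        if keyword in text:
            return reply
    return fallback

rules = {
    "hours": "We are open 9am-5pm, Monday to Friday.",
    "refund": "Refunds are processed within 5 business days.",
}
print(tiny_chatbot("What are your opening hours?", rules))
```

The gap between this lookup table and a modern language model is exactly why the newer systems can take on work, from customer service to drafting copy, that the old scripted bots never could.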
Currently, AI and chatbots are being used in a variety of industries, from customer service to healthcare to finance. They are able to handle tasks such as answering customer inquiries, scheduling appointments, and processing transactions.
While these technologies have the potential to improve efficiency and reduce costs for companies, they also have the potential to displace human workers and contribute to mass unemployment.
This is a major concern that must be addressed as we continue to embrace the advancements of AI and chatbots.
The Potential for Mass Unemployment
As AI and chatbots continue to advance, it's becoming increasingly clear that they have the potential to automate many tasks currently performed by humans. In fact, some experts predict that AI could potentially replace up to 50% of all jobs within the next decade.
This raises the concern of mass unemployment, as the use of these technologies could lead to widespread job displacement.
One of the main industries that may be affected by the increased use of AI and chatbots is customer service. Chatbots are already being used to handle customer inquiries and resolve issues, and as they become more advanced, they may be able to handle more complex tasks.
This could lead to job loss for customer service representatives.
Other industries that may be impacted by the adoption of AI and chatbots include finance, healthcare, and manufacturing.
AI-powered machines and robots are already being used to perform tasks such as processing transactions, analyzing medical images, making medical diagnoses, and assembling products.
As these technologies continue to improve, they may be able to handle more complex tasks, potentially leading to job loss for workers in these industries.
It's important to note that the potential for mass unemployment is not a certainty. However, it is a concern that must be taken seriously as we continue to embrace the advancements of AI and chatbots.
Companies and governments have a responsibility to consider the potential consequences of adopting these technologies and to take steps to mitigate any negative impacts on employment.
The Ethical Considerations of AI and Chatbots
The rise of AI and chatbots raises important ethical considerations, particularly in terms of the impact on workers and their livelihoods. As these technologies are able to automate more tasks, there is a potential for widespread job displacement and unemployment.
This could have significant consequences for affected workers and their families, who may face financial insecurity and a lack of job opportunities.
In addition to the impact on workers, there are also ethical considerations surrounding the use of AI and chatbots more broadly. For example, the use of AI in decision-making processes, such as hiring or loan approval, raises concerns about bias and fairness.
There are also concerns about the potential for AI to be used in ways that harm or exploit people, such as in the development of autonomous weapons and the launch of cyber attacks.
Given these ethical considerations, it is important that companies and governments take responsibility for addressing the potential consequences of AI and chatbot adoption.
This may involve regulatory measures to ensure the responsible and ethical use of these technologies, as well as efforts to support affected workers through retraining and education programs.
It is only by considering the potential impacts of AI and chatbots on all stakeholders that we can ensure their responsible and ethical use.
Steps That Can Be Taken to Mitigate the Potential Negative Impacts of AI and Chatbots on Employment
While the potential for mass unemployment due to the adoption of AI and chatbots is a concern, there are steps that can be taken to mitigate the potential negative impacts on employment.
One important step is to provide retraining and education programs for affected workers. This can help ensure that workers have the skills and knowledge needed to transition to new roles or industries.
Governments and companies can also work together to provide support for affected workers, such as financial assistance and job placement services.
Another important step is to ensure that AI and chatbots are used ethically and responsibly.
This may involve launching awareness programs and regulating the use of these technologies to prevent abuses and ensure fairness, as well as promoting transparency and accountability in their development and deployment.
Finally, there may be a need to regulate the use of AI and chatbots to prevent widespread job displacement.
This could involve measures such as requiring companies to demonstrate that the adoption of these technologies will not result in significant job loss, or setting limits on the use of AI and chatbots in certain industries or for certain tasks.
By taking these steps, we can help ensure that the adoption of AI and chatbots is balanced with the protection of human workers and their livelihoods.
Bottom Line
As expressed, the rise of AI and chatbots brings with it many potential benefits, including improved efficiency and cost savings for companies. However, it also raises concerns about the potential for mass unemployment due to the automation of tasks currently performed by humans.
We must carefully consider the impact of AI and chatbots on employment and take steps to mitigate any negative consequences.
This may involve providing retraining and education programs for affected workers, ensuring the ethical and responsible use of these technologies, and regulating their use to prevent widespread job displacement.
Finding a balance between the use of AI and chatbots and the protection of human workers is essential. By doing so, we can ensure that the advancements of these technologies are a positive force for society rather than a source of disruption and inequality.
| 2023-01-06T00:00:00 |
https://hackernoon.com/the-rise-of-ai-like-chatgpt-and-other-chatbots-could-lead-to-mass-unemployment
|
[
    {
        "date": "2023/01/06",
        "position": 88,
        "query": "ChatGPT employment impact"
    },
    {
        "date": "2023/01/06",
        "position": 89,
        "query": "ChatGPT employment impact"
    },
    {
        "date": "2023/01/06",
        "position": 90,
        "query": "ChatGPT employment impact"
    }
] |
|
Robots Are Coming for White-Collar Workers, Too
|
Robots Are Coming for White-Collar Workers, Too
|
https://jacobin.com
|
[
"Ryan Zickgraf",
"Doug Henwood",
"Ramaa Vasudevan",
"David Calnitsky",
"David Moscrop",
"Chris Tilly",
"Alison Kowalski",
"Juan Sebastian Carbonell"
] |
A 2013 study from Oxford University suggested that nearly half of today's professions could be eliminated by automation over the next generation.
|
Elon Musk is right about one thing: OpenAI’s new chatbot prototype is “scary good.” How good is that? Consider this short excerpt from “Robots and Revolution,” a song I asked ChatGPT to create:
Some socialists say that AI could be a threat
To workers’ livelihoods, and the profits that they get
They argue that the profits from AI should be shared
And that the government should help those who are impaired
But others say that AI could be a force for good
It could reduce the need for labor, like it should
It could create a society where we’re free from wage labor
Where we can pursue our passions, without any favor
No, it’s not a work of particular genius. I’d grade “Robots and Revolution” a C-, with low marks for the clunky lyrical flow and lack of humor. On the other hand, this three-verse, one-chorus ditty was spat out in about thirty seconds and did a reasonable job summarizing arguments about AI in basic rhyme form. Plus, this just scratches the surface of the capabilities of the browser-based word genie. Since OpenAI released ChatGPT to the public in November, people have used it to write college-level essays, punch up their scripts’ dialogue, craft recipes for dinner, and write software code.
I decided to turn “Robots and Revolution” into a music video project, so, as a novice, I spent a few hours doing it myself with other AI tools. I signed up for a voice app (Murf) to provide vocals, Midjourney for computer-generated visuals, Amper Music to make a moody soundtrack, and Canva to help edit and produce it. I hesitate to call the result my music video because I didn’t make it so much as curate it, letting the AI do the rest. Doing so saved me the hefty labor cost of paying working artists, musicians, engineers, and graphic designers.
Likewise, some commercial websites have begun employing AI artists instead of illustrators, and content creators increasingly use algorithm-generated music to avoid directly paying artists or royalty fees.
It’s all the more fuel for the speculation that the so-called Fourth Industrial Revolution is already upon us. Klaus Schwab, the founder and executive chairman of the World Economic Forum, has described the Fourth Industrial Revolution as “a fusion of technologies blurring the lines between the physical, digital, and biological spheres.” It’s the convergence of AI, machine learning, advanced robotics, and biotech.
In previous industrial revolutions, machines took over many manual labor jobs, then repetitive assembly line work and analog office drudgery. Now they’re coming for higher-level “cognitive” work. A 2013 study from Oxford University suggested that nearly half of today’s professions could be eliminated by automation over the next generation, including many white-collar and skilled blue-collar jobs.
Kai-Fu Lee, AI expert and CEO of Sinovation Ventures, wrote a 2018 essay that speculated that 50 percent of jobs would be automated within a decade and a half. “Accountants, factory workers, truckers, paralegals, and radiologists — just to name a few — will be confronted by a disruption akin to that faced by farmers during the Industrial Revolution,” wrote Lee.
“Clearly AI is going to win,” Nobel Prize–winning psychologist Daniel Kahneman noted in 2021. “How people adjust is a fascinating problem.”
That adjustment disproportionately harms most workers.
Labor-saving technology is not inherently wrong. What’s troublesome is the process in which it gets integrated into the capitalist system. In Capital, Karl Marx explained how the replacement of labor with automation is a tool used by capital to weaken labor and enrich itself without regard for workers’ standard of living or the needs of society. “The instrument of labour, when it takes the form of a machine, immediately becomes a competitor to the worker himself,” Marx wrote.
That competition puts further downward pressure on wages on a mass of unskilled labor, an “artificial surplus population” of the unemployed. Marx called automation “the most powerful weapon for suppressing strikes, those periodic revolts of the working class against the autocracy of capital.” As a result, inequality grows.
Capitalist apologists are wrong when they say that while new tech might displace workers in the short term, it should eventually liberate them from mindless drudgery and allow them to work in more advanced sectors. (That’s the argument ChatGPT broached in the second verse of “Robots and Revolution.”) If that were the case, why do so many of today’s workers endure long hours and low pay, and hate their jobs, despite all the productivity gains from advances in electricity, computing, robotics, and other tech over the last two centuries?
If history tells us anything, it’s that without working-class organization, capital comes out on top.
It’s not all bad news. Shiny new AI tools like ChatGPT aren’t yet good enough in 2023 to replace, say, Beyoncé and Kendrick Lamar with singing and rapping robots. But we should heed the first verse of “Robots and Revolution” and seriously consider the medium- and long-term future of those of us employed in art, music, writing, coding, and other fields primed for AI disruption.
| 2023-01-07T00:00:00 |
https://jacobin.com/2023/01/robots-creative-jobs-automation-technology-openai-artificial-intelligence
|
[
{
"date": "2023/01/07",
"position": 15,
"query": "AI replacing workers"
},
{
"date": "2023/01/07",
"position": 71,
"query": "AI job creation vs elimination"
},
{
"date": "2023/01/07",
"position": 19,
"query": "artificial intelligence wages"
}
] |
|
Best Artificial Intelligence Jobs
|
Best Artificial Intelligence Jobs
|
https://dataaxy.com
|
[] |
The AI job market is thriving as more businesses invest in AI to improve ... 16% of all US jobs will be replaced by AI and Machine Learning by 2030 (Forrester).
|
Unlock your potential in prominent AI positions offered by leading companies and agencies, empowering you to thrive as an AI Engineer, Prompt Specialist, or Deep Learning Engineer as you delve into the exciting realm of artificial intelligence.
Jobs from All Over the Internet Leverage our advanced tech that aggregates the latest job offerings from every corner of the web.
Be the First To Know Receive fresh job alerts daily, ensuring you're always first in line.
Artificial Intelligence (AI) is a game-changing technology that is reshaping the way we live and work. It's an exciting field that offers a vast range of career opportunities for those interested in creating intelligent solutions to real-world problems.
Overview of the AI job market
Job market trends
The AI job market is thriving as more businesses invest in AI to improve efficiency and decision-making. This growing demand has created a plethora of opportunities for AI professionals across various industries.
Key industries
AI has applications in virtually every industry, including healthcare, finance, retail, transportation, and technology. These industries are leveraging AI for tasks such as predictive analytics, automation, customer service, and more.
Prominent companies
Top tech companies like Google, Amazon, Microsoft, and IBM are at the forefront of AI development and are constantly seeking AI talent. However, it's not just tech companies; businesses across sectors are recruiting AI professionals to help them harness the power of AI.
Roles and responsibilities of an AI professional
AI professionals design and implement AI models, solve complex problems, and work on innovative projects. They collaborate with other team members to develop AI-driven solutions that can transform business operations.
Skills required for AI jobs
Technical skills
Key technical skills for AI jobs include programming (Python, R, Java), machine learning, deep learning, natural language processing, and understanding of AI algorithms. Knowledge of AI platforms like TensorFlow, PyTorch, and Keras is also beneficial.
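As a concrete, if toy, illustration of the algorithmic side of these skills, here is a minimal sketch in plain Python with no frameworks; the data points, labels, and function name are invented for the example:

```python
# A hedged, minimal sketch of a 1-nearest-neighbour classifier,
# one of the simplest machine learning algorithms.
import math

def predict_1nn(train, label_of, point):
    """Return the label of the training point closest to `point`."""
    nearest = min(train, key=lambda p: math.dist(p, point))
    return label_of[nearest]

# Two tiny clusters of 2-D points (invented data).
train = [(1.0, 1.0), (1.2, 0.8), (8.0, 8.0), (7.5, 8.2)]
label_of = {(1.0, 1.0): "A", (1.2, 0.8): "A", (8.0, 8.0): "B", (7.5, 8.2): "B"}

print(predict_1nn(train, label_of, (1.1, 1.0)))  # near cluster A
print(predict_1nn(train, label_of, (7.9, 7.9)))  # near cluster B
```

Real roles would, of course, reach for frameworks like scikit-learn or PyTorch rather than hand-rolled code, but the underlying reasoning is the same.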
Soft skills
AI professionals should also have strong problem-solving skills, excellent communication abilities, and a knack for strategic thinking. They need to communicate their findings and insights effectively to both technical and non-technical stakeholders.
How to find AI jobs
Job portals
Online job portals like LinkedIn, Indeed, and Glassdoor are great places to start your job search. They provide a wide array of job listings and offer insights about companies and roles.
Networking
Networking is another effective way to find AI job opportunities. Attend industry events, webinars, and join online communities to connect with other professionals and learn about new opportunities.
Company websites
Many companies post job vacancies directly on their websites. Regularly checking the career sections of these websites can help you find opportunities that may not be listed on job portals.
Tips for landing an AI job
When applying for AI jobs, ensure that your CV and LinkedIn profile are up-to-date and showcase your relevant skills and experiences. Tailor your application to the job description and highlight the skills and experiences that best align with the role.
Prepare for interviews by reviewing common AI interview questions, discussing your past projects, and demonstrating your technical skills. It's also crucial to stay updated with the latest AI trends and advancements.
Engage in continuous learning and participate in relevant webinars and workshops to keep your skills sharp and relevant.
Conclusion
Artificial Intelligence is a rapidly expanding field, offering a wealth of opportunities for those with the right skills. By staying updated with the latest trends, honing your technical and soft skills, and being proactive in your job search, you can find exciting AI job opportunities and make a significant impact in this transformative field.
| 2023-01-07T00:00:00 |
https://dataaxy.com/jobs/artificial-intelligence
|
[
{
"date": "2023/01/07",
"position": 39,
"query": "machine learning job market"
}
] |
|
B.S. in Data Science - Mathematics & Statistics
|
B.S. in Data Science
|
https://www.valpo.edu
|
[] |
Data science, and its associated fields, is one of the fastest growing employment opportunities in the world. A strong demand in the job market — combined ...
|
B.S. in Data Science
The College of Arts and Sciences now offers a B.S. in data science. The video below gives an overview of our data science program:
What is a Data Scientist?
A data scientist analyzes complex systems and solves real-world problems through the analysis of data, and in particular, very large sets of data. Many scientific disciplines, our economy, and even our providers of streaming entertainment increasingly rely on data. You’ll work with a variety of methods including predictive/prescriptive analytics, algorithm design and execution, applied machine learning, statistical modeling, and data visualization.
Why Data Science?
Data science, and its associated fields, is one of the fastest growing employment opportunities in the world. A strong demand in the job market — combined with a shortage of people trained in data science —means you’ll have opportunities in all sectors of society.
According to the Education Advisory Board, some of the top occupations requiring data analysis skills are:
Marketing managers
Financial analysts
Computer systems engineers/architects
Business intelligence analysts
Computer programmers
And, some top employers for data analytics-related skills are:
Amazon.com
UnitedHealth Group
Microsoft Corporation
JP Morgan Chase Company
General Electric Company
The new bachelor of science in data science at Valpo is designed to give graduates the interdisciplinary skills that employers need. The data science program integrates statistics, mathematics, computer science, and data science to produce graduates with the skills needed to evaluate and interpret data. You will gain a broad skill set that will be attractive to employers in this thriving field.
Degree Requirements — Bachelor of Science in Data Science
In order to graduate with a bachelor of science in data science at Valpo, a minimum of 41 credit hours is required. Students must take courses in data science and from the partner disciplines of statistics, mathematics, and computer science. Additionally, students should explore an area of application through selection of one or more courses from an appropriate field, as described below. Students are strongly encouraged to take a minor or a second major in their applied field of interest.
Required Courses
| 2023-01-07T00:00:00 |
https://www.valpo.edu/mathematics-statistics/academics/degree-programs/bs-in-data-science/
|
[
{
"date": "2023/01/07",
"position": 98,
"query": "machine learning job market"
}
] |
|
The future of work: How AI and chatbots like GPT ... - code{32}
|
The future of work: How AI and chatbots like GPT are reshaping the global economy
|
https://code32.net
|
[] |
There may be major shifts in the job market as a result of the widespread use of chatbots and other forms of AI technology.
|
The advent of widespread use of AI promises to revolutionize not only the business world, but also our everyday life. As an illustration, consider the rise of conversational robots like GPT (Generative Pre-trained Transformer), which can carry on natural-sounding conversations and carry out duties like customer service and data collection.
The widespread use of chatbots and other forms of AI technology may bring major shifts in the job market. As AI and automation increasingly replace people, job markets will change and previously unneeded skills will come into demand. On the other hand, these technologies have the potential to enhance productivity and generate new employment opportunities.
Significant as well are the geopolitical ramifications of AI. As AI becomes more commonplace, governments who are able to develop and deploy these technologies more quickly may have an advantage in terms of economic growth and military capability. The potential for the misuse of AI in military or espionage situations is a big issue, and this could lead to an AI “arms race” between governments.
Adopting AI may also have far-reaching effects on society. There is concern that industries like customer service, which rely heavily on human connection, may see a decline as a result of rising use of automation and machine learning algorithms. Social harmony and interpersonal relationships may suffer as a result.
The overall economic and social effects of AI are intricate and varied. In order to avoid unintended repercussions, governments, organizations, and individuals must responsibly create and utilize AI.
| 2023-01-07T00:00:00 |
https://code32.net/future-of-work-ai-chatbots-like-gpt-reshaping-global-economy/
|
[
{
"date": "2023/01/07",
"position": 73,
"query": "future of work AI"
}
] |
|
What is Economic Uncertainty and Why Does it Matter?
|
What is Economic Uncertainty and Why Does it Matter?
|
https://www.tutor2u.net
|
[] |
The pandemic has disrupted supply chains, caused widespread job losses, and led to a sharp contraction in economic activity. The global financial crisis: The ...
|
Economic uncertainty refers to a situation in which the future economic environment is difficult to predict, and there is a high degree of risk or unknowns involved.
This can be caused by a variety of factors, including political instability, changes in government policies, natural disasters, and market fluctuations.
Examples of economic uncertainty include:
Volatility in financial markets: When stock prices or exchange rates fluctuate significantly, it can create uncertainty for investors and businesses. This was evident during the Global Financial Crisis and again during and after the Covid-19 pandemic. Many countries have volatile exchange rates, which increases the risks for businesses and overseas investors.
Changes in macroeconomic policies: For example, if a government announces plans to change direct and indirect tax rates or regulations, it can create uncertainty for businesses and consumers. Likewise, uncertainty can arise when a central bank changes the direction of monetary policy and starts changing interest rates.
Natural disasters: Events like earthquakes, hurricanes, and other natural disasters can disrupt supply chains and economic activity, creating uncertainty. Many countries, including numerous lower-income nations, have economies highly susceptible to the consequences of climate change.
Political instability: Unrest or instability in a country can create uncertainty for businesses and investors.
Uncertainty can affect behaviour in a number of ways.
For example, it can cause businesses to hold off on making investments or hiring new employees, as they are unsure about the future economic environment.
Consumers may also become more cautious about spending money, as they are uncertain about their own financial situation. This can lead to an increase in precautionary saving and a rise in the marginal propensity to save.
When uncertainty is high, there is increased risk of an economic recession as agents hold back on consumption and investment decisions.
Overall, economic uncertainty can lead to a decrease in economic activity, as people and businesses become more risk-averse.
There are many events that have caused economic uncertainty in the past. Some examples include:
| 2023-01-07T00:00:00 |
https://www.tutor2u.net/economics/reference/what-is-economic-uncertainty-and-why-does-it-matter
|
[
{
"date": "2023/01/07",
"position": 74,
"query": "AI economic disruption"
}
] |
|
Data Analyst and Machine Learning Engineer
|
Data Analyst and Machine Learning Engineer at Prembly (formerly Identitypass)
|
https://www.ycombinator.com
|
[] |
Job brief We are looking for a Machine Learning (ML) Engineer to help us create artificial intelligence products. Machine Learning Engineer responsibilities ...
|
Job brief
We are looking for a Machine Learning (ML) Engineer to help us create artificial intelligence products. Machine Learning Engineer responsibilities include creating machine learning models and retraining systems. To do this job successfully, you need exceptional skills in statistics and programming. If you also have knowledge of data science and software engineering, we’d like to meet you. Your ultimate goal will be to shape and build efficient self-learning applications.
Responsibilities
Interpret data, analyze results using statistical techniques and provide ongoing reports
Develop and implement databases, data collection systems, data analytics and other strategies that optimize statistical efficiency and quality
Acquire data from primary or secondary data sources and maintain databases/data systems
Identify, analyze, and interpret trends or patterns in complex data sets
Filter and “clean” data by reviewing computer reports, printouts, and performance indicators to locate and correct code problems
Work with management to prioritize business and information needs
Locate and define new process improvement opportunities
Study and transform data science prototypes
Design machine learning systems
Research and implement appropriate ML algorithms and tools
Develop machine learning applications according to requirements
Select appropriate datasets and data representation methods
Run machine learning tests and experiments
Perform statistical analysis and fine-tuning using test results
Train and retrain systems when necessary
Extend existing ML libraries and frameworks
Keep abreast of developments in the field
Requirements and skills
Proven experience as a Machine Learning Engineer or similar role
Computer vision skills
Understanding of data structures, data modeling and software architecture
Deep knowledge of math, probability, statistics and algorithms
Ability to write robust code in Python, Java and R
Familiarity with machine learning frameworks (like Keras or PyTorch) and libraries (like scikit-learn)
Excellent communication skills
Ability to work in a team
Outstanding analytical and problem-solving skills
BSc in Computer Science, Mathematics or similar field; Master’s degree is a plus
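As a sketch of the "run machine learning tests and experiments" responsibility, here is a minimal holdout evaluation in plain Python; the dataset, the trivial majority-class baseline, and all names are invented for illustration:

```python
# Hedged sketch of a holdout evaluation: split data, fit a trivial
# baseline on the training portion, measure accuracy on the test portion.
import random

def holdout_split(data, test_fraction=0.25, seed=0):
    """Shuffle `data` reproducibly and split it into train/test lists."""
    rng = random.Random(seed)
    shuffled = data[:]
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * (1 - test_fraction))
    return shuffled[:cut], shuffled[cut:]

def majority_baseline(train_labels):
    """A trivial 'model': always predict the most common training label."""
    return max(set(train_labels), key=train_labels.count)

# Invented toy dataset: 40 items, two classes.
data = [("x%d" % i, "spam" if i % 3 == 0 else "ham") for i in range(40)]
train, test = holdout_split(data)
prediction = majority_baseline([label for _, label in train])
accuracy = sum(label == prediction for _, label in test) / len(test)
print(round(accuracy, 2))
```

A real experiment would swap the baseline for a trained model (e.g., via scikit-learn) and use cross-validation, but the train/test discipline is the same.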
| 2023-01-07T00:00:00 |
https://www.ycombinator.com/companies/prembly/jobs/gKkiuiJ-data-analyst-and-machine-learning-engineer
|
[
{
"date": "2023/01/07",
"position": 81,
"query": "generative AI jobs"
}
] |
|
AI Art Generator - AI Picture on the App Store
|
AI Art Generator - AI Picture
|
https://apps.apple.com
|
[] |
Apple Intelligence · Apps by Apple · Continuity · iCloud+ · Mac for Business ... Graphics & Design. Compatibility. iPhone: Requires iOS 15.0 or later. iPad ...
|
Cybernettr ,
The description is vague as to how it works. It appears to use Stable Diffusion (open source). Needless to say, it does not run directly on your device, but must connect to the Internet: the artwork is generated on remote servers. That means if the servers go down, the app will stop working. Still, you can get your money’s worth out of this quickly if you use it a lot. Fidelity to the prompt is not always very good. For example, I prompted “Tim Cook showing off the latest iPhone” and it showed a knight, a cyborg, and other things, but none of the images looked like Tim Cook. In another app I use, all of the images looked like Tim Cook. When I requested “Rembrandt in the style of Rembrandt,” it worked great, but when I prompted “Rembrandt painting a self portrait,” it always gave me an “unknown error!” On the plus side, the images are a lot higher resolution than they appear on your phone. If you download them to a device with a large screen, like a computer, you will be able to enjoy them in their full resolution, which is pretty decent!
| 2023-01-07T00:00:00 |
https://apps.apple.com/us/app/ai-art-generator-ai-picture/id1527512896
|
[
{
"date": "2023/01/07",
"position": 88,
"query": "artificial intelligence graphic design"
}
] |
|
Robot Tax and Automation Propelled Job Displacement |
|
Robot Tax and Automation Propelled Job Displacement
|
https://fazalali.com
|
[] |
Time is needed for policies and mechanisms that may counteract the adverse effects of automation-propelled job displacement. Time is needed for these new jobs ...
|
The sorrows and desperation we can name will never exceed the richness of reality. There is always an excess. And this excess overflows with possibilities only half disclosed. As we scrutinize the horizon of intelligence, machine learning, and digital skills, what worries us most is a future that works. Time is needed for policies and mechanisms that may counteract the adverse effects of automation-propelled job displacement. Time is needed for these new jobs to define themselves, for workers to retool, and for new entrants into the future of work to prepare.
On the interplay of AI, robotics, and globalization, the political scientist Darrell West and the economist Frank Levy agree that job displacement in major industries will inevitably fuel major economic disruptions. This in turn will fuel even deeper anti-democratic populist responses and political polarization.
Richard Baldwin’s analysis is far less fatalistic, but still quite alarming. He believes that the nascent interplay of robotics, AI, and globalization is advancing inhumanly fast, and that its explosive pace is injecting force into the socio-political system via a rate of job displacement that exceeds our ability to absorb workers through job replacement.
A report by the McKinsey Global Institute (2017) titled “Jobs Lost, Jobs Gained: Workforce Transitions in a Time of Automation” estimates that up to one-third of the 2030 workforce in the US and Germany may need to learn new skills and find new occupations. In a parallel paper, “A Future That Works: Automation, Employment and Productivity” (2017), the McKinsey Global Institute forecasts that as much as fifty per cent of occupations would be affected in one way or another by technological change.
“Robots and Jobs: Evidence from US Labor Markets” (2020), published in the Journal of Political Economy, 128, No. 6, pp. 2188-2244, examined the period 1990–2007, during which robot density increased by roughly one robot per 1,000 workers. Estimates showed that employment shrank on average by 0.4 per cent – one robot replacing four workers – and wages by 0.8 per cent. In France, Daron Acemoglu, Claire Lelarge, and Pascual Restrepo (2020) published a study of competition with robots in the American Economic Association Papers and Proceedings 110, pp. 383-88. In a sample that exceeded 50,000 firms between 2010 and 2015, they found the same negative effects on employment and wages and positive effects on productivity.
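The per-robot figure follows from simple arithmetic (0.4 per cent of 1,000 workers is 4 workers), which a quick check confirms:

```python
# Arithmetic behind "one robot replacing four workers": a 0.4 per cent
# employment decline per additional robot per 1,000 workers.
workers = 1000
decline = 0.004  # 0.4 per cent

displaced_per_robot = workers * decline
print(round(displaced_per_robot))  # 4
```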
A salient concern that is emerging from these discourses is that policy is required that could attenuate potential inegalitarian consequences of digitalization-based technological change. In advanced economies, there is a marked difference between disposable income inequality and the evolution of market income. As it turns out, the former proves more stable than the latter. This means that redistribution was able to offset some inequality shock on market incomes.
For these economies, the puzzle is whether redistribution can be as effective in the futures under construction if inequality shocks on market incomes amplify under the force of globotics, automation, and IoT. François Bourguignon, director of studies at Ecole des Hautes Études en Sciences Sociales and former senior vice president of the World Bank, believes that tax can have a stabilizing role with four aims: to influence the speed and direction of innovation; to capitalize safety nets for occupational transitions; to circumvent an explosion in disposable income inequality; and to make sure that aggregate demand can play a role in compensating for job displacement.
But none of these economic policies can have any effect outside a new ecology of schooling and lifelong learning that shifts education funding from the lecture hall to the factory floor and office cubicle. The risk is that tax policy, and shifts in the ecology of schooling, may not keep pace with globotics and AI, and so fail to avert the adverse effects of automation-propelled job displacement.
Two MIT Professors of Economics have asked the elementary question: What if the U.S. placed a tax on robots? In Working Paper 25103, titled, “Robots, Trade, and Luddism: A Sufficient Statistic Approach to Optimal Technology Regulation”, by Arnaud Costinot and Iván Werning (2018), it is argued that robot density is increasing and artificial intelligence technologies are spreading rapidly alongside imports from China and other developing economies. They argue that these changes create opportunities for some workers, extinguish chances for others, and generate significant distributional consequences.
Because robots can replace jobs, a tax on robots would incentivize firms to retain human workers, while also compensating for a dropoff in payroll taxes when robots are used. But slowing the increase in robot density with a tax is far from simple. How can a tax be designed that applies exclusively to automation, machinery or devices without affecting capital equipment overall? Is a robot different from an algorithm?
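The incentive logic described above can be sketched with a toy cost comparison; all of the figures below are invented purely for illustration and do not come from the studies cited here:

```python
# Toy illustration with invented numbers: a robot tax can flip the
# cost comparison between employing a worker and deploying a robot.
worker_wage = 40_000          # hypothetical annual wage
payroll_tax_rate = 0.10       # hypothetical payroll tax on wages
robot_annual_cost = 42_000    # hypothetical annualized robot cost

cost_of_worker = worker_wage * (1 + payroll_tax_rate)  # 44,000

def cheaper_option(robot_tax_rate):
    """Return which option costs less once a robot tax is applied."""
    cost_of_robot = robot_annual_cost * (1 + robot_tax_rate)
    return "robot" if cost_of_robot < cost_of_worker else "worker"

print(cheaper_option(0.00))  # without a robot tax, automating is cheaper
print(cheaper_option(0.10))  # a 10% robot tax makes the worker cheaper
```

The tax also recoups some of the payroll revenue lost when a worker is replaced, which is the second effect the paragraph notes.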
How do you define a robot in a way that separates it from any other tool or piece of equipment? Taxing automation may eventually have to find a passage via increased taxation of capital. Under the present conditions, it is not clear whether the existing tax regime provides extensive incentives to automation investment or to labour-saving equipment in general. South Korea has reduced incentives for firms to deploy robots; European Union policymakers, on the other hand, considered a robot tax but did not enact it.
Dr Fazal Ali completed his Masters in Philosophy at the University of the West Indies, he was a Commonwealth Scholar who attended the University of Cambridge, Hughes Hall, provost of the University of Trinidad and Tobago and the acting president, and chairman of the Teaching Service Commission. He is presently a consultant with the IDB.
| 2023-01-08T00:00:00 |
2023/01/08
|
https://fazalali.com/2023/01/08/robot-tax-and-automation-propelled-job-displacement/
|
[
{
"date": "2023/01/08",
"position": 11,
"query": "automation job displacement"
},
{
"date": "2023/01/08",
"position": 89,
"query": "AI replacing workers"
}
] |
Dr. Michael Cary's Quest for Health Equity in AI and Algorithms
|
HER: Health Equity Reimagined: Dr. Michael Cary's Quest for Health Equity in AI and Algorithms
|
https://nursing.duke.edu
|
[] |
... eliminating unfair, avoidable, or remediable differences among groups of people. ... Cary and his team have created a curriculum designed to train health ...
|
Dr. Michael Cary's Quest for Health Equity in AI and Algorithms
At a time when health equity and social justice have never been more critical, Duke University School of Nursing (DUSON) stands firmly at the forefront of the movement.
Michael Cary, PhD, RN, FAAN, the Elizabeth C. Clipp Term Chair and Associate Professor at DUSON, is emblematic of the school's dedication to forging a path toward a more equitable healthcare delivery system. As a driving force in healthcare and technology, Dr. Cary exemplifies DUSON's commitment to advancing health equity.
A Champion for Health Equity
Dr. Cary's journey into the intersection of healthcare and technology was ignited by a powerful vision: a future health system revolutionized by AI and advanced technology to provide ethical and equitable care for all. His unwavering commitment to leveling the healthcare playing field, particularly for marginalized communities, has been instrumental in shaping his pioneering work.
In his role as the Duke AI Health Equity Scholar, Dr. Cary leads initiatives aimed at safeguarding patients and ensuring that AI-enabled technologies in healthcare do not perpetuate disparities, but rather become powerful catalysts for fairness.
Eliminating Bias in Healthcare Algorithms
Dr. Cary's mission to eliminate biased algorithms spans various dimensions:
Research: Advancing the development of AI and machine learning models that are less prone to bias. Dr. Cary and collaborators are at the forefront of developing algorithms guided by the 'health equity by design' framework, an innovative and comprehensive approach that places a central focus on eliminating unfair, avoidable, or remediable differences among groups of people.
Education and Training: Designing curriculum and training programs, ensuring that AI tools are used in a manner that enhances equitable patient care and safety.
Health System: Dr. Cary recognizes the critical role that healthcare systems play in implementing AI solutions. To address bias within these systems, Dr. Cary works closely with healthcare professionals, administrators, and data scientists to ensure that AI is deployed thoughtfully and responsibly.
Recent Breakthrough: Health Affairs Paper
Dr. Cary's recent groundbreaking work, published in Health Affairs, addresses a pressing issue in healthcare. In July 2022, the Department of Health and Human Services proposed a rule prohibiting discrimination by healthcare providers and health plans when using clinical algorithms. However, it offered no specific guidance on achieving this vital goal. Dr. Cary and his team responded with the most comprehensive review to date, encompassing 109 articles, and providing actionable recommendations to mitigate bias in clinical algorithms.
These recommendations serve as a call to action for researchers, developers, healthcare organizations, and policymakers. They offer a roadmap for achieving health equity in healthcare by involving diverse patients and communities, integrating health equity considerations, fostering diverse, well-trained teams, expanding empirical evidence, and promoting transparency and accountability. "Embracing these recommendations is not merely a matter of compliance but a commitment to creating a healthcare system where bias has no place, discrimination is eradicated, and every patient receives ethical and equitable care," said Cary.
A Comprehensive Approach to Health Equity
What sets Dr. Cary apart is his approach to health equity. Acknowledging that health disparities are deeply rooted in issues such as structural racism, unequal access to resources, and socio-economic conditions, he goes beyond eliminating biased algorithms. Dr. Cary actively shapes policies and practices to address systemic factors that contribute to healthcare disparities.
Moreover, his involvement in hosting the Duke CTSI READI - Community STEAM event for young adults on October 21st underscores his commitment to education and community engagement. This event serves as a testament to DUSON's mission of nurturing young talent in science, technology, engineering, art, and mathematics.
On December 1, Dr. Cary is set to deliver a presentation to Epic’s Equitable Brain Trust. This initiative exemplifies DUSON's commitment to partnership and leadership in the quest for a more equitable healthcare system. It challenges the status quo and centers efforts on saving lives among the most vulnerable and structurally oppressed.
On January 8, 2023, the FAIR HEALTH (Fostering AI Research for Health Equity and Learning Transformation)™ Workshop will take place at Kirby Hall at Duke Gardens. This workshop will bring together experts from various disciplines, including health professionals (nurses, physicians, therapists, etc.), data scientists, and others, to address the critical issue of algorithmic bias. Dr. Cary and his team have created a curriculum designed to train health professionals to identify and mitigate bias within clinical algorithms. The goal is to 1) showcase the curriculum as part of the workshop, 2) apply mitigation strategies to case studies, and 3) measure awareness and knowledge gained via pre- and post-surveys.
Additionally, a Research Symposium is scheduled for March 13-14, 2024 at DUSON. Using the recommendations for mitigating racial and ethnic bias in clinical algorithms in the recently published Health Affairs paper, this research symposium will further the discussions on algorithmic bias. The symposium will include keynote presentations, breakout sessions, and a panel discussion. The goal is to create a list of key priorities to be published as part of a research-setting agenda for the field.
As a tireless advocate for health equity, Dr. Michael Cary's contributions serve as a testament to DUSON's leadership role. His dedication serves as a beacon for our mission—to lead the charge in advancing health equity, advocating for social justice, and blazing a trail toward a brighter, more equitable future in healthcare. At DUSON, we firmly believe that progress toward health equity begins with individuals like Dr. Cary, whose unwavering commitment to this cause inspires us all.
| 2023-01-08T00:00:00 |
https://nursing.duke.edu/about-us/health-equity-reimagined/solutions-in-action/health-equity-ai-algorithms
|
[
{
"date": "2023/01/08",
"position": 92,
"query": "AI job creation vs elimination"
}
] |
|
How technology is redrawing the boundaries of the firm
|
How technology is redrawing the boundaries of the firm
|
https://www.economist.com
|
[] |
Economic & financial indicators. Opinion. Opinion. Leaders · Columns · By ... For a few AI whizzes, pay is going ballistic. Get The Economist app on iOS or ...
|
Technology and business are inextricably linked. Entrepreneurs harness technological advances and, with skill and luck, turn them into profitable products. Technology, in turn, changes how firms operate. Electricity enabled the creation of larger, more efficient factories, since these no longer needed to depend on a central source of steam power; email has done away with most letters. But new technologies also affect business in a subtler, more profound way. They alter not just how companies do things but also what they do—and, critically, what they don’t do.
| 2023-01-08T00:00:00 |
2023/01/08
|
https://www.economist.com/business/2023/01/08/how-technology-is-redrawing-the-boundaries-of-the-firm
|
[
{
"date": "2023/01/08",
"position": 50,
"query": "AI economic disruption"
}
] |
What is Artificial Intelligence Engineering?
|
What is Artificial Intelligence Engineering?
|
https://professionalprograms.mit.edu
|
[] |
Applied Generative AI for Digital Transformation · Blockchain ... The debate about AI taking over jobs is a heated topic. One thing is certain ...
|
What is Artificial Intelligence Engineering?
AI engineering is the process of combining systems engineering principles, software engineering, computer science, and human-centered design to create intelligent systems that can complete certain tasks or reach certain goals.
To better explain AI engineering, it is important to discuss AI engineers, or some of the people behind making intelligent machines. AI engineers work with large volumes of data to create intelligent machines. Sophisticated algorithms help businesses in all industries including banking, transportation, healthcare, and entertainment. AI is the disruptive technology behind virtual assistants, streaming services, automated driving, and critical diagnoses in medical centers.
What Do AI Engineers Actually Do?
AI engineers build AI models using machine learning (ML) algorithms and deep neural networks (DNN) to draw business insights. They must:
Have sound knowledge of programming, software engineering, and data science
Use different tools and techniques to process data
Develop and maintain AI systems
Some responsibilities of AI engineers are:
Build AI models from the ground up and explain results to product managers and stakeholders
Develop, test, and deploy AI models
Convert machine learning models into APIs so other applications can utilize them
Build data ingestion and data transformation infrastructure
Work alongside data and business analysts
Execute statistical analysis and tune results to extract better insights
Automate infrastructure used by the data science team
Create and manage AI development and production infrastructure
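One of the listed responsibilities, converting a model into an API, can be sketched in a few lines. Everything here is illustrative: the fixed weights are a hand-set stand-in for a trained model, and a real deployment would load a serialized model and serve `handle_request` behind a web framework rather than calling it directly.

```python
import json
from math import exp

# Hypothetical stand-in for a trained ML model: a hand-rolled
# logistic scorer with fixed weights. A real project would load
# a serialized model produced by a framework such as scikit-learn.
WEIGHTS = [0.8, -0.5]
BIAS = 0.1

def predict(features):
    # Logistic (sigmoid) score in [0, 1] from a linear combination.
    z = BIAS + sum(w * x for w, x in zip(WEIGHTS, features))
    return 1.0 / (1.0 + exp(-z))

def handle_request(body: str) -> str:
    """API-style entry point: JSON in, JSON out, so any application
    that speaks HTTP/JSON could call the model without knowing its
    internals."""
    features = json.loads(body)["features"]
    return json.dumps({"score": round(predict(features), 4)})

print(handle_request('{"features": [1.0, 2.0]}'))
```

The JSON-in/JSON-out boundary is the point of the exercise: downstream applications depend only on the request and response shapes, not on how the model works internally.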
AI engineers play a key role across industries because they turn valuable data into guidance that can steer companies to success. The finance industry uses AI to detect fraud and the healthcare industry uses AI for drug discovery. The manufacturing industry uses AI to reshape the supply chain and enterprises use it to reduce environmental impacts and make better predictions. AI engineers provide essential solutions.
What are the Most Important Skills an AI Engineer Must Have?
An AI engineer should possess a foundation in computer science and other knowledge including:
Programming skills
Neural networks
Data Science
Statistics and probability
Data Engineering
Exploratory data analysis
Other general skills help AI engineers reach success like effective communication skills, leadership abilities, and knowledge of other technology. Other disruptive technologies AI engineers can work with are blockchain, the cloud, the internet of things, and cybersecurity. Companies value engineers who understand business models and contribute to reaching business goals too. After all, with the proper training and experience, AI engineers can advance to senior positions and even C-suite-level roles.
Tips on How to Become an Artificial Intelligence Engineer
Becoming an AI engineer is a challenging task. Besides having deep computer science knowledge, an AI engineer must be adaptable and valuable to a company. Here are some tips that can guide you toward your goal:
1. Education
Many job postings for an AI engineer position require a master’s degree. At the same time, many job postings mention they are flexible. Some of the degrees employers look for are:
Computer Science
Statistics
Mathematics
Electrical Engineering
Physics
Economics
2. Technical Skills & Concepts
Honing your technical skills is critical if you want to become an artificial intelligence engineer. Programming, the software development life cycle, modularity, and statistics and mathematics are some of the more important skills to focus on while obtaining a degree. Furthermore, technological skills in big data and cloud services are also helpful.
3. Experience
It’s important to have some experience in AI engineering to find a suitable position. Starting in a company as an intern may help. The majority of offers come from big firms with more than 10,000 employees. Further, most job postings come from information technology and retail & wholesale industries. There is also a substantial amount of open job positions in consulting & business, education, and financial services. This information can help when finding entry-level positions.
4. Geography
Finding tech job positions means following tech companies. While many tech companies are located in the United States, there are many large companies located all over the world. Additionally, tech startups are a global movement. That means you can find an AI engineering position in most countries. Nevertheless, the United States has a large number of AI engineering positions.
5. Continuing Education
Continuing your education is helpful in every industry. Taking courses in digital transformation, disruptive technology, leadership and innovation, high-impact solutions, and cultural awareness can help you further your career as an AI engineer.
The Future of AI
Businesses need to embrace AI, and consequently the future of artificial intelligence and of artificial intelligence engineers is promising. Many industry professionals believe that strong versions of AI will have the capabilities to think, feel, and move like humans, whereas weak AI, or most of the AI we use today, only has the capacity to think minimally.
This AI evolution translates to a higher job outlook for AI and ML engineers.
Gain Knowledge in Disruptive Technology at MIT Professional Education
AI is the moving force behind machine learning, the internet of things, cloud services, and cybersecurity. Learning about these disruptive technologies helps you reach your career goals. MIT Professional Education offers courses in technology, sustainability, leadership & innovation, and high-impact solutions that can help you reach your career goals in AI engineering and beyond. Some courses from MIT Professional Education that can boost technology roles include:
Find out more on how MIT Professional Education can help you reach your career goals.
| 2023-01-02T00:00:00 |
2023/01/02
|
https://professionalprograms.mit.edu/blog/technology/artificial-intelligence-engineering/
|
[
{
"date": "2023/01/08",
"position": 46,
"query": "generative AI jobs"
}
] |
Does Artificial Intelligence Increase Human Athletic ...
|
Does Artificial Intelligence Increase Human Athletic Performance?
|
https://aegai.nd.edu
|
[
"Gareth Spiteri"
] |
Better-funded division-one schools can pay athletes high wages. Instead ... “Artificial Intelligence (AI) in Sports.” Sport Performance Analysis. https ...
|
Introduction
The story of Leicester City winning the Premier League in 2016 was a surprise to any soccer fan across the world. With their odds of winning being five thousand to one, it is a feat that should not have happened. To put that into perspective, in 2016, five thousand to one odds were the same as Elvis Presley still being alive, the Loch Ness monster being proven to exist, or Kim Kardashian becoming the president of the United States in 2020. Leicester City had an average commercial income of £29.3 million that year, while the runner-up in the league, Arsenal, had a commercial income of £146.8 million. The Premier League is known for its "Big 6", which are typically the only teams that compete for the league title, as they are the only ones with the funds to pay world-class players and train in the best conditions. However, Leicester City changed that with a team consisting of relatively unknown players who fought for every win they could. But what if the wealthiest teams in the league, such as Manchester City or Arsenal, had the technology to break down Leicester City's strategy and develop an optimized strategy to defeat them? Would winning the league still be possible for a club that did not have the funds to compete with technology such as that?
That technology is slowly becoming real for many sports across the globe. AI is beginning its entrance into nearly every aspect of sports, from optimized training programs to injury prevention. While optimized training and strategy programs are still maturing toward reliable use, injury prevention technology is already being applied dependably. In the case of basketball, "AI technology can improve the training level of basketball players, help coaches formulate suitable game strategies, prevent sports injuries, and improve the enjoyment of games".[i] Though the author of that 2021 statement claims that AI in sports is still in its infancy, the technology's true impact is hard to gauge. The process of how AI works, and some of the implications of its entrance into sports, have begun to be laid out.
How will AI Prevent Athletic Injuries?
One area of sports that AI is entering is injury prevention. Injury prevention inherently does not improve an athlete's skill level. Instead, it keeps athletes healthy and prepared to push their bodies safely; hence, it is likely to attract little controversy. The primary method used in injury prevention is artificial neural networks (ANN), "a computational model based on the structure and functions of biological neural networks. Information that flows through the network affects the structure of the ANN because a neural network changes—or learns, in a sense—based on inputs and outputs"[ii] which you can see a representation of below. Artificial neural networks are used in various sports, ranging from American football to handball, to help athletes prevent injuries. One of the most exciting uses is preventing concussions in American football. According to the NFL, after their partnership with AWS, the machine learning program helped decrease concussions by 38% after a suggestion to change a kickoff rule.[iii]
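A minimal sketch of the forward pass of such a network may help make the idea concrete. The feature names and all weights below are invented for illustration; a real injury-risk model learns its weights from motion-capture and medical data rather than having them set by hand.

```python
from math import exp

def sigmoid(z):
    # Squashes any real number into the range (0, 1).
    return 1.0 / (1.0 + exp(-z))

def forward(inputs, hidden_weights, output_weights):
    """One forward pass through a tiny feedforward network:
    inputs -> hidden layer -> single risk score in (0, 1)."""
    hidden = [sigmoid(sum(w * x for w, x in zip(row, inputs)))
              for row in hidden_weights]
    return sigmoid(sum(w * h for w, h in zip(output_weights, hidden)))

# Hypothetical biomechanical features (e.g. knee-angle asymmetry,
# landing-force ratio) and illustrative fixed weights.
features = [0.7, 0.2]
hidden_w = [[1.5, -0.8], [0.4, 2.0]]
output_w = [1.2, -0.6]
risk = forward(features, hidden_w, output_w)
print(f"predicted injury risk: {risk:.3f}")
```

The "learning" the quoted definition describes is the process of adjusting `hidden_w` and `output_w` so that the score matches observed injury outcomes, which this sketch omits.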
Alongside the artificial neural network in injury prevention, the data set that the ANN uses is generated from an athlete's movements to learn and predict. Technology for "mocap systems" has been used in movie production since the 1980s[iv] but has been adapted to track athletes' biomechanics. By using motion capture technology, doctors and AI can see how an athlete's joints and muscles move to perform movements. For example, if the athlete is performing a jump squat, the motion capture can show the angles at which the joints are moving. This gives insight into which leg the athlete is putting more pressure on that can lead to injury in the future. However, this is not a cheap process to do. Besides needing to pay workers to operate the system and then interpret the results, the setup can cost up to five hundred thousand dollars.[v]
The Process Behind Strategy Optimization
An area that might raise significant controversy is AI optimizing strategy and tactical decision-making. This process seems to be more involved than injury prevention as this is attempting to optimize teams, players, money, and strategy all at once. Because of many factors, the process is split into two sections, backroom decision-making, and on-field decision-making. Backroom decision-making involves scout information, player transfers, and the squad of players available. On-field decision-making involves but is not limited to training, tactics, and team selection.
Each step in tactical decision-making requires its own set of training and validation data and processes to get a complete picture. For example, in the case of tactics/match preparation, deep learning has been applied to model behaviors of players in both basketball and football. Through this, “A simulation is run to see how an AI team would move in certain situations with the AI team created by ‘ghosting’ the characteristics of average and top teams. This helps to identify where teams can make changes to their players’ movements and change events to improve the probability of scoring a basket/goal or reduce the probability of conceding.”[vi] To the right are the results of past ghosting models utilized in basketball.
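The "ghosting" idea can be illustrated with a deliberately simple stand-in: instead of the deep models the researchers actually used, a nearest-neighbour average over invented historical data predicts where a ghost defender would move in a given game state.

```python
# Toy "ghosting" sketch: given game states from historical data
# (here, 2-D ball positions) and where top-team defenders moved in
# response, predict a ghost defender's position for a new state by
# averaging the responses from the k most similar past states.
# All data below is invented for illustration.

def ghost_position(state, history, k=2):
    """history: list of (past_state, defender_position) pairs."""
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
    # Keep the k historical states closest to the current one.
    nearest = sorted(history, key=lambda pair: dist(pair[0], state))[:k]
    n = len(nearest)
    # Average the defender positions observed in those states.
    return tuple(sum(pos[i] for _, pos in nearest) / n for i in range(2))

history = [
    ((10.0, 5.0), (8.0, 4.0)),   # ball at (10, 5) -> defender at (8, 4)
    ((11.0, 6.0), (9.0, 5.0)),
    ((40.0, 20.0), (35.0, 18.0)),
]
print(ghost_position((10.5, 5.5), history))
```

The deep-learning versions described above replace the nearest-neighbour lookup with learned models of player behaviour, which is what lets them generalise to situations no historical team has faced.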
Using deep learning to have AI simulate matches began to be researched in 2017, a year after Leicester City had won the Premier League. This research is where it becomes worrying for the fairness of sports. Since Leicester won the Premier League in 2016, the technology for optimizing strategy has become much more sophisticated and will only become more accurate. The worry is that teams with much more money to spend will invest more in AI optimization than a team such as Leicester City could afford to. This issue can also become problematic in American collegiate sports. A primary reason college athletes do not get paid salaries is that recruiting would become unfair, since better-funded Division I schools could pay athletes high wages. Instead, schools with an abundance of money invest in higher-quality training facilities, locker rooms, coaches, and more. These incentives already push aspiring college athletes toward those schools, but what if optimizing strategy becomes the bare minimum needed to attract these athletes? Smaller schools already have trouble recruiting top athletes because the advantages of attending a better-funded school are so great. If less-funded schools cannot offer athletes this technology, and the faith that comes with it of competing against well-known, highly funded schools, does that technology gap start to mirror the very recruiting imbalance that schools cite as a reason not to pay college athletes?
The optimizing strategy also uses input from player recruitment of young upcoming players and other players in the sport that can improve a team’s play. This process also utilizes deep learning methods that are used in simulating matches. Looking back at Leicester City, we can again see the issues that can arise from this. Leicester City did not come into the new season after ending the previous season on a losing streak. After being close to relegation, they miraculously saved themselves to stay in the Premier League after a string of impressive wins. After saving themselves from relegation, there still was not much talk about the players themselves, as top coaches did not pay much attention to the bottom of the league table. If AI deep learning was using data from the entire league, Leicester’s best players that led to their championship win could have been discovered by the “Big 6”. Leaving Leicester City unable to match the pay that those top clubs could offer them and may have left them to be again fighting to stay in the league instead of their miraculous championship-winning season.
Do AI strategies have merit?
With the technology for AI in sports being so new, there is the question of how effective it is currently and how effective researchers hope it will become. This question relates more to the case of optimizing strategy than to injury prevention. The benefits of injury prevention are evident in the lowered rates of concussions in the NFL. Looking particularly at optimizing tactics and the process of "ghosting" teams, there are critical drawbacks. As of 2017, a vital drawback was that "the model lacked sufficient fidelity to make realistic predictions". The model could not make realistic predictions because it was unable to incorporate relevant contextual information, the current score, and the fatigue of players. To address this issue, some researchers have attempted to shift this technology from post-game or pre-game use to on-the-field use. Researchers have done this by bringing "analytics courtside for use in in-game decisions by combining data-driven ghosting with a digital sketching interface".[vii] This combination bypasses problems such as missing contextual information and score by allowing coaches to use the tool during a game.
This advancement in technology became ready for coaches to try in 2018. Since then, however, there has not been much discussion of the technology. We have yet to see coaches readily using it courtside, and there is limited research on new advancements in 2022. In comparison, much more research has been done on injury prevention through AI. The International Conference on Artificial Intelligence in Sports is expected to occur in July 2022. There are already papers selected that discuss new research in injury prevention, but none on game optimization or "ghosting".[viii] This raises the question: how actively is this technology still being researched? Is the goal of optimizing through "ghosting" too far-fetched in 2022 because sports are so variable? Do coaches and players believe that their intellect is superior to AI since they understand the human nuances of sports?
AI Effects on Youth Recruitment
One of the most significant ethical issues that needs to be addressed lies in the field of recruitment. By tracking performance data, teams can predict how a player will interact with the rest of the team once the player is recruited. This issue becomes apparent when the technology becomes widespread at college and youth levels of athletics. Through machine learning, AI can "be applied to discover which other professional players are the scouted player of interest most similar to. These solutions can even project a young player's future career performance. It can use prediction models from historical data of former rookies and their eventual successes to forecast future performances of current prospects."[ix] Currently, in 2022, this is not yet the reality, but looking at it ethically, we can begin to see issues of fairness and opportunity. Not all youth players have access to sports clubs or expensive private schools where this technology would be available. If a player does not closely resemble a current player, the system may have trouble fitting their playstyle into a simulation. Therefore, talented youth players might have more trouble being recruited in the future.
Is it Ethical to use Injury Prevention AI on Youth Athletes?
Furthering the idea of issues surrounding youth players, the area of injury prevention is a concern. As mentioned previously, injury prevention relies on studying an athlete's biomechanics to understand how their movements might put them at high risk for specific injuries. As we have seen from many private companies such as Facebook, companies will gather a user's data and sell it to third-party companies. The selling of private data leads to worries that the companies creating this technology and selling the service may collect and sell athletes' data to third parties. While this is also a concern for adult athletes, it feels especially concerning for youth athletes, who would not understand its implications and how this could be an invasion of their privacy. It is also important to note that, as of July 2018, "HIPAA does not cover health or health care data generated by noncovered entities or patient-generated information about health"[x]. It seems that HIPAA does not protect biomechanical data collected for use in machine learning.
Furthermore, injury prevention does not rely solely on biomechanical movements. Studies have shown that some risk factors for injuries take psychological assessments and stress levels into account. So, psychological data is also at risk of being stored, beyond the biomechanical data that can potentially be stored on youth athletes. This causes even more concern regarding privacy, since it seems that HIPAA does not protect this data.
Using AI injury prevention on youth athletes in 2022 is not an area of concern for the general public. There have been studies using AI on youth athletes since 2018, gathering data on soccer players aged 13 to 18. A study from 2004 found that the average injury rate per player per season is 0.40 and that the time spent recovering from those injuries accounted for 6% of a youth player's development time.[xi] With injuries so common in youth programs, and so costly to development time, introducing technology that can help prevent them seems ethical, as long as the data is used for the sole purpose of preventing injuries and aiding players' development. Since, in 2022 in America, it seems that this data is not protected, it becomes worrisome that it could be used in unethical ways. If new regulations were made to protect youth athletes from having their data exploited, though, this could be an ethical solution to a common problem in youth athletics.
Brief Future Outlook
Looking at all aspects of AI in sports, I believe there is a place for AI technology. Besides the inevitable in areas such as bookmaking and the fantasy sports realm, I think that injury prevention and player development will play a massive role in the advancements of all sports. As someone who has struggled with injuries from a young age because of athletics, I believe that if I had access to this technology, I would have struggled with fewer injuries. I learned later in life that I would favor one leg over the other, which led to me having tendonitis in one knee. However, if I had access to these new AI advancements, this problem could have been identified at the start, and I could have avoided the years of pain I have dealt with in one knee.
That benefit of injury prevention stands even without mentioning what it has done in professional sports. Through this use of AI, the NFL has reduced the occurrence of concussions in its sport by 38%; that alone should warrant the use of AI throughout more sports to prevent injuries. Sports are a beloved part of human culture, and to keep seeing world-class players involved for as long as they can be, everything that can protect their health should be done. Regarding optimizing strategy, the associated problems should fix themselves over time. Optimizing strategy through deep learning is still new, meaning that the strategies it computes are not perfect and the technology to do so is still expensive. This means that in 2022, even the teams, clubs, and colleges that can afford this technology are not yet relying on its output. By the time it begins to be perfected, the cost of using it will most likely decrease, allowing access to teams with less funding. I do not believe this will ruin the game; rather, it will increase the skill level of players and coaches. It will force players and coaches to be more adaptive as two AI-optimized strategies compete against each other. AI deep learning can potentially bring athletes and sport strategy that we have yet to see. If the technology continues to be researched, it can open an exciting new area for coaches, athletes, and spectators alike.
Endnotes
[i]. Li, Bin, and Xinyang Xu. 2021. “Application of Artificial Intelligence in Basketball Sport”. Journal of Education, Health and Sport 11 (7):54-67. https://doi.org/10.12775/JEHS.2021.11.07.005.
[ii]. Claudino, Joao. 2019. “Current Approaches to the Use of Artificial Intelligence for Injury Risk Assessment and Performance Prediction in Team Sports: a Systematic Review.” SpringerLink. https://link.springer.com/article/10.1186/s40798-019-0202-3
[iii]. “Using Artificial Intelligence to Advance Player Health and Safety.” 2019. NFL.com. https://www.nfl.com/playerhealthandsafety/equipment-and-innovation/aws-partnership/using-artificial-intelligence-to-advance-player-health-and-safety.
[iv]. “100 years of motion-capture technology.” 2018. Engadget. https://www.engadget.com/2018-05-25-motion-capture-history-video-vicon-siren.html.
[v]. “The complete guide to professional motion capture.” n.d. Rokoko. Accessed April 16, 2022. https://www.rokoko.com/insights/the-complete-guide-to-professional-motion-capture.
[vi]. Beal, Ryan. 2019. “Artificial Intelligence for Team Sports: a survey.” Cambridge Core. https://www.cambridge.org/core/journals/knowledge-engineering-review/article/artificial-intelligence-for-team-sports-a-survey/2E0E32861D031C022603F670B23B55B3.
[vii]. 2018. Bhostgusters: Realtime Interactive Play Sketching with Synthesized NBA Defenses. https://sportin-tech.com/wp-content/uploads/2020/05/2018_Seidl_MITSSAC_BhostgustersRealtimeInteractivePlaySketchingwithsynthesizedNBADefenses.pdf.
[viii]. “International Conference on Artificial Intelligence in Sports ICAIS in July 2022 in Paris.” n.d. World Academy of Science, Engineering and Technology. Accessed April 18, 2022. https://waset.org/artificial-intelligence-in-sports-conference-in-july-2022-in-paris.
[ix]. Martinez, Guillermo. 2021. “Artificial Intelligence (AI) in Sports.” Sport Performance Analysis. https://www.sportperformanceanalysis.com/article/artificial-intelligence-ai-in-sports.
[x]. Cohen, Glenn, and Michelle M. Mello. 2018. “HIPAA and Protecting Health Information in the 21st Century.” JAMA Network. https://jamanetwork.com/journals/jama/fullarticle/2682916.
[xi]. Price, RJ. 2004. “The Football Association medical research programme: an audit of injuries in academy youth football.” BMJ Journals. https://bjsm.bmj.com/content/38/4/466.abstract.
| 2023-01-08T00:00:00 |
https://aegai.nd.edu/latest/does-artificial-intelligence-increase-human-athletic-performance/
|
[
{
"date": "2023/01/08",
"position": 73,
"query": "artificial intelligence wages"
}
] |
|
AI In Architecture: The Good, The Bad, And The Unknown
|
AI In Architecture: The Good, The Bad, And The Unknown — Rascoh Studio
|
https://rascoh.com
|
[] |
Some of the most popular AI art generators allow users to enter a text-based prompt and can be used to create architectural graphics of the interior or exterior ...
|
AI Art has taken the world by storm and has created division among many people. Some artists argue that AI Art is "the fast-food of the art world", while others praise everything that AI has to offer.
There’s been a lot of buzz around this topic over the years, but conversations have exploded recently with the public release of newer neural networks like Stable Diffusion and Dall-E. Now more than ever, it’s essential to question AI and how it might be used in our practice.
How will AI in architecture impact our profession?
In this post, we’ll take a look at this discussion but through the lens of architecture and design. I’ve written about AI on this website a number of times, and it’s something that I’m deeply interested in.
With so many recent stories that question the use of AI for creating graphic and written content, I feel that it’s crucial to weigh in on the dialogue to get a deeper understanding of how architects can use these so-called “AI architecture generators.”
As with any design exercise, it’s important to start with the fundamentals.
Laying the Framework: Art, Architecture and Technology
As with any creative endeavor, it’s important to lay a foundation for discover — or, a rule book by which we can interpret something.
The relationship between producing art and producing architecture is somewhat inverse: artists tend to work from the real to the abstract, while architects work from the abstract to reality. Yet one cannot exist without the other.
Technology drives creativity, and creating art and architecture are profoundly human undertakings.
To frame this conversation, we’ll need to look at the semantics of the words “art” and “architecture” and question their meaning.
What is Art?
It’s impossible to truly define “art”, since it’s a subjective term. What one may think of as “art”, someone else may think of as worthless. We perceive art as we perceive ourselves — so therefore, art can only be defined by those who interpret it.
In a study conducted by the National Endowment for the Arts in 2012, a series of diagrams helps to illustrate how art works and its role in society. I’ve studied these diagrams in depth, and reinterpreted the one below to simplify the main idea:
(Original study found here.)
However, most people can agree that creating art is a means of self-expression and serves as an outlet to help people communicate their innermost thoughts.
For some, creating art is a cathartic release and allows us to channel and process our emotions. Be it in the form of writing, drawing, painting, producing music, or by the simple phrase of “creating something from nothing,” art completes its cycle when it is shared and interpreted by others.
With art, we can acquire new perspectives and deepen our understanding of the world and how the world appears through the eyes of other people.
What is Architecture?
At its core, architecture is the art and science of human accommodation. The ultimate goal of architecture is to create something that exists in concrete reality — within the third dimension.
Art is hard to define, as is architecture.
However, by nature, architecture is a coalescence of science and art — and defining the architectural “design process” is perhaps more linear.
To explain architecture in the most straightforward way: we start with an idea, gather information (research), weigh our research against constraints (building codes), create a set of instructions for builders, and oversee the construction of our ideas. When it's all said and done, three-dimensional space is created.
Stuart Graff, CEO of the Frank Lloyd Wright Foundation, defines architecture in a more poetic sense:
“For Wright, architecture isn’t buildings. Buildings are a form of architecture. Architecture is this sense of continuity and connection with everything around us. You might think of architecture as an ecosystem in Wright’s world… how we relate to the things around us, and how they relate to us.” Stuart Graff, CEO of the Frank Lloyd Wright Foundation
Source: https://www.secondstudiopod.com/
Humans are three-dimensional beings, and if authentic architecture should support the fundamental requirement of “human accommodation,” the end result must exist in the third dimension.
What are AI Architecture Generators?
AI Architecture generators are software programs or online tools that use artificial intelligence (AI) to create or generate artwork that can be utilized in architecture. Some of the most popular AI art generators allow users to enter a text-based prompt and can be used to create architectural graphics of the interior or exterior of a building.
To preface this even further, check out my other post:
How to Use MidJourney AI to Create Architecture Concept Renderings Here's a quick guide on how to use MidJourney, a free-to-use AI Art Generation Bot. Let your imagination run wild with Midjourney Architecture concepts! Read More →
Some AI art generators use machine learning algorithms to analyze existing art and generate new works based on those patterns. Others use deep learning techniques to generate original images from scratch.
AI art generators even allow you to input your own images or ideas, and the AI system will create a new work based on those inputs.
Ultimately, AI art generators are designed to mimic an artist’s style, using algorithms to generate original artwork based on parameters input by the user.
As a result, the images produced by these tools will vary in quality and originality.
There are many areas in which AI could be integrated into the architectural design process — from creating visual concepts to writing code for parametric design. So the phrase “AI Architecture” is somewhat nuanced.
“AI in Architecture” I’ll be using the phrase “AI Architecture Generators” or “AI in Architecture” to refer to AI used in architectural design.
Creating architecture has historically been a time-consuming, iterative process. So to suggest that we could plug in a query to AI and have it “design” something for us seems a bit far-fetched…
But that’s also what we thought about creating art before the latest “AI Art Generators” hit the mainstream in 2022. And now, people are questioning the future of AI in creative industries.
Many artists feel that their work is being ripped off or that their styles are being “stolen” by AI art generators. These discussions raise an essential point for our architecture and design industry, and now is the time to weigh in on the conversation.
But before we get into the downsides of AI, I’ll cover some of the benefits that using AI in architecture provides us.
AI In Architecture: The Good
For over 50 years, architects and designers have integrated computers into nearly every aspect of the design process. From creating concept sketches to developing construction documents, technology plays a massive role in the quality of work we produce. Utilizing the latest technology to design architecture is not only smart but also necessary.
That’s not to say that traditional design methods are becoming less important in the design process… but it’s becoming hard to ignore the rapid advances in BIM technology, generative design, fabrication, and architecture visualization.
Within the last five years, we’ve become accustomed to using new software like Enscape, Dynamo, and Python for Revit. Or, Grasshopper and Ladybug for Rhino. You name the animal, and a software developer might name the latest and greatest parametric design program after it!
Even Adobe software — a staple for nearly any creative profession — is starting to utilize AI in photo editing software like Lightroom to help streamline the editing process for photographers.
2022 was an impressive year for AI.
With the release of datasets like GPT-3 and Stable Diffusion, AI has finally found its way into the visualization world.
And for the practice of architecture, being able to visualize photo-realistic architectural concepts in a matter of seconds has, frankly, blown people’s minds.
Using AI to Create Architecture Concepts
MidJourney has quickly become one of the world's most popular AI art generators. Architects and designers have found ways to use MidJourney for architecture concept renderings, and even some of the largest firms in the world are using the software to help clients visualize buildings.
MidJourney uses an AI model called Stable Diffusion to create its images. As Stable Diffusion models continue to develop and "train" on the billions of images in their datasets, the quality of the images they produce is clearly being refined.
The images below are generated with four MidJourney versions released in 2022.
I used the following prompt to create these images in the latest four versions of MidJourney:
“modern villa in a forest filled with rain and fog, 8K, detailed, raining, realistic, photographed by Mike Kelley”
MidJourney Version 1 – Stable Diffusion, March 2022
MidJourney Version 2 – Stable Diffusion, April 2022
MidJourney Version 3 – Stable Diffusion, July 2022
MidJourney Version 4 – Stable Diffusion Beta, November 2022
Looking at the images above, it’s apparent how AI models are rapidly evolving and producing higher-quality images. The “architecture” quality is also becoming more believable and accurate.
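As an aside, the prompt itself is just a comma-separated list of a subject and style modifiers. As a sketch, prompts like the one used above can be assembled programmatically (the `build_prompt` helper below is hypothetical, not part of MidJourney or Stable Diffusion):

```python
# Hypothetical helper for assembling MidJourney-style text prompts:
# a subject followed by comma-separated style modifiers.
def build_prompt(subject, modifiers=(), photographer=None):
    parts = [subject, *modifiers]
    if photographer:
        parts.append(f"photographed by {photographer}")
    return ", ".join(parts)

prompt = build_prompt(
    "modern villa in a forest filled with rain and fog",
    modifiers=("8K", "detailed", "raining", "realistic"),
    photographer="Mike Kelley",
)
print(prompt)
# -> modern villa in a forest filled with rain and fog, 8K, detailed, raining, realistic, photographed by Mike Kelley
```

Keeping prompts structured like this makes it easy to vary one modifier at a time when comparing model versions.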
AI in Architectural Design Software
Architects and designers are also finding other ways to integrate AI into the design process, such as using ChatGPT to create custom scripts for modeling in generative software.
Even for software like Revit, plugins are being created to utilize text-to-image AI technology to redefine how we create architectural renderings. If you want to see this in action, check out this video on “How to Create AI Renders Directly from Revit” by Ricardo Morales Quirós.
He’s using a plugin called Veras for Revit, created by EvolveLAB.
Rather than plugging a text prompt into AI like MidJourney, this software integrates directly with Revit 3D views. For any 3D view you set up in Revit, you can plug in a prompt to reimagine that view’s entire look and feel. Changing materials, lighting, atmosphere, and more — happens on the fly.
Although this technology is still very new, it’s incredible that text-to-image rendering capabilities are becoming more grounded and less “random.”
And what’s more, the longer these AI models train, the more sophisticated they become. When AI can teach itself, the evolution will be exponential.
💡 Jon’s Take: The Good Side of AI in Architecture
There are many other ways architects and designers utilize AI in the design process. This technology will only evolve and carve new methods into our practice — ideally, for the better.
If you’re curious to learn more about how architects can use AI for design, be sure to check out some of my other articles:
Best AI Art Generators For Architects and Designers AI Art Generators are becoming valuable tools for architects and designers, but which one is the best to use? Here are some of the best AI Art Generators for Architects and Designers, ranked and reviewed based on price, accessibility, features, and more! Read More →
The integration of AI into the architectural design process is going to be immensely helpful for streamlining the design process. Being able to rapidly produce design concepts or material renderings means that we’ll be able to shift our focus back to more critical tasks in the design process.
We’ll be able to use AI to take care of mundane production tasks and realign our focus on the critical thinking aspect of architectural design.
But of course, there are two sides to every coin — and some people are starting to feel that the creative industry as a whole is on the precipice of a robot-controlled future.
AI in Architecture: The Bad
Depending on who you ask, AI is actually quite terrifying.
Luckily for us, AI doesn't currently have the capability of being self-aware. In other words, it doesn't know that it's creating something.
However, this rapid advancement of AI makes us question our own "humanness."
The concern lies in the efficiency of AI, and the sheer volume of content it can produce in just a matter of minutes. Some would even say that AI in architecture threatens to "replace" humans.
But now, it’s more important than ever to reflect on humans’ relationship with architecture and art.
As AI art generators become more sophisticated, artists have begun voicing their concerns about the legality of where AI is collecting its training data from.
Is AI “Stealing” from Artists?
In this article, the author takes a look at how Lensa is using Stable Diffusion and “ripping off” the styles of human artists.
Through the lens of architecture, similar questions are being raised. Another post on Archinect questions the "Real Threat of Artificial Intelligence to the Architecture Profession."
One interesting take from the comments on that article is the prospect that AI could make life more difficult for entry-level designers in practice. With AI able to quickly handle some of the more mundane tasks that might otherwise be handed to, say, an intern, it raises even more questions about how newcomers will learn the fundamentals.
But wouldn’t that just “free” up the time for somebody to learn something else more valuable? For example, rather than spending 8 hours creating one concept rendering, that entry-level designer could focus on a more technical side of design, such as creating construction details — something that AI is not sophisticated enough to know how to do.
Ultimately, there are so many moving parts in architectural design that we can certainly use the help of AI to save time. If we can save time producing the mundane, we can reallocate our time to learn something that is currently impossible for AI to assist with.
When AI evolves and DOES learn how to complete some of the more technical tasks in architecture, we’ll continue to pivot and find new ways to produce better designs.
It always ties back to the fundamentals.
AI is not “Theft”
Some people believe that AI is not “theft”, and AI models have the right to access the same information that we, humans, can access.
In this Reddit Post, the author claims:
“I think the view that AI art is “theft” mainly comes from misunderstanding how AI creates the images or what current artists take for granted when it comes to the creative process, or mere frustration with the ease in which new images are created.”
Looking ahead, here’s the hot-topic question…
Will AI Replace Architects?
Based on the questions above, someone could raise a reasonable concern about the possibility of AI becoming sophisticated enough to "steal" architects' designs; but to outright "replace" the process of creating architecture seems far-fetched for now.
Artists are concerned about AI “stealing” their styles — and the evidence is hard to argue. We’ll need to start questioning the same for architecture, as AI becomes more advanced and continues to use existing architectural precedents as its inspiration.
However, "steal" is undoubtedly a strong word to use, especially given that any "research binge" architects and designers indulge in for inspiration nearly always involves skimming through books or the internet to find graphic inspiration.
But to make sense of AI in architecture, we’ll need to look deeper into exactly how AI art generators work.
If you’re curious, be sure to read my other post:
How to See Original Stable Diffusion Images Using Software Called “Datasette” How Does AI Collect Data? Here’s a tool to see 12 million original images scraped by Stable Diffusion AI. Read More →
One sentiment I often use in architectural design is “if you know where it comes from, you’ll know where to take it.”
As with any creative endeavor, it’s vital to try to gain an understanding of its roots by constantly asking questions — especially when it comes to using AI in architecture and design.
As mentioned earlier, art influences architecture — so if the art world is affected by AI, then inherently, the world of architecture will also be impacted.
💡 Jon’s Take: The Bad Side of AI in Architecture
To be honest, I’ve never felt more conflicted about using technology like AI before. It often feels like “cheating” in many ways, with the sheer volume of architectural iterations that can be produced within minutes.
That raises the point in my primary concern with using AI…
AI produces A LOT of “Worthless” content.
When I’m pumping out images in MidJourney, a lot of sifting is involved. For every 20 MidJourney images that pop up on my Discord screen, only one image is actually useful.
Now, scale that — and you’ve got a lot of useless images.
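To put numbers on that sifting overhead, here is a back-of-the-envelope sketch. It assumes the roughly 1-in-20 hit rate described above and four images per MidJourney prompt; both figures are rough assumptions from my own use, not measurements:

```python
import math

def prompts_needed(useful_images, hit_rate=1 / 20, images_per_prompt=4):
    """Rough count of prompts to run for a given number of keeper images."""
    total_images = math.ceil(useful_images / hit_rate)  # generations required
    return math.ceil(total_images / images_per_prompt)  # prompts to type

print(prompts_needed(10))  # 10 keepers -> ~200 generations -> 50 prompts
```

Even at a modest target of ten usable concepts, that is fifty prompts' worth of images to review and discard.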
Of course, it can be argued that that’s just part of the iterative design process. You’ve got to produce a lot of content, put it all on the table, and critically determine the best iteration to move forward with, given the project’s scope.
My concern is not necessarily about the quality of the content that AI produces — because, let’s be honest… the content will get better as AI evolves.
My concern is about the second layer in the iterative process — critical thinking.
With the release of ChatGPT, my social feeds have been completely inundated with new articles, tips, tricks, and headlines like "How to Use ChatGPT for (anything)," or "500 Prompts to Use for ChatGPT," or "I Asked AI to Design 60 Buildings for Me in 1 Hour," and the list goes on.
Just because AI can produce a lot of content doesn’t make it meaningful.
What adds that layer of “meaning” is how we distill that content into something valuable — something that somebody can use to improve a specific process.
By now, we’ve seen the power of AI and the striking images it can produce. We’ve seen the power of ChatGPT and how it can crank out paragraphs faster than we can think of sentences. But the process shouldn’t end with the product created by AI.
AI should serve as a tool to inspire the design process, not become the design process.
Don’t let AI “Be” the Creative Process, but rather a tool to support creativity.
The moment we become too reliant on AI to think critically for us will be an exponential downfall for every creative industry. When I see headlines like “ChatGPT Wrote 20 Articles in 15 Minutes”, I’m left to question those articles’ quality.
I can see a future where AI produces nearly every article published online — and the only human interaction in making the content is writing the prompt, then copying/pasting it into a post online.
I’ve seen this happening all over the place in architecture and design schools with MidJourney images — and even beyond the boundaries of the digital universe.
Now we’re seeing AI everywhere with written content.
The responsibility is in our hands, not AI. The more content we consume, the more we need to question it. And with the implementation of AI in content production, there will be a massive influx of “thin” content in the future.
AI In Architecture: The Unknown
A lot needs to be discovered about the future of AI for art and architecture. It’s difficult to predict the trajectory of technological advancement in AI — but given the information that I shared earlier, it’s evident that AI is rapidly evolving and becoming better.
So many other industries outside of architecture and design have used AI for years, like healthcare and transportation. But this is one of the first times in history that AI has directly impacted how we conceive architecture and design ideas.
Here are 3 “unknowns” that we should pay attention to for the future of AI and architecture:
1. Ethical Concerns: There are questions about the potential for AI to be used to perpetuate social biases or to infringe on privacy. It is not clear how these issues will be addressed in the future, or what the consequences of ignoring them might be.
2. Job Market: There are concerns that AI could lead to widespread unemployment, as machines take over tasks that are currently done by humans. On the other hand, there's also the possibility that AI could create new job opportunities, as businesses seek to integrate these technologies into their operations.
3. Societal Impacts: It's not clear how the widespread use of AI might change the way people interact with each other or the way that they perceive their own role in the world.
It’s really exciting to consider the potential possibilities with AI in architecture, but it’s also important to approach the development and use of these technologies with caution and care.
As far as ethical concerns go, the biggest concern right now is about whether or not AI should be regulated.
Should AI be Regulated?
People have suggested that AI should be regulated in what online sources it pulls data from — websites or social media. In fact, The Harvard Business Review states that “AI Regulation Is Coming” and that people have been growing more concerned with the use of AI over the past decade.
The question of whether AI art should be regulated is a complex one, and there are valid arguments on both sides.
On one hand, some people believe that AI art should be regulated in order to protect the integrity of the art world and to ensure that artists are properly credited for their work.
They argue that AI art has the potential to mislead audiences, who may not realize that a work was created by a machine rather than a human artist. Additionally, they argue that AI art could be used to deceive collectors and buyers, who may be willing to pay more for a work if they believe a human artist created it.
On the other hand, others argue that applying regulations to art or any creative endeavor would be a significant step in the wrong direction.
Art is about expression — and in a lot of ways, there “are no rules” when it comes to creating art.
They point out that other art forms, such as digital art, have yet to be regulated and that AI art should be treated the same way. They also argue that regulating AI art could stifle creativity and innovation and prevent artists from exploring new techniques and styles.
Ultimately, the decision of whether or not to regulate AI art will depend on the specific context in which it is being created and exhibited.
It may be necessary to establish clear guidelines and standards for the use of AI in the art world, in order to ensure that artists are properly credited and that audiences are not deceived.
💡 Jon’s Take: The Future of AI in Architecture
AI is not the end but the means.
We should use AI to help inspire our design process or to help us break through creative blocks. But we can’t simply exchange something so innately human — critical thinking — with the “shiny object syndrome” that AI is giving us regarding productivity.
It’s great that we now have tools to help speed up our iterative design process, but that should only open the doors to being able to distill that content into something truly groundbreaking…
Regardless of where you personally stand on the conversation, one thing is apparent.
AI is here to stay.
It will continue to find uses in other creative industries to help increase productivity and inspire new ideas.
If you want a bit of solace when thinking about the future of AI, remember that AI isn’t self-aware. It doesn’t know of its existence or what it is creating. As humans, we have the luxury of self-awareness — and self-awareness is arguably impossible to replicate using technology.
AI is part of the creative process, not THE creative process.
Regarding AI art generators specifically, this is an excellent time to reflect on what we value when it comes to art and what we value in designing architecture. No matter how AI evolves, there will always be one key ingredient that makes designing architecture a uniquely human ability.
The design process!
Curious how architects can integrate AI into the design process? Check out this post:
10 Ways Architects Could Use AI in the Design Process Ever wondered how architects of the future could integrate AI in the design process? Here are 10 areas in the architectural design process that might see AI uses in the future. Read More →
If we want to "play the game" with AI in architecture, we need to get a better understanding of how AI works, and keep an open mind about where technology can take us in the future.
Those who don’t want to “play the game” with AI design certainly don’t have to.
But be aware that this technology is rapidly evolving, and now is the time to build a basic understanding of how it all works.
Here’s what I’ll say for anybody who is concerned about the future of AI in architecture:
Do your research.
Like with any tool, becoming a “master of AI” requires understanding exactly how we can use it in the design process. Rather than being intimidated by the technology, embrace it. Learn about it. Discover the roots of how it was created, who created it, and how it operates.
Don’t be intimidated. Be curious.
The way by which architecture is created is directly connected to technology.
Think about the evolution of design technology and how it has shaped how we create architecture. CADD software has made it exponentially faster to produce design concepts and drawings, but this technology has only existed for the past half-century.
Yet, the seeds of architecture were planted thousands of years ago — and many of the original architectural precedents of our ancestors still inspire modern architecture today.
Technology is ultimately a catalyst that will help us reimagine the ways in which we create architecture.
Question the meaning of “art” and “architecture.”
Architecture shouldn’t simply appeal to our visual sense. It should strive to evoke deep feelings and thoughts in those who are experiencing it.
Computer screens can’t replace physical architecture, and to call any image produced by MidJourney or other AI graphic software “architecture” would be misleading — and frankly, untrue.
Architecture isn’t solely about producing pretty pictures or fancy renderings.
We need to constantly question our process.
How was this architecture created? Why was this architecture created? What does this piece of architecture mean to different people based on their unique walks of life?
What we choose to value in the architectural design process is important. Architecture should tell stories and connect with people.
AI is great for tossing ideas around and expanding inspiration, but at the end of the day, having a unique creative process will set people apart and maintain the “humanness” behind art and design.
The key is to learn how it works, why it works, and how to integrate it into your creative process. Embrace AI as a step forward rather than a force to replace us.
Don’t be discouraged by AI in architecture — use it as a tool to reimagine the future.
Jon Henning Hi, I'm Jon. I write about emerging technology in architecture, engineering and design, and I want to help you push boundaries with the latest tech trends in the AEC industry.
| 2023-01-08T00:00:00 |
2023/01/08
|
https://rascoh.com/ai-in-architecture-good-bad-unknown/
|
[
{
"date": "2023/01/08",
"position": 76,
"query": "artificial intelligence graphic design"
}
] |
Artificial Intelligence vs Computer Science 2024
|
Artificial Intelligence vs Computer Science 2024
|
https://blog.kalvium.com
|
[] |
Automation will revolutionise the way we live and work, CS is going to improve ... Job Displacement: Artificial intelligence has the potential to automate ...
|
Hello young techies! Are you wondering what the future holds for artificial intelligence vs computer science? Well, let’s break it down! AI is all about creating machines that can think and act like humans. They can analyse data, learn new things, and make decisions. Pretty cool, right? Meanwhile, computer science solves problems and innovative technologies with the help of computers. It’s the basis of everything we do with computers, from making video games to creating medical databases.
So which one will rule the future? The answer is BOTH! Artificial intelligence vs Computer Science is like peanut butter and jelly; they go hand in hand. AI needs computer science to function, and computer science needs AI to understand humans. Don’t worry about choosing one over the other. Embrace both and see where your skills take you. Let’s dig into the differences between Artificial intelligence vs Computer Science and see how they influence our lives and work.
Aspect | Artificial Intelligence | Computer Science
Overview | Create intelligent machines that fully eliminate human instructions | Design and develop computer systems and applications
Sub-branches and Disciplines | Machine learning, natural language processing and robotics | Algorithms, data structures, programming, and computer architecture
Industry Scope | Healthcare, finance, and transportation | Software development, data analysis, and cybersecurity
Real-world application | Automate tasks and improve efficiency | Create new software, apps, and systems
Future | Automation will revolutionise the way we live and work | CS is going to improve and streamline existing processes
Artificial Intelligence vs Computer Science: What is AI?
No, AI is not a robot from outer space trying to take over the world (at least not yet). "Can machines think?" This question inspired the father of computer science, Alan Turing, to devise a test of whether a computer's text responses can be distinguished from a human's. AI is the study of creating machines and computer programs that think and act like humans. It helps us use computers to understand how humans think and make decisions.
AI doesn't have to be just like humans, though; it can use its own methods to solve problems and make choices. AI is a combination of science, engineering, and computer programming. It's cool because it can help us do things faster and better! So how does AI work? It's all about programming. Scientists and engineers write special code that teaches machines to behave more like humans. It's like teaching you how to behave and talk, but with more math and computer language.
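To make "teaching a machine" concrete, here is a toy sketch of the idea: the program is shown examples of the rule y = 2x, and gradient descent nudges a single weight until the machine has "learned" the rule. This is a deliberately minimal illustration of machine learning, not a real AI system:

```python
# Toy "learning" loop: fit the weight w in y = w * x from example data.
data = [(1, 2), (2, 4), (3, 6), (4, 8)]  # examples of y = 2x
w = 0.0      # the machine's initial guess
lr = 0.01    # learning rate: how big each correction step is

for _ in range(1000):
    for x, y in data:
        error = w * x - y      # how wrong the current guess is
        w -= lr * error * x    # nudge w to shrink the error

print(round(w, 2))  # converges to 2.0
```

The machine was never told the rule "multiply by 2"; it discovered a weight that fits the examples, which is the essence of learning from data.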
Should we worry? Is AI here to steal our jobs or replace us?
Perhaps! But AI is here to help us and make our lives a little easier. For example, AI can analyse unlimited amounts of data to find patterns and make predictions. It can drive you home safely and fly fighter jets on battlefields. The next time you see an automated machine, think of it as a more intelligent and helpful friend!
PROS of Artificial Intelligence
Artificial intelligence (AI) is a technology that allows machines to perform tasks that would usually require human intelligence. While some people are uneasy about the potential negative impacts of AI, there are many pros to this technology.
1. Increased Efficiency and Productivity: One of the biggest pros is easy availability. Think about the time when you have a question or need secondary help. You can ask ChatGPT, Google or Siri and boom, you have an answer. No need to go to the library or ask another human. It's not just about answering questions. AI can schedule appointments, find the best route to your destination, and order your favourite pizza. Hence, AI can be the alternative if you need easy ways to get things done. While some of these resources are free, others require you to pay a nominal fee. Additionally, the laws and regulations surrounding AI vary from country to country, so it is critical to familiarise yourself with the rules and restrictions that apply to you.
2. Improved Decision-Making: It might surprise you that AI helps us in many ways that make our lives easier and more convenient. For example, when you're shopping online and get personalised recommendations based on your previous purchases, that's AI at work. When you're using Maps on your mobile to find the best route to your destination, AI is helping calculate the quickest and most efficient path in traffic. AI is also utilised in healthcare to help doctors diagnose and treat patients, and in transportation to make roads and highways safer. It's even used in agriculture to help farmers grow and harvest crops more efficiently.
3. Zero Errors: Have you ever made a mistake on a math test or misspelt a word on a paper? Did you know that artificial intelligence can help your life be error-free? One of the pros of using AI is that it can analyse big data and make decisions based on that information.
4. Applications in Healthcare: AI possesses the potential to revolutionise the medical field and make healthcare hassle-free. One way AI functions in medicine is through the analysis of medical images. AI can quickly go through X-rays, CT scans, and MRIs to help doctors make diagnoses.
5. Managing Repetitive Tasks: We are all tired of doing the same mundane tasks over and over again. Imagine never having to sort through emails or input data into a spreadsheet again. AI can work around the clock and not make human errors.
CONS of Artificial Intelligence
Artificial intelligence (AI) is a technology constantly transforming our lives. Let’s explore the CONS of AI and how it might affect the battle between Artificial Intelligence vs Computer Science.
Costly Price: AI can help us do things faster and more efficiently, but it can be expensive to set up and maintain. These machines are complicated to train and use, and it costs a lot of money to keep them running.
Lack of Human Replication Ability: Many people think that machines will never be as empathetic as humans because they can’t feel emotions or have a moral understanding of right and wrong. Even though machines can get better at specific tasks over time and may be more efficient than humans, they will always lack those innate human qualities.
Experience is Valueless: People learn from what they do and incorporate their experience to do better in the future. Machines don’t have that same ability; they don’t react to the world around them the way people do.
Job Displacement: Artificial intelligence has the potential to automate certain tasks and processes, which could lead to job displacement in certain industries. It’s important for individuals and organisations to stay current on the latest developments in AI and be proactive in developing the skills and knowledge necessary to thrive in the changing job market.
Ethical Concerns: AI technology relies on algorithms that help machines think and act like humans. But sometimes these algorithms encode human bias, which can lead to unfair or discriminatory outcomes, especially for marginalised people.
Pros of Artificial Intelligence          | Cons of Artificial Intelligence
Increased Efficiency and Productivity    | Costly Price
Improved Decision-Making                 | Lack of Human Replication Ability
Zero Errors                              | Experience is Valueless
Applications in Healthcare               | Job Displacement
Managing Repetitive Tasks                | Ethical Concerns
Artificial Intelligence vs Computer Science: What is CS?
Artificial Intelligence vs Computer Science is the talk of the town. Generations of families have been opting for Computer Science due to job security, future scope, and the sheer thrill of technological innovations and advancements. It involves understanding the hardware (the computer itself) and the software (the programs and apps we use). It’s like being a detective but with computers instead of clues.
However, it’s not just about deciding which is better, artificial intelligence or computer science. Imagine you have a big blank canvas and a box of crayons. CS is like the crayons, giving you the tools to draw and create whatever you can imagine. Imagine being able to design your own video game, create a website, or even build a robot! All of these things are possible with computer science.
PROS of Computer Science
Computer Science is a constantly evolving domain that offers many exciting opportunities and benefits. Let’s explore the pros of computer science and see why it is a great career choice for those interested in technology and problem-solving.
Generous Pay: Let’s talk about one big perk: the pay! Computer science professionals are in high demand. Companies are looking for people who can create new technologies, fix problems, and make their businesses run smoothly. Because of this demand, computer science professionals get paid well.
Versatility: Every industry today needs people skilled in IT. You find computers in banks, magazines, travel agencies, and hospitals. Almost every company or organisation offers online services, and someone has to be available to ensure everything works smoothly.
Career Opportunities: The world of technology is constantly changing and improving, so the CS industry is growing by leaps and bounds. Whether you’re working on creating new technologies or improving existing ones, you’ll always have the opportunity to learn and grow.
Adjustable Scheduling: Did you know that many people who work in IT don’t have a set schedule like most jobs? You can set your own schedule and often work from home too! It is one of the cool things about working in the CS domain.
Fixing the World: Sometimes, people complain about how much technology has changed our lives. But have you ever thought about what the world would be like without it? One area where technology has made a real difference is healthcare. For example, you can create apps that help people track their health and remind them to take their medicine.
CONS of Computer Science
Here are some potential cons of Computer Science,
Stress: If you work in the CS field, you’ll have to be on your feet and come up with solutions to problems that arise unexpectedly. You might have to solve problems you’re not used to dealing with or that aren’t part of your job. It can be stressful, but it’s also an exciting part of working in CS.
Extended Periods of Work: If you’re transitioning to a career in computing, you should be prepared to work hard and put in long hours. It might not always be easy, but it’s a big part of what it takes to succeed in this field. Just because you have flexible hours doesn’t mean you’ll have less work to do. Be ready for the hustle if you want to be successful.
Culture of Lies: People might ask you to fix problems like pop-up ads, a bloated cache, or even spyware and phishing attempts. Sometimes, they might not tell you exactly what happened or how they caused the problem, so you’ll have to figure it out on your own. It’s kind of like being 007, but with computers.
Continuous Learning: The CS industry is evolving rapidly. You might have to work extra hard to keep learning and stay on top of all the changes. It can be a lot of work, but it’s also exciting to be part of such a fast-paced and dynamic industry.
Medical Issues: One of the main issues IT professionals face is eye strain from staring at screens all day. The blue light emitted by screens can cause vision problems and has even been linked to age-related macular degeneration.
Pros of Computer Science | Cons of Computer Science
Generous Pay             | Stress
Versatility              | Extended Periods of Work
Career Opportunities     | Culture of Lies
Adjustable Scheduling    | Continuous Learning
Fixing the World         | Health Issues
What’s the Verdict of Artificial Intelligence vs Computer Science?
Artificial Intelligence vs Computer Science? Well, here’s the deal: AI is a branch of computer science, like a younger sibling to a bigger, older one. They are distinct but closely connected subjects. AI is about making machines and computer programs that can do things people are good at, like learning, making decisions, and solving problems. Computer science is the foundation for AI and many other fields. So if you study CS, you have numerous options for what you can do with your degree. You can specialise in AI while exploring other areas like video game design, data analysis, or medical technology. The sky’s the limit!
On the other hand, if you focus on AI as your area of study, you might not have as many options for what you can do with your degree. It’s like eating only one type of food: after a while you might get bored and wish for more variety. But if you study CS, you can sample different foods (i.e., specialisations) and see what you like best.
Here’s a tip: think about what you’re most drawn to. If you want to work on projects involving machine learning, natural language processing, and other AI technologies, then you might want to focus on AI. But if you would rather study computing broadly, create software and other applications, and dive a little deeper into the fundamentals, a degree in computer science might be perfect.
Still confused about which degree or college to choose? Check out our Premium Counselling Session for FREE, where you will receive assistance from our expert academic advisors in making informed decisions about your career and college choices.
Frequently Asked Questions (FAQs)
Which is best Computer Science or Artificial Intelligence?
It’s not accurate to say that one field is inherently “better” than the other, as both computer science and artificial intelligence are important and valuable fields of study. Computer science is a broad field that encompasses the study of computers and computational systems, including the design and development of software, hardware, and other technologies.
Artificial intelligence, on the other hand, is a subfield of computer science that focuses specifically on the development of intelligent computer systems that can think and act like humans. Both fields are important and have numerous applications in a variety of industries. The best choice for you will depend on your interests, strengths, and career goals.
Is AI the future of Computer Science?
AI is a rapidly expanding area of computer science that aims to replicate human intelligence in machines. It can be applied in various industries, including healthcare, finance, and transportation, and is expected to have a significant impact on the future of technology and society. While it is hard to predict the exact ways in which AI will shape the future, it will surely be a key player in the creation and use of new technologies.
| 2023-01-09T00:00:00 |
2023/01/09
|
https://blog.kalvium.com/artificial-intelligence-vs-computer-science/
|
Robots Replacing Humans | Robot Planet
|
Robots Replacing Humans
|
https://robotplanet.com.au
|
[] |
One expert predicts that 40% of all current jobs will be replaced within 15 years by robots controlled by artificial intelligence.
|
Are robots going to replace people’s jobs in future
| 2023-01-09T00:00:00 |
https://robotplanet.com.au/blog/will-robots-replace-human-jobs/
|
|
AI products for growing applications.
|
Careers at Inworld
|
https://inworld.ai
|
[] |
We only hire technical people for all roles: engineers, scientists, or similarly technical folks. We are here to serve technical builders and strongly believe ...
|
Solve the way to evolve.
We build products to empower those who build for consumers. We are a small, highly specialized team, solving the way for AI to autonomously scale and evolve with users.
We only hire technical people for all roles: engineers, scientists, or similarly technical folks.
We are here to serve technical builders and strongly believe that technical depth is a requirement to allow deep empathy with our customers and enable effective communication internally.
| 2023-01-09T00:00:00 |
https://inworld.ai/careers
|
|
8 Ways Artificial Intelligence (AI) Can Help You Improve ...
|
8 Ways Artificial Intelligence (AI) Can Help You Improve Productivity
|
https://aijourn.com
|
[
"Saurabh Sharma",
"Saurabh Sharma is a Digital Marketing Executive at Taggbox, a leading UGC platform. He has three years of experience in the information technology industry. He spends his time reading about new trends in digital marketing and the latest technologies."
] |
... and creating content can be automated within minutes. Without you knowing, you may have used AI in completing your jobs. For instance, you are using AI ...
|
Artificial Intelligence (AI) comes as a blessing in this century. With various capabilities, AI-powered tools have helped workers to stay efficient in a saturated workforce. And, of course, we can expect more from this technology in the future.
The automated nature of Artificial Intelligence comes to the rescue to eliminate manual work. Tasks from inputting data and generating reports to forecasting demand and creating content can be automated within minutes.
Without knowing it, you may already have used AI to complete your work. For instance, AI content marketing tools automate content creation, letting you write blog posts, newsletters, and other marketing content quickly.
To learn more about what AI can do, let’s discover the eight ways Artificial Intelligence (AI) assists your work and helps you stay productive. Keep reading!
1. Forecast Demand Accurately
Accurate demand forecasting is essential for every business in any field. AI and machine learning systems can test the possible outcomes of specific strategies, including production and marketing plans.
Having AI-powered systems in the workforce can also help generate an accurate analysis of any project. You can predict how your new product or strategy resonates with your target audiences and estimate the results.
Moreover, machine learning and AI implementation can reduce overall inventory by up to 50%. The giant retailer Wal-Mart shows how AI and machine learning can compress a month’s worth of work into just 24 hours.
They use automated drones that can fly through their warehouse. The drones then scan items and track misplaced items automatically so workers can take immediate action.
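The kind of demand forecasting described above can be sketched in miniature. The following is a toy statistical baseline, not any retailer’s actual system; the weekly sales figures and the three-week window are invented for illustration.

```python
# Toy demand forecast: predict next-period demand as a simple moving
# average of recent sales history. Real AI systems use learned models
# that also account for seasonality, promotions, and external signals.

def moving_average_forecast(history, window=3):
    """Forecast next-period demand as the mean of the last `window` periods."""
    if len(history) < window:
        window = len(history)  # fall back to whatever history exists
    recent = history[-window:]
    return sum(recent) / len(recent)

weekly_sales = [120, 135, 128, 140, 150, 145]  # units sold per week (made up)
forecast = moving_average_forecast(weekly_sales)
print(f"Forecast for next week: {forecast:.1f} units")  # mean of 140, 150, 145
```

Even this crude baseline illustrates the core idea: the system looks at past demand and produces a number a planner can act on, which a trained model then refines.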
2. Automate Text Creation
Many believe that search engines love long and comprehensive blog posts. However, producing one with good quality can suck up a lot of time. You may spend a day creating, proofreading, and editing a blog post.
Fortunately, many text-producing tools are now available on the market. They can help you generate ideas and produce more sentences in no time. For instance, they can generate 50+ words that relate to your main idea.
Moreover, proofreading and plagiarism checking are vital to building your site’s credibility. Writing tools can help you improve the quality of your writing effortlessly.
Some recommendations for AI-powered text creation tools include Sassbook, Jasper, and AIContentLab. To further enhance your writing, you can use Grammarly, Hemingway, WordTune, Writer, or other platforms.
3. Predict Maintenance
Before implementing AI technology, companies commonly set schedules to check their tools’ conditions periodically. It’s, in fact, an effective way to maintain your tools’ performance, but AI systems can make it better.
Rather than scheduled maintenance, the AI system will monitor your tools and notify you when care is needed. You can even program your machines to diagnose their own problems.
You need to employ big data algorithms to develop predictive maintenance. Doing so will also allow you to anticipate upcoming equipment breakdowns.
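The monitor-and-notify idea can be illustrated with a minimal sketch. The single vibration sensor and the hand-picked threshold below are invented for the demo; production systems learn baselines and thresholds from historical sensor data.

```python
# Minimal condition-based maintenance check: flag a machine for service
# when its latest sensor reading drifts far from its recent baseline.

def needs_maintenance(readings, threshold=10.0):
    """Return True when the latest reading deviates from the mean of the
    earlier readings by more than `threshold` (needs >= 2 readings)."""
    baseline = sum(readings[:-1]) / len(readings[:-1])
    return abs(readings[-1] - baseline) > threshold

vibration_mm_s = [2.1, 2.3, 2.0, 2.2, 14.8]  # last reading spikes
print(needs_maintenance(vibration_mm_s))      # the spike triggers an alert
```

A real predictive-maintenance pipeline replaces this fixed threshold with a model trained on failure histories, but the workflow is the same: stream readings in, compare against expected behaviour, notify before the breakdown.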
4. Easy Data Extraction and Review
For years, data entry, extraction, and review have been time-consuming and can be overwhelming. Digital devices have been a great help for businesses to speed up their processes.
Thankfully, AI and machine learning technology can accelerate the processes even more. A corporation can automatically collect, manage, review, and record internal and client data. The sophisticated systems make it easier for you to present and understand your progress.
The automated document extraction by AI programs helps you reduce review times and improve operational efficiencies. You can even get valuable insights to escalate your strategy and workflow.
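As a hedged illustration of automated extraction, the sketch below pulls structured fields out of free-text documents with regular expressions. Real AI extraction uses trained models rather than patterns, and the invoice format here is invented for the demo.

```python
# Illustrative document extraction: pull an invoice number and a total
# amount out of unstructured text so they can be recorded automatically.
import re

def extract_fields(document):
    """Extract invoice number and total from a text document (or None)."""
    invoice = re.search(r"Invoice\s*#(\d+)", document)
    total = re.search(r"Total:\s*\$([\d.]+)", document)
    return {
        "invoice": invoice.group(1) if invoice else None,
        "total": float(total.group(1)) if total else None,
    }

doc = "Invoice #1042 for consulting services. Total: $350.00 due in 30 days."
print(extract_fields(doc))  # {'invoice': '1042', 'total': 350.0}
```

The point of the sketch is the shape of the task: documents go in, structured records come out, and humans review exceptions instead of retyping every field.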
5. Seamless Interaction
Chatbots and automated call management are some results of AI implementation in today’s workforce. The two technologies have helped businesses handle clients efficiently to provide excellent customer service.
Chatbots are available 24/7 to answer potential customers instantly. It’s one of the best innovations yet for taking over routine employee responsibilities. Therefore, companies can allocate employees to more vital tasks.
Additionally, automated call management also helps companies to reach potential clients properly. They can set up an automation system to define when and how to call target customers.
For example, if someone submits a newsletter form or goes to specific pages on your site, the system will collect their data and make further contacts, including sending emails or making a call.
Automated call management eliminates the need for employees to always be seated next to the phone, making and receiving calls from clients. It therefore supports their productivity by letting them focus on priority issues.
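The trigger-based follow-up described above can be sketched as a simple event-to-action mapping. The event names and follow-up rules below are invented for illustration; real marketing-automation platforms configure these rules through their own interfaces.

```python
# Hedged sketch of trigger-based follow-up: when a visitor performs a
# tracked action (submits a form, views certain pages), queue the
# corresponding automated contact instead of having a person dial out.

FOLLOW_UPS = {
    "newsletter_signup": "send_welcome_email",
    "pricing_page_view": "schedule_sales_call",
}

def plan_follow_ups(events):
    """Map each tracked visitor event to its automated follow-up action."""
    return [FOLLOW_UPS[e] for e in events if e in FOLLOW_UPS]

visitor_events = ["homepage_view", "pricing_page_view", "newsletter_signup"]
print(plan_follow_ups(visitor_events))
# ['schedule_sales_call', 'send_welcome_email']
```

Events with no configured rule simply pass through untouched, which mirrors how these systems only contact people who have signalled interest.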
6. Improve Manufacturing Processes
Manufacturing processes, if they continue to be handled manually, will be ineffective. The lack of automated systems will limit the number of products manufactured. As a result, companies fail to achieve customer satisfaction and improve production.
AI technology can help examine many things during manufacturing. It can verify component quantity, quality, temperatures, and processing time. Moreover, it can help detect faults, cycle times, and lead times, resulting in a quick and seamless workflow.
Employees act as an operator and monitor every manufacturing process. AI will work behind the curtain, sending signals for faults and recommending solutions in real-time.
7. Automate Hiring
One of the most challenging things every business faces is selecting new hires. Imagine if your recruitment team needs to read every CV to look for candidates. It can be endless work for them.
However, AI and machine learning technologies can lessen their burden to find relevant candidates quickly. Users can set up an automated system to oversee applicants’ resumes by selecting vital parameters such as education, skills, and work experience.
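The parameter-based screening described above can be illustrated with a simple rule-based filter. The candidate data and criteria are made up, and real AI screening uses learned ranking models that must be audited for bias; this sketch only shows the shape of the selection step.

```python
# Illustrative applicant shortlist on the parameters the text mentions:
# skills and years of experience. (Education could be added the same way.)

def shortlist(candidates, required_skills, min_years):
    """Return candidates who hold every required skill and enough experience."""
    return [
        c["name"]
        for c in candidates
        if required_skills.issubset(c["skills"]) and c["years"] >= min_years
    ]

applicants = [
    {"name": "Ana",  "skills": {"python", "sql"}, "years": 4},
    {"name": "Ben",  "skills": {"python"},        "years": 6},
    {"name": "Cleo", "skills": {"python", "sql"}, "years": 1},
]
print(shortlist(applicants, {"python", "sql"}, min_years=3))  # ['Ana']
```

Even at this scale the trade-off is visible: rigid parameters are fast but can screen out strong candidates, which is why automated hiring tools need human oversight.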
8. Social Commerce & Livestream Shopping
Throughout the day, today’s consumers seamlessly switch between devices and experiences. They have control over where they consume media, and they are watching an increasing amount of short-form video across multiple interaction points, particularly on social commerce platforms.
Savvy brands have developed video content strategies to reach consumers on social media with compelling video experiences, generating both owned assets and influencer amplification.
All of these experiences contain a subtle call to action, attempting to persuade viewers to buy something or participate in some other way.
However, the combination of consumer control and an infinite supply of content frequently sabotages this opportunity. The goal of shoppable videos is to change all of that.
Conclusion
In this day and age, people want everything to be instant. Businesses must take orders quickly and give responses immediately. It is therefore important for them to turn to training programs on online platforms, such as an Artificial Intelligence course, because without AI and machine learning technology it can be overwhelming for employees to handle their workloads.
| 2023-01-09T00:00:00 |
2023/01/09
|
https://aijourn.com/8-ways-artificial-intelligence-ai-can-help-you-improve-productivity/
|
How you can future-proof your career in the era of AI
|
How you can future-proof your career in the era of AI
|
https://nawmagazine.com
|
[
"Guest Contributor"
] |
The increased use of AI in white-collar workplaces means the changes will be different to previous workplace transformations. That's because, the thinking goes, ...
|
Ever since the industrial revolution, people have feared that technology would take away their jobs. While some jobs and tasks have indeed been replaced by machines, others have emerged. The success of ChatGPT and other generative artificial intelligence (AI) now has many people wondering about the future of work – and whether their jobs are safe, writes Elisabeth Kelan.
A recent poll found that more than half of people aged 18-24 are worried about AI and their careers. The fear that jobs might disappear or be replaced through automation is understandable. Recent research found that a quarter of tasks that humans currently do in the US and Europe could be automated in the coming years.
The increased use of AI in white-collar workplaces means the changes will be different to previous workplace transformations. That’s because, the thinking goes, middle-class jobs are now under threat.
The future of work is a popular topic of discussion, with countless books published each year on the topic. These books speak to the human need to understand how the future might be shaped.
I analysed 10 books published between 2017 and 2020 that focused on the future of work and technology. From this research, I found that thinking about AI in the workplace generally falls into two camps. One is expressed as concern about the future of work and security of current roles – I call this sentiment “automation anxiety”. The other is the hope that humans and machines collaborate and thereby increase productivity – I call this “augmentation aspiration”.
Anxiety and aspiration
I found a strong theme of concern in these books about technology enabling certain tasks to be automated, depriving many people of jobs. Specifically, the concern is that knowledge-based jobs – like those in accounting or law – that have long been regarded as the purview of well-educated professionals are now under threat of replacement by machines.
Automation undermines the idea that a good education will secure a good middle-class job. As economist Richard Baldwin points out in his 2019 book, The Globotics Upheaval, if you’ve invested a significant amount of money and time on a law degree – thinking it is a skill set that will keep you permanently employable – seeing AI complete tasks that a junior lawyer would normally be doing, at less cost, is going to be worrisome.
But there is another, more aspirational way to think about this. Some books stress the potential of humans collaborating with AI, to augment each other’s skills. This could mean working with robots in factories, but it could also mean using an AI chatbot when practising law. Rather than being replaced, lawyers would then be augmented by technology.
In reality, automation and augmentation co-exist. For your future career, both will be relevant.
Future-proofing yourself
As you think about your own career, the first step is to realise that some automation of tasks is most likely going to be something you’ll have to contend with in the future.
In light of this, learning is one of the most important ways you can future-proof your career. But should you spend money on further education if the return on investment is uncertain?
It is true that specific skills risk becoming outdated as technology develops. However, more than learning specific abilities, education is about learning how to learn – that is, how to update your skills throughout your career. Research shows that having the ability to do so is highly valuable at work.
This learning can take place in educational settings, by going back to university or participating in an executive education course, but it can also happen on the job. In any discussion about your career, such as with your manager, you might want to raise which additional training you could do.
Critical thinking and analytical skills are going to be particularly central for how humans and machines can augment one another. When working with a machine, you need to be able to question the output that is produced. Humans are probably always going to be central to this – you might have a chatbot that automates parts of legal work, but a human will still be needed to make sense of it all.
Finally, remember that when people previously feared jobs would disappear and tasks would be replaced by machines, this was not necessarily the case. For instance, the introduction of automated teller machines (ATMs) did not eliminate bank tellers, but it did change their tasks.
Above all, choose a job that you enjoy and keep learning – so that if you do need to change course in the future, you know how to.
Elisabeth Kelan, Professor of Leadership and Organisation, University of Essex
This article is part of Quarter Life, a series about issues affecting those of us in our twenties and thirties. From the challenges of beginning a career and taking care of our mental health, to the excitement of starting a family, adopting a pet or just making friends as an adult. The articles in this series explore the questions and bring answers as we navigate this turbulent period of life.
This article is republished from The Conversation under a Creative Commons license. Read the original article.
| 2023-09-01T00:00:00 |
2023/09/01
|
https://nawmagazine.com/19667/
|
Generative AI – a game-changer society needs to be ready ...
|
Generative AI – a game-changer society needs to be ready for
|
https://www.weforum.org
|
[] |
... AI can generate trillions of dollars in economic value. Over 150 start-ups ... This includes disruption of labour markets, legitimacy of scraped data ...
|
Despite the current downturn and layoffs in the tech sector, generative AI companies continue to receive huge interest from investors.
While generative AI has people excited about a new wave of creativity, there are concerns about the impact of these models on society.
Only when solid checks and balances are in place can there be a more thoughtful, beneficial expansion of generative AI technologies/products.
In the wake of newly released models such as Stable Diffusion and ChatGPT, generative AI has become a 'hot topic' for technologists, investors, policymakers and for society at large.
As the name suggests, generative AI produces or generates text, images, music, speech, code or video. Generative AI is not a new concept, and machine-learning techniques behind generative AI have evolved over the past decade. Deep learning and General Adversarial Network (GAN) approaches have typically been used, but the latest approach is transformers.
A Generative Pretrained Transformer (GPT) is a type of large language model (LLM) that uses deep learning to generate human-like text. They are called "generative" because they can generate new text based on the input they receive, "pretrained" because they are trained on a large corpus of text data before being fine-tuned for specific tasks, and "transformers" because they use a transformer-based neural network architecture to process input text and generate output text.
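The autoregressive loop at the heart of such models can be shown with a toy stand-in. Here a hand-written bigram table replaces the transformer network entirely (the words and table are invented), but the generate-one-token-and-append loop is the same in spirit.

```python
# Toy autoregressive generation: a real GPT predicts the next token with
# a trained transformer; a lookup table stands in for the model so the
# generation loop itself is visible.

bigram = {  # stand-in "model": most likely next word given the current word
    "the": "cat", "cat": "sat", "sat": "on", "on": "the",
}

def generate(prompt, steps):
    """Repeatedly append the predicted next word to the running context."""
    words = prompt.split()
    for _ in range(steps):
        nxt = bigram.get(words[-1])
        if nxt is None:  # the stand-in model has no continuation
            break
        words.append(nxt)
    return " ".join(words)

print(generate("the", steps=4))  # "the cat sat on the"
```

Swapping the lookup table for a neural network that conditions on the whole context, not just the last word, is essentially what turns this loop into an LLM.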
Despite the current market downturn and layoffs in the technology sector, generative AI companies continue to receive interest from investors. Stability AI and Jasper, for example, have recently raised $101 million and $125 million, respectively, and investors like Sequoia think the field of generative AI can generate trillions of dollars in economic value. Over 150 start-ups have emerged and are already operating in the space.
Generative AI: A timeline of images generated by artificial intelligence. (Image: Our World in Data)
Emergent capabilities of generative AI systems
Text-to-image programs such as Midjourney, DALL-E and Stable Diffusion have the potential to change how art, animation, gaming, movies and architecture, among others, are being rendered. Bill Cusick, creative director at Stability AI, believes that the software is “the foundation for the future of creativity”.
Based on a new era of human-machine based cooperation, optimists claim that generative AI will aid the creative process of artists and designers, as existing tasks will be augmented by generative AI systems, speeding up the ideation and, essentially, the creation phase.
Beyond the creative space, generative AI models hold transformative capabilities in complex sciences such as computer engineering. For example, Microsoft-owned GitHub Copilot, which is based on OpenAI’s Codex model, suggests code and assists developers in autocompleting their programming tasks. The system has been quoted as autocompleting up to 40% of developers’ code, considerably augmenting the workflow.
What are the risks?
While generative AI has people excited about a new wave of creativity, there are concerns about the impact of these models on society. Digital artist Greg Rutkowski fears that the internet will be flooded with artwork that is indistinguishable from his own, simply by telling the system to reproduce an artwork in his unique style. Professor of art Carson Grubaugh shares this concern and predicts that large parts of the creative workforce, including commercial artists working in entertainment, video games, advertising, and publishing, could lose their jobs because of generative AI models.
Besides profound effects on tasks and jobs, generative AI models and associated externalities have raised alarm in the AI governance community. One of the problems with large language models is their ability to generate false and misleading content. Meta’s Galactica – a model trained on 48 million science articles with claims to summarize academic papers, solve math problems, and write scientific code – was taken down after less than three days of being online as the scientific community found it was producing incorrect results after misconstruing scientific facts and knowledge.
This is even more alarming when seen in the context of automated troll bots, whose capabilities are advanced enough to render the Turing test – which measures a machine’s ability to exhibit intelligent behaviour similar to or indistinguishable from a human’s – obsolete. Such capabilities can be misused to generate fake news and disinformation across platforms and ecosystems.
Large models continue to be trained on massive datasets represented in books, articles and websites that may be biased in ways that can be hard to filter completely. Despite substantial reductions in harmful and untruthful outputs achieved by the use of reinforcement learning from human feedback (RLHF) in the case of ChatGPT, OpenAI acknowledges that their models can still generate toxic and biased outputs.
How is generative AI governed?
In the private sector, two approaches to the governance of generative AI models are currently emerging. In one camp, companies such as OpenAI are self-governing the space through limited release strategies, monitored use of models, and controlled access via API’s for their commercial products like DALL-E2. In the other camp, newer organizations, such as Stability AI, believe that these models should be openly released to democratize access and create the greatest possible impact on society and the economy. Stability AI open sourced the weights of its model – as a result, developers can essentially plug it into everything to create a host of novel visual effects with little or no controls placed on the diffusion process.
In the public sector, little or no regulation governs the rapidly evolving landscape of generative AI. In a recent letter to the White House, US Congresswoman Anna Eshoo highlighted "grave concerns about the recent unsafe release of the Stable Diffusion model by Stability AI”, including generation of violent and sexual imagery.
Other issues surround intellectual property and copyright. The datasets behind generative AI models are generally scraped from the internet without seeking consent from living artists or work still under copyright. “If these models have been trained on the styles of living artists without licensing that work, there are copyright implications,” according to Daniela Braga, who sits on the White House Task Force for AI Policy.
The problem with copyright is also visible in the field of autocompleted code. Microsoft's GitHub Copilot is involved in a class action lawsuit alleging the system has been built on “software piracy on an unprecedented scale.” Copilot has been trained on public code repositories scraped from the web, which in many cases, are published with licenses that require crediting creators when reusing their code.
What's the road ahead?
While generative AI is a game-changer on numerous areas and tasks, there is a strong need to govern the diffusion of these models, and their impact on society and the economy more carefully. The emerging discussion between centralized and controlled adoption with firm ethical boundaries on the one hand versus faster innovation and decentralized distribution on the other will be important for the generative AI community in the coming years.
This task is not reserved for private companies alone; it is equally important for civil society and policymakers to weigh in on issues such as the disruption of labour markets, the legitimacy of scraped data, licensing, copyright, and the potential for biased or otherwise harmful content and misinformation. Only when solid checks and balances are in place can a more thoughtful and beneficial expansion of generative AI technologies and products be achieved.
| 2023-01-09T00:00:00 |
https://www.weforum.org/stories/2023/01/davos23-generative-ai-a-game-changer-industries-and-society-code-developers/
|
[
{
"date": "2024/10/31",
"position": 9,
"query": "AI economic disruption"
},
{
"date": "2023/01/09",
"position": 28,
"query": "generative AI jobs"
}
] |
|
AI and the Big Five
|
AI and the Big Five
|
https://stratechery.com
|
[] |
To summarize, I'm not very impressed by people who try to prove wild economic ... disruption will, in the full recounting, not just be born of disruption ...
|
The story of 2022 was the emergence of AI, first with image generation models, including DALL-E, MidJourney, and the open source Stable Diffusion, and then ChatGPT, the first text-generation model to break through in a major way. It seems clear to me that this is a new epoch in technology.
To determine how that epoch might develop, though, it is useful to look back 26 years to one of the most famous strategy books of all time: Clayton Christensen’s The Innovator’s Dilemma, particularly this passage on the different kinds of innovations:
Most new technologies foster improved product performance. I call these sustaining technologies. Some sustaining technologies can be discontinuous or radical in character, while others are of an incremental nature. What all sustaining technologies have in common is that they improve the performance of established products, along the dimensions of performance that mainstream customers in major markets have historically valued. Most technological advances in a given industry are sustaining in character… Disruptive technologies bring to a market a very different value proposition than had been available previously. Generally, disruptive technologies underperform established products in mainstream markets. But they have other features that a few fringe (and generally new) customers value. Products based on disruptive technologies are typically cheaper, simpler, smaller, and, frequently, more convenient to use.
It seems easy to look backwards and determine if an innovation was sustaining or disruptive by looking at how incumbent companies fared after that innovation came to market: if the innovation was sustaining, then incumbent companies became stronger; if it was disruptive then presumably startups captured most of the value.
Consider previous tech epochs:
The PC was disruptive to nearly all of the existing incumbents; these relatively inexpensive and low-powered devices didn’t have nearly the capability or the profit margin of mini-computers, much less mainframes. That’s why IBM was happy to outsource both the original PC’s chip and OS to Intel and Microsoft, respectively, so that they could get a product out the door and satisfy their corporate customers; PCs got faster, though, and it was Intel and Microsoft that dominated as the market dwarfed everything that came before.
The Internet was almost entirely new market innovation, and thus defined by completely new companies that, to the extent they disrupted incumbents, did so in industries far removed from technology, particularly those involving information (i.e. the media). This was the era of Google, Facebook, online marketplaces and e-commerce, etc. All of these applications ran on PCs powered by Windows and Intel.
Cloud computing is arguably part of the Internet, but I think it deserves its own category. It was also extremely disruptive: commodity x86 architecture swept out dedicated server hardware, and an entire host of SaaS startups peeled off features from incumbents to build companies. What is notable is that the core infrastructure for cloud computing was primarily built by the winners of previous epochs: Amazon, Microsoft, and Google. Microsoft is particularly notable because the company also transitioned its traditional software business to a SaaS service, in part because the company had already transitioned said software business to a subscription model.
Mobile ended up being dominated by two incumbents: Apple and Google. That doesn’t mean it wasn’t disruptive, though: Apple’s new UI paradigm entailed not viewing the phone as a small PC, a la Microsoft; Google’s new business model paradigm entailed not viewing phones as a direct profit center for operating system sales, but rather as a moat for their advertising business.
What is notable about this history is that the supposition I stated above isn’t quite right; disruptive innovations do consistently come from new entrants in a market, but those new entrants aren’t necessarily startups: some of the biggest winners in previous tech epochs have been existing companies leveraging their current business to move into a new space. At the same time, the other tenets of Christensen’s theory hold: Microsoft struggled with mobile because it was disruptive, but SaaS was ultimately sustaining because its business model was already aligned.
Given the success of existing companies with new epochs, the most obvious place to start when thinking about the impact of AI is with the big five: Apple, Amazon, Facebook, Google, and Microsoft.
Apple
I already referenced one of the most famous books about tech strategy; one of the most famous essays was Joel Spolsky’s Strategy Letter V, particularly this famous line:
Smart companies try to commoditize their products’ complements.
Spolsky wrote this line in the context of explaining why large companies would invest in open source software:
Debugged code is NOT free, whether proprietary or open source. Even if you don’t pay cash dollars for it, it has opportunity cost, and it has time cost. There is a finite amount of volunteer programming talent available for open source work, and each open source project competes with each other open source project for the same limited programming resource, and only the sexiest projects really have more volunteer developers than they can use.

To summarize, I’m not very impressed by people who try to prove wild economic things about free-as-in-beer software, because they’re just getting divide-by-zero errors as far as I’m concerned. Open source is not exempt from the laws of gravity or economics. We saw this with Eazel, ArsDigita, The Company Formerly Known as VA Linux and a lot of other attempts.

But something is still going on which very few people in the open source world really understand: a lot of very large public companies, with responsibilities to maximize shareholder value, are investing a lot of money in supporting open source software, usually by paying large teams of programmers to work on it. And that’s what the principle of complements explains.

Once again: demand for a product increases when the price of its complements decreases. In general, a company’s strategic interest is going to be to get the price of their complements as low as possible. The lowest theoretically sustainable price would be the “commodity price” — the price that arises when you have a bunch of competitors offering indistinguishable goods. So, smart companies try to commoditize their products’ complements. If you can do this, demand for your product will increase and you will be able to charge more and make more.
Apple invests in open source technologies, most notably the Darwin kernel for its operating systems and the WebKit browser engine; the latter fits Spolsky’s prescription as ensuring that the web works well with Apple devices makes Apple’s devices more valuable.
Apple’s efforts in AI, meanwhile, have been largely proprietary: traditional machine learning models are used for things like recommendations and photo identification and voice recognition, but nothing that moves the needle for Apple’s business in a major way. Apple did, though, receive an incredible gift from the open source world: Stable Diffusion.
Stable Diffusion is remarkable not simply because it is open source, but also because the model is surprisingly small: when it was released it could already run on some consumer graphics cards; within a matter of weeks it had been optimized to the point where it could run on an iPhone.
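That “surprisingly small” claim is easy to sanity-check with a back-of-envelope sketch. The parameter counts below are the commonly cited figures for Stable Diffusion v1 and are assumptions here, not official Apple or Stability AI numbers:

```python
# Rough memory footprint of Stable Diffusion v1 weights at half precision.
# Parameter counts are the commonly cited figures and are assumed here:
# UNet ~860M, CLIP text encoder ~123M, VAE ~83M.
components = {"unet": 860e6, "text_encoder": 123e6, "vae": 83e6}
bytes_per_param = 2  # fp16
total_params = sum(components.values())
footprint_gb = total_params * bytes_per_param / 1e9
print(f"~{total_params / 1e9:.2f}B parameters, roughly {footprint_gb:.1f} GB at fp16")
```

Roughly a billion parameters and a couple of gigabytes of weights — small enough, after quantization and further optimization, to be plausible on a recent phone, which is what makes on-device deployment interesting.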
Apple, to its immense credit, has seized this opportunity, with this announcement from its machine learning group last month:
Today, we are excited to release optimizations to Core ML for Stable Diffusion in macOS 13.1 and iOS 16.2, along with code to get started with deploying to Apple Silicon devices… One of the key questions for Stable Diffusion in any app is where the model is running. There are a number of reasons why on-device deployment of Stable Diffusion in an app is preferable to a server-based approach. First, the privacy of the end user is protected because any data the user provided as input to the model stays on the user’s device. Second, after initial download, users don’t require an internet connection to use the model. Finally, locally deploying this model enables developers to reduce or eliminate their server-related costs… Optimizing Core ML for Stable Diffusion and simplifying model conversion makes it easier for developers to incorporate this technology in their apps in a privacy-preserving and economically feasible way, while getting the best performance on Apple Silicon. This release comprises a Python package for converting Stable Diffusion models from PyTorch to Core ML using diffusers and coremltools, as well as a Swift package to deploy the models.
It’s important to note that this announcement came in two parts: first, Apple optimized the Stable Diffusion model itself (which it could do because it was open source); second, Apple updated its operating system, which thanks to Apple’s integrated model, is already tuned to Apple’s own chips.
Moreover, it seems safe to assume that this is only the beginning: while Apple has been shipping its so-called “Neural Engine” on its own chips for years now, that AI-specific hardware is tuned to Apple’s own needs; it seems likely that future Apple chips, if not this year then probably next year, will be tuned for Stable Diffusion as well. Stable Diffusion itself, meanwhile, could be built into Apple’s operating systems, with easily accessible APIs for any app developer.
This raises the prospect of “good enough” image generation capabilities being effectively built-in to Apple’s devices, and thus accessible to any developer without the need to scale up a back-end infrastructure of the sort needed by the viral hit Lensa. And, by extension, the winners in this world end up looking a lot like the winners in the App Store era: Apple wins because its integration and chip advantage are put to use to deliver differentiated apps, while small independent app makers have the APIs and distribution channel to build new businesses.
The losers, on the other hand, would be centralized image generation services like Dall-E or MidJourney, and the cloud providers that undergird them (and, to date, undergird the aforementioned Stable Diffusion apps like Lensa). Stable Diffusion on Apple devices won’t take over the entire market, to be sure — Dall-E and MidJourney are both “better” than Stable Diffusion, at least in my estimation, and there is of course a big world outside of Apple devices, but built-in local capabilities will affect the ultimate addressable market for both centralized services and centralized compute.
Amazon
Amazon, like Apple, uses machine learning across its applications; the direct consumer use cases for things like image and text generation, though, seem less obvious. What is already important is AWS, which sells access to GPUs in the cloud.
Some of this is used for training, including Stable Diffusion, which according to the founder and CEO of Stability AI Emad Mostaque used 256 Nvidia A100s for 150,000 hours for a market-rate cost of $600,000 (which is surprisingly low!). The larger use case, though, is inference, i.e. the actual application of the model to produce images (or text, in the case of ChatGPT). Every time you generate an image in MidJourney, or an avatar in Lensa, inference is being run on a GPU in the cloud.
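Those figures imply a rental rate and a wall-clock time worth spelling out. A quick sketch, reading the quoted “150,000 hours” as total A100-hours — the interpretation under which the $600,000 figure works out:

```python
# Implied economics of the Stable Diffusion training run quoted above.
gpus = 256
gpu_hours = 150_000      # total A100-hours, not hours per GPU
total_cost = 600_000     # USD, "market-rate" per Emad Mostaque
rate_per_gpu_hour = total_cost / gpu_hours
wall_clock_days = gpu_hours / gpus / 24
print(f"${rate_per_gpu_hour:.2f} per A100-hour, ~{wall_clock_days:.0f} days of wall-clock time")
```

About $4 per A100-hour and under a month of wall-clock time — which is why the author calls the cost surprisingly low.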
Amazon’s prospects in this space will depend on a number of factors. First, and most obvious, is just how useful these products end up being in the real world. Beyond that, though, Apple’s progress in building local generation techniques could have a significant impact. Amazon, though, is a chip maker in its own right: while most of its efforts to date have been focused on its Graviton CPUs, the company could build dedicated hardware of its own for models like Stable Diffusion and compete on price. Still, AWS is hedging its bets: the cloud service is a major partner when it comes to Nvidia’s offerings as well.
The big short-term question for Amazon will be gauging demand: not having enough GPUs means leaving money on the table; buying too many that sit idle, though, would be a major expense for a company trying to keep costs in check. At the same time, it wouldn’t be the worst error to make: one of the challenges with AI is the fact that inference costs money; in other words, making something with AI has marginal costs.
This issue of marginal costs is, I suspect, an under-appreciated challenge in terms of developing compelling AI products. While cloud services have always had costs, the discrete nature of AI generation may make it challenging to fund the sort of iteration necessary to achieve product-market fit; I don’t think it’s an accident that ChatGPT, the biggest breakout product to-date, was both free to end users and provided by a company in OpenAI that both built its own model and has a sweetheart deal from Microsoft for compute capacity. If AWS had to sell GPUs for cheap, that could spur more use in the long run.
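To make the marginal-cost point concrete, here is a toy unit-economics sketch; the GPU rental rate and the throughput figure are illustrative assumptions, not numbers from any provider:

```python
# Marginal cost of serving image generation (all inputs are assumptions).
gpu_cost_per_hour = 4.0        # rented cloud GPU, $/hour
images_per_second = 2.0        # assumed sustained throughput on that GPU
images_per_hour = images_per_second * 3600
cost_per_image = gpu_cost_per_hour / images_per_hour
cost_per_million_images = cost_per_image * 1_000_000
print(f"${cost_per_image:.5f} per image, ${cost_per_million_images:,.0f} per million")
```

Fractions of a cent per image — but unlike serving a static web page, the cost never rounds to zero, so a free viral product burns real money at scale.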
That noted, these costs should come down over time: models will become more efficient even as chips become faster and more efficient in their own right, and there should be returns to scale for cloud services once there are sufficient products in the market maximizing utilization of their investments. Still, it is an open question as to how much full stack integration will make a difference, in addition to the aforementioned possibility of running inference locally.
Meta
I already detailed in Meta Myths why I think that AI is a massive opportunity for Meta and worth the huge capital expenditures the company is making:
Meta has huge data centers, but those data centers are primarily about CPU compute, which is what is needed to power Meta’s services. CPU compute is also what was necessary to drive Meta’s deterministic ad model, and the algorithms it used to recommend content from your network.

The long-term solution to ATT, though, is to build probabilistic models that not only figure out who should be targeted (which, to be fair, Meta was already using machine learning for), but also understanding which ads converted and which didn’t. These probabilistic models will be built by massive fleets of GPUs, which, in the case of Nvidia’s A100 cards, cost in the five figures; that may have been too pricey in a world where deterministic ads worked better anyways, but Meta isn’t in that world any longer, and it would be foolish to not invest in better targeting and measurement.

Moreover, the same approach will be essential to Reels’ continued growth: it is massively more difficult to recommend content from across the entire network than only from your friends and family, particularly because Meta plans to recommend not just video but also media of all types, and intersperse it with content you care about. Here too AI models will be the key, and the equipment to build those models costs a lot of money.

In the long run, though, this investment should pay off. First, there are the benefits to better targeting and better recommendations I just described, which should restart revenue growth. Second, once these AI data centers are built out the cost to maintain and upgrade them should be significantly less than the initial cost of building them the first time. Third, this massive investment is one no other company can make, except for Google (and, not coincidentally, Google’s capital expenditures are set to rise as well).
That last point is perhaps the most important: ATT hurt Meta more than any other company, because it already had by far the largest and most finely-tuned ad business, but in the long run it should deepen Meta’s moat. This level of investment simply isn’t viable for a company like Snap or Twitter or any of the other also-rans in digital advertising (even beyond the fact that Snap relies on cloud providers instead of its own data centers); when you combine the fact that Meta’s ad targeting will likely start to pull away from the field (outside of Google), with the massive increase in inventory that comes from Reels (which reduces prices), it will be a wonder why any advertiser would bother going anywhere else.
An important factor in making Meta’s AI work is not simply building the base model but also tuning it to individual users on an ongoing basis; that is what will take such a large amount of capacity and it will be essential for Meta to figure out how to do this customization cost-effectively. Here, though, it helps that Meta’s offering will probably be increasingly integrated: while the company may have committed to Qualcomm for chips for its VR headsets, Meta continues to develop its own server chips; the company has also released tools to abstract away Nvidia and AMD chips for its workloads, but it seems likely the company is working on its own AI chips as well.
What will be interesting to see is how things like image and text generation impact Meta in the long run: Sam Lessin has posited that the end-game for algorithmic timelines is AI content; I’ve made the same argument when it comes to the Metaverse. In other words, while Meta is investing in AI to give personalized recommendations, that idea, combined with 2022’s breakthroughs, is personalized content, delivered through Meta’s channels.
For now it will be interesting to see how Meta’s advertising tools develop: the entire process of both generating and A/B testing copy and images can be done by AI, and no company is better than Meta at making these sort of capabilities available at scale. Keep in mind that Meta’s advertising is primarily about the top of the funnel: the goal is to catch consumers’ eyes for a product or service or app they did not know previously existed; this means that there will be a lot of misses — the vast majority of ads do not convert — but that also means there is a lot of latitude for experimentation and iteration. This seems very well suited to AI: yes, generation may have marginal costs, but those marginal costs are drastically lower than a human.
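The generate-and-test loop described above is essentially a multi-armed bandit over ad creatives. A toy sketch using Thompson sampling — the variant names and conversion rates are invented purely for illustration:

```python
import random

# Toy Thompson-sampling allocator over (hypothetical) AI-generated ad variants.
random.seed(0)
true_ctr = {"variant_a": 0.02, "variant_b": 0.05, "variant_c": 0.01}
wins = {v: 1 for v in true_ctr}     # Beta(1, 1) priors
losses = {v: 1 for v in true_ctr}

impressions = 5000
for _ in range(impressions):
    # Draw a plausible CTR for each variant, show the one with the best draw.
    draws = {v: random.betavariate(wins[v], losses[v]) for v in true_ctr}
    shown = max(draws, key=draws.get)
    if random.random() < true_ctr[shown]:
        wins[shown] += 1
    else:
        losses[shown] += 1

# Share of impressions each variant received (prior pseudo-counts removed).
shares = {v: (wins[v] + losses[v] - 2) / impressions for v in true_ctr}
print({v: round(s, 3) for v, s in shares.items()})
```

Traffic concentrates on the best-performing creative while the losers still get occasional exploration — the same dynamic, at vastly larger scale, that makes AI-generated ad variants cheap to test.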
Google
The Innovator’s Dilemma was published in 1997; that was the year that Eastman Kodak’s stock reached its highest price of $94.25, and for seemingly good reason: Kodak, in terms of technology, was perfectly placed. Not only did the company dominate the current technology of film, it had also invented the next wave: the digital camera.
The problem came down to business model: Kodak made a lot of money with very good margins providing silver halide film; digital cameras, on the other hand, were digital, which means they didn’t need film at all. Kodak’s management was thus very incentivized to convince themselves that digital cameras would only ever be for amateurs, and only when they became drastically cheaper, which would certainly take a very long time.
In fact, Kodak’s management was right: it took over 25 years from the time of the digital camera’s invention for digital camera sales to surpass film camera sales; it took longer still for digital cameras to be used in professional applications. Kodak made a lot of money in the meantime, and paid out billions of dollars in dividends. And, while the company went bankrupt in 2012, that was because consumers had access to better products: first digital cameras, and eventually, phones with cameras built in.
The idea that this is a happy ending is, to be sure, a contrarian view: most view Kodak as a failure, because we expect companies to live forever. In this view Kodak is a cautionary tale of how an innovative company can allow its business model to lead it to its eventual doom, even if said doom was the result of consumers getting something better.
And thus we arrive at Google and AI. Google invented the transformer, the key technology undergirding the latest AI models. Google is rumored to have a conversation chat product that is far superior to ChatGPT. Google claims that its image generation capabilities are better than Dall-E or anyone else on the market. And yet, these claims are just that: claims, because there aren’t any actual products on the market.
This isn’t a surprise: Google has long been a leader in using machine learning to make its search and other consumer-facing products better (and has offered that technology as a service through Google Cloud). Search, though, has always depended on humans as the ultimate arbiter: Google will provide links, but it is the user that decides which one is the correct one by clicking on it. This extended to ads: Google’s offering was revolutionary because instead of charging advertisers for impressions — the value of which was very difficult to ascertain, particularly 20 years ago — it charged for clicks; the very people the advertisers were trying to reach would decide whether their ads were good enough.
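The two pricing models are linked by a standard piece of ad arithmetic: revenue per thousand impressions (eCPM) is cost-per-click times click-through rate times 1,000. The numbers below are purely illustrative:

```python
# Converting click-based pricing into impression-equivalent revenue.
cpc = 2.00      # advertiser pays $2 per click (illustrative)
ctr = 0.03      # 3% of impressions are clicked (illustrative)
effective_cpm = cpc * ctr * 1000
print(f"eCPM = ${effective_cpm:.2f}")
```

Charging per click shifts risk onto ad quality: a more relevant ad with a higher CTR monetizes the same thousand impressions better, so the auction rewards exactly what users reward.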
I wrote about the conundrum this presented for Google’s business in a world of AI seven years ago in Google and the Limits of Strategy:
In yesterday’s keynote, Google CEO Sundar Pichai, after a recounting of tech history that emphasized the PC-Web-Mobile epochs I described in late 2014, declared that we are moving from a mobile-first world to an AI-first one; that was the context for the introduction of the Google Assistant.

It was a year prior to the aforementioned iOS 6 that Apple first introduced the idea of an assistant in the guise of Siri; for the first time you could (theoretically) compute by voice. It didn’t work very well at first (arguably it still doesn’t), but the implications for computing generally and Google specifically were profound: voice interaction both expanded where computing could be done, from situations in which you could devote your eyes and hands to your device to effectively everywhere, even as it constrained what you could do.

An assistant has to be far more proactive than, for example, a search results page; it’s not enough to present possible answers: rather, an assistant needs to give the right answer. This is a welcome shift for Google the technology; from the beginning the search engine has included an “I’m Feeling Lucky” button, so confident was Google founder Larry Page that the search engine could deliver you the exact result you wanted, and while yesterday’s Google Assistant demos were canned, the results, particularly when it came to contextual awareness, were far more impressive than the other assistants on the market. More broadly, few dispute that Google is a clear leader when it comes to the artificial intelligence and machine learning that underlie their assistant.

A business, though, is about more than technology, and Google has two significant shortcomings when it comes to assistants in particular.
First, as I explained after this year’s Google I/O, the company has a go-to-market gap: assistants are only useful if they are available, which in the case of hundreds of millions of iOS users means downloading and using a separate app (or building the sort of experience that, like Facebook, users will willingly spend extensive amounts of time in). Secondly, though, Google has a business-model problem: the “I’m Feeling Lucky Button” guaranteed that the search in question would not make Google any money. After all, if a user doesn’t have to choose from search results, said user also doesn’t have the opportunity to click an ad, thus choosing the winner of the competition Google created between its advertisers for user attention. Google Assistant has the exact same problem: where do the ads go?
That Article assumed that Google Assistant was going to be used to differentiate Google phones as an exclusive offering; that ended up being wrong, but the underlying analysis remains valid. Over the past seven years Google’s primary business model innovation has been to cram ever more ads into Search, a particularly effective tactic on mobile. And, to be fair, the sort of searches where Google makes the most money — travel, insurance, etc. — may not be well-suited for chat interfaces anyways.
That, though, ought only increase the concern for Google’s management that generative AI may, in the specific context of search, represent a disruptive innovation instead of a sustaining one. Disruptive innovation is, at least in the beginning, not as good as what already exists; that’s why it is easily dismissed by managers who can avoid thinking about the business model challenges by (correctly!) telling themselves that their current product is better. The problem, of course, is that the disruptive product gets better, even as the incumbent’s product becomes ever more bloated and hard to use — and that certainly sounds a lot like Google Search’s current trajectory.
I’m not calling the top for Google; I did that previously and was hilariously wrong. Being wrong, though, is more often than not a matter of timing: yes, Google has its cloud and YouTube’s dominance only seems to be increasing, but the outline of Search’s peak seems clear even if it throws off cash and profits for years.
Microsoft
Microsoft, meanwhile, seems the best placed of all. Like AWS, it has a cloud service that sells GPUs; it is also the exclusive cloud provider for OpenAI. Yes, that is incredibly expensive, but given that OpenAI appears to have the inside track to being the AI epoch’s addition to this list of top tech companies, that means that Microsoft is investing in the infrastructure of that epoch.
Bing, meanwhile, is like the Mac on the eve of the iPhone: yes it contributes a fair bit of revenue, but a fraction of the dominant player, and a relatively immaterial amount in the context of Microsoft as a whole. If incorporating ChatGPT-like results into Bing risks the business model for the opportunity to gain massive market share, that is a bet well worth making.
The latest report from The Information, meanwhile, is that GPT is eventually coming to Microsoft’s productivity apps. The trick will be to imitate the success of AI-coding tool GitHub Copilot (which is built on GPT), which figured out how to be a help instead of a nuisance (i.e. don’t be Clippy!).
What is important is that adding on new functionality — perhaps for a fee — fits perfectly with Microsoft’s subscription business model. It is notable that the company once thought of as a poster child for victims of disruption will, in the full recounting, not just be born of disruption, but be well-placed to reach greater heights because of it.
There is so much more to write about AI’s potential impact, but this Article is already plenty long. OpenAI is obviously the most interesting from a new company perspective: it is possible that OpenAI will become the platform on which all other AI companies are built, which would ultimately mean the economic value of AI outside of OpenAI may be fairly modest; this is also the bull case for Google, as they would be the most well-placed to be the Microsoft Azure to OpenAI’s AWS.
There is another possibility where open source models proliferate in the text generation space in addition to image generation. In this world AI becomes a commodity: this is probably the most impactful outcome for the world but, paradoxically, the most muted in terms of economic impact for individual companies (I suspect the biggest opportunities will be in industries where accuracy is essential: incumbents will therefore underinvest in AI, a la Kodak under-investing in digital, forgetting that technology gets better).
Indeed, the biggest winners may be Nvidia and TSMC. Nvidia’s investment in the CUDA ecosystem means the company doesn’t simply have the best AI chips, but the best AI ecosystem, and the company is investing in scaling that ecosystem up. That, though, has and will continue to spur competition, particularly in terms of internal chip efforts like Google’s TPU; everyone, though, will make their chips at TSMC, at least for the foreseeable future.
The biggest impact of all, though, is probably off our radar completely. Just before the break Nat Friedman told me in a Stratechery Interview about Riffusion, which uses Stable Diffusion to generate music from text via visual sonograms, which makes me wonder what else is possible when images are truly a commodity. Right now text is the universal interface, because text has been the foundation of information transfer since the invention of writing; humans, though, are visual creatures, and the availability of AI for both the creation and interpretation of images could fundamentally transform what it means to convey information in ways that are impossible to predict.
For now, our predictions must be much more time-constrained, and modest. This may be the beginning of the AI epoch, but even in tech, epochs take a decade or longer to transform everything around them.
I wrote a follow-up to this Article in this Daily Update.
| 2023-01-09T00:00:00 |
2023/01/09
|
https://stratechery.com/2023/ai-and-the-big-five/
|
[
{
"date": "2023/01/09",
"position": 60,
"query": "AI economic disruption"
}
] |
2023 India and AI, Looking Ahead
|
2023 India and AI, Looking Ahead
|
https://indiaai.gov.in
|
[] |
The last decade has seen a proliferation of AI solutions that have the potential to transcend traditional development challenges and bring about socio-economic ...
|
“Amazing new applications of NLP would include conversational AI that could become tutors for children, companions for the elderly, customer service for corporations, and help-line agents for people.”
― Kai-Fu Lee, AI 2041: Ten Visions for Our Future
2022 has indeed been the year when generative and conversational AI became mainstream. OpenAI offerings like ChatGPT and DALL-E 2 became all the rage and enabled more people to experience the power of Artificial Intelligence. Our efforts to integrate Bhashini (National Language Translation Mission) with some AI tools led to innovations in experiential AI.
AI adoption in India is at an inflexion point. We have been ranked 1st for ‘AI Adoption by Organisations’ and 7th for ‘Number of newly funded AI companies’ (2013-21) by the Stanford AI Index 2022. The same Index places India 3rd for ‘No. of AI Journal Publications’ and ‘No. of AI Conferences’. Further, India has been ranked 1st in all 5 Pillars of Peak AI’s Decision Intelligence Maturity Scale, which assesses a business’s commercial AI readiness.
Given this, we need to pause and assess: How do we ensure that the potential of AI is harnessed not just for entertainment, art, etc., but also for large-scale social transformation and inclusive development?
Democratising the benefits of AI
The Government of India has taken concrete steps to encourage the adoption of AI responsibly and build public trust in using this technology, placing the idea of ‘AI for All’ at the core of our National Strategy for AI. Our approach to AI is deeply rooted in the ethic of Sabka Saath, Sabka Vikas and Sabka Prayas.
Using the power of AI, we are creating applications that unlock value for citizens and improve public service delivery. For example, MyGov Helpdesk – an AI-enabled chatbot on WhatsApp, empowered people with Covid related information and vaccination and now provides access to Digilocker documents. Bhashini uses the power of Natural Language Processing (NLP) to make the internet and digital governance more accessible.
Umang, the government app for citizens to access all its services, has launched its voice-based chatbot. The bot, built using conversational AI technologies, allows users to ask questions in Hindi and English, and through voice or text, about various government services. These initiatives are empowering the Digital Nagrik and transforming government-citizen engagement.
2023 – Year of INDIAai
While India has made great strides in strengthening its position in the global AI race, the INDIAai story has only just begun. With India’s comprehensive program on AI, ‘INDIAai’, slated to launch this year, 2023 is set to be the year of AI for India.
INDIAai will further strengthen the foundational building blocks for catalysing AI Innovation in India through its four pillars.
The first pillar of INDIAai is the Data Management Office. Recognising data as the fundamental building block for AI Innovation, INDIAai will set up the Data Management Office to improve data quality, use, and access. This would modernise the government’s data collection, processing and access practices to ensure the AI innovation ecosystem reaches its full potential.
The second pillar is the National Centre on AI. The last decade has seen a proliferation of AI solutions that have the potential to transcend traditional development challenges and bring about socio-economic transformation. However, most AI solutions cannot break the Proof of Concept (POC) barrier and progress to large-scale deployment. The NCAI will be a sector-agnostic entity that identifies AI solutions for social good and ensures national-level deployment.
The third pillar focuses on Skilling for AI. The advent of AI and its accelerated adoption is disrupting the nature of jobs & skillsets required. This disruption disproportionately impacts those at the bottom of the skills qualification pyramid. INDIAai aims to transform ITIs and Polytechnics to ensure India’s technical education infrastructure is positioned to mitigate this disruption and create the next wave of an AI-ready workforce.
The final pillar focuses on Responsible AI. As the adoption of AI in developing digital solutions accelerates, so does the potential for AI to exhibit bias and discrimination against individuals and social groups, which could undo the merits of leveraging this technology for good. Consequently, there is a need to proactively mitigate risks that threaten the safety of, or discriminate against, individuals and groups. INDIAai’s Responsible AI component aims to fill these gaps and implement the principles under the Responsible AI for ALL initiative.
Shaping Global Policy Discourse
Emerging economies house 85% of the world’s population yet are often underrepresented in the global discourse around emerging technologies. This often leads to the development of international standards, principles, guidelines, and governance models that are not aligned with the needs of the developing world, overlooking the issues that primarily impact the Global South.
Housing one-sixth of humanity, and with its immense diversity of languages, religions, customs and beliefs, India is perfectly positioned to play a vital role in global leadership and make global policy discourse on AI and emerging technologies more inclusive and representative.
As one of the largest Global South economies leading the AI race, India has been entrusted with the responsibility of council chair for the Global Partnership on Artificial Intelligence (GPAI) for a 3-year tenure (Incoming chair in 2022-23, lead chair in 2023-24 and outgoing chair in 2024-25). Our demonstrated commitment to catalysing AI innovation in alignment with the principles of responsible AI played an instrumental role in helping India win this position through a two-thirds majority of first-preference votes.
As the Council Chair of GPAI, India will guide the partnership in a holistic and inclusive manner. Being the only South Asian country and the only middle-income country among GPAI’s 29 member countries, India is uniquely positioned to add fresh perspectives and provide holistic guidance to the GPAI council and the expert working groups. This would ensure GPAI projects and outputs are relevant for both the developed and the developing world.
India may also use this opportunity to take its National AI Institutes & AI CoEs global. Currently, GPAI has only two expert support centres/CoEs, one in Montreal (CEIMIA) and one in Paris (INRIA). India’s National Centre for AI aims to deploy AI solutions in critical sectors such as Health, Agriculture, and Education to catalyse large-scale social transformation. The commonality of issues concerning the above sectors across developing and developed countries provides the ideal opportunity for India to be the AI Garage for the world.
India’s presidency of GPAI comes at a time when India also has the G20 Presidency. This presents an opportunity to showcase India’s AI Prowess, Solutions, & Governance Models to the world. Through its unique approach to governance of Data and Emerging technologies, India has demonstrated how these can be harnessed to develop public digital platforms to build citizen-centric solutions at a population scale. These governance models that are better suited to the Global South, along with learnings from India’s AI for All approach, may be showcased on the global stage to promote their widespread adoption.
| 2023-01-09T00:00:00 |
https://indiaai.gov.in/article/2023-india-and-ai-looking-ahead
|
[
{
"date": "2023/01/09",
"position": 82,
"query": "AI economic disruption"
}
] |
|
Powering human impact with technology
|
Powering human impact with technology
|
https://www2.deloitte.com
|
[
"Principal",
"Deloitte Consulting Llp",
"Human Capital Services Leader",
"Managing Director",
"Senior Manager",
"Human Capital Cloud Leader",
"Manager",
"Emea Human Capital Sustainability Leader",
"Tara Mahoutchian",
"Nate Paynter"
] |
... disruption. [email protected]. John Forsythe ... Intelligent devices powered by AI, in particular, are providing an ever ...
|
The new fundamentals
Enable technology to work on the worker (and the team). The traditional view of technology as a substitute or supplement for human labor is too narrow. Moving forward, you need to harness technologies that help your people and teams become the best possible versions of themselves. This means nudging them to learn new behaviors, correct old behaviors, and sharpen skills. For example, successful and error-free surgeries in the operating room (OR) require finesse, but determining the exact amount of pressure to apply on the instrument is challenging for surgeons. Technology provides surgeons with smart scalpels and forceps that allow them to gauge and adjust pressure in real time, subsequently improving precision and patient outcomes.5
Use interventions and nudges to make humans better. Technology can also aid humans in improving on things that are “fundamentally human.” Given the traditional view of technology as a substitute or supplement for humans, it’s ironic to think of technology being used to make humans more human. Yet that’s exactly what we’re talking about here. Technology can help us get better at what we already do best—things like driving well-being, practicing emotional intelligence, and fostering creativity and teaming, which are things technology itself can’t do.
Helping humans become better versions of themselves is a worthwhile endeavor on its own. However, from a business perspective, it has the valuable fringe benefit of making people better at their jobs, thereby boosting engagement and performance. Building on the surgery example from the previous fundamental, technologies are also monitoring care team members’ time in the OR and cross-referencing that time with error data for the relevant type of surgery, to deliver alerts about fatigue risk. Not only does this improve outcomes for the patient, it also improves well-being for the surgical team.
Scale insights for greater impact. Beyond the individual and team impact, this technology–human team collaboration can also drive impact through insights at scale. All this technology, whether it’s used for nudging, collaboration, training, or another purpose, creates data “exhaust.”6 This data is a powerful tool all on its own. Following the surgical example, technology aggregates the data about finesse adjustments, time in-surgery, and errors, to draw insights across an entire hospital or health system to inform changes to workforce practices like shift length, scheduling, or equipment investments. This type of information could then be used to elevate performance and outcomes across workers, teams, the organization, and the ecosystem.
This imagined future isn’t just possible; in many cases, it’s already here. And its potential impact is even greater when applied not just to individuals, but also to teams (and to networks of connected teams pursuing adjacent goals). The result is improved performance, learning and development, communication, and collaboration. Executives who responded to the Deloitte 2023 Global Human Capital Trends survey believe in the benefits of enabling technology and teams to collaborate to drive outcomes, with one in three reporting an increase in financial performance as a result of their approach to technology and team collaboration.
| 2023-01-09T00:00:00 |
https://www2.deloitte.com/us/en/insights/focus/human-capital-trends/2023/human-capital-and-productivity.html
|
[
{
"date": "2023/01/09",
"position": 87,
"query": "AI economic disruption"
}
] |
|
Can democracies cooperate with China on AI research?
|
Can democracies cooperate with China on AI research?
|
https://www.brookings.edu
|
[
"Cameron F. Kerry",
"Joshua P. Meltzer",
"Andrea Renda",
"Alex Engler",
"Rosanna Fanni",
"Brooke Tanner",
"Nicol Turner Lee",
"Tonantzin Carmona",
"John Villasenor"
] |
China has been a subject of discussions among the government officials and experts participating in the Forum for Cooperation on AI (FCAI) over the past two ...
|
China looms large in the global landscape of artificial intelligence (AI) research, development, and policymaking. Its talent, growing technological skill and innovation, and national investment in science and technology have made it a leader in AI.
Over more than two decades, China has become deeply enmeshed in the international network of AI research and development (R&D): co-authoring papers with peers abroad, hosting American corporate AI labs, and helping expand the frontiers of global AI research. During most of that period, these links and their implications went largely unexamined in the policy world. Instead, the nature of these connections was dictated by the researchers, universities, and corporations who were forging them.
But in the past five years, these ties between China and global networks for R&D have come under increasing scrutiny by governments as well as universities, companies, and civil society. Four factors worked together to drive this reassessment: (1) the growing capabilities of AI itself and its impacts on both economic competitiveness and national security; (2) China’s unethical use of AI, including its deployment of AI tools for mass surveillance of its citizens, most notably the Uyghur ethnic group in Xinjiang but increasingly more widespread; (3) the rise in Chinese capabilities and ambitions in AI, making it a genuine competitor with the U.S. in the field; and (4) the policies by which the Chinese state bolstered those capabilities, including state directed investments and illicit knowledge transfers from abroad.
Taken together, these concerns led to intense scrutiny and new questions about these long-standing ties. Is cooperation helping China overtake democratic nations in AI? To what extent are technologists and companies in democratic nations contributing to China’s deployment of repressive AI tools?
This working paper considers whether and to what extent international collaboration with China on AI can endure. China has been a subject of discussions among the government officials and experts participating in the Forum for Cooperation on AI (FCAI) over the past two years. The 2021 FCAI progress report identified the implications of China’s development and use of AI for international cooperation.[1] The report touched on China in connection with several of the recommendations regarding regulatory alignment, standards development, trade agreements, and R&D projects but also focused on Chinese policies and applications of AI that present a range of challenges in the context of that nation’s broader geopolitical, economic, and authoritarian policies. A roundtable discussion on December 8, 2021 presented these issues to FCAI participants more fully and elicited their views.
This paper expands and distills this work with a focus on the scope, benefits, and prospective limits of China’s involvement in international AI R&D networks. In Part I, it presents the history of China’s AI development and extraordinarily successful engagement with international R&D and explains how this history has helped China become a global leader in the field. Part II shows how China has become embedded in international AI R&D networks, with China and the United States becoming each other’s largest collaborator and China also a major collaborator with each of the other six countries participating in FCAI. This collaboration takes place through multiple pathways: enrollment at universities, conferences, joint publications, and work in research labs that all operate in various ways to develop, disseminate, and deploy AI.
Part III then provides an overview of the economic, ethical, and strategic issues that call into question whether such levels of collaboration on AI can continue, as well as the challenges and disadvantages of disconnecting the channels of collaboration. The analysis then looks at how engagement with China on AI R&D might evolve. It does so primarily through a U.S.-focused lens because the U.S., as by far China’s largest competitor and collaborator in AI, provides an umbrella and a template for countries and FCAI participants that also collaborate with China on AI R&D and face many of the same issues. Moreover, measures to respond to the challenges China presents are more likely to be effective in coordination than in isolation. Recent U.S. export controls on semiconductors and the technologies used to manufacture them have laid bare the critical role of countries such as Japan and Korea. For now, the U.S. government is able to force foreign compliance through administrative measures, such as the foreign direct product rule, but these mechanisms may be made moot if foreign manufacturers engineer U.S. technology out of their supply chain. This paper deals with cooperative research rather than hardware supply chains, but similar dynamics exist across these domains. Accordingly, this paper is not just about collaboration with China but also about collaboration in relation to China.
Measures to respond to the challenges China presents are more likely to be effective in coordination than in isolation.
The U.S., other governments participating in FCAI, and their partners are not the only actors in this drama. What AI R&D with China looks like going forward will also be determined by what China does. China’s intensifying push for technological self-reliance has accelerated China’s disengagement from the international technology ecosystem in certain respects, while so far keeping it deeply enmeshed in other international research networks. The future trajectory of this engagement will depend heavily on actions taken by the Chinese government and the Chinese Communist Party.
In light of the issues presented by these changes, the paper proposes rebalancing AI R&D with Chinese researchers and institutions through a risk-based approach. Going forward, such collaboration will require a clear assessment of the costs and benefits, aiming to maximize the benefits of an open research environment and strong international links with the risks presented by AI R&D with China. Adopting an appropriately risk-based approach often will not counsel complete disengagement with China on AI R&D and instead require a rebalancing that takes into account the various vectors for knowledge transfer. Crucially, governments need to work collaboratively with each other and with companies, universities, and research labs to inform the assessment of the risks and understand the benefits of AI R&D with China. A failure to build these partnerships into the risk-assessment process could lead to bad outcomes that mismeasure risks and benefits, leaving the U.S. worse off.
Download the full report.
| 2023-01-09T00:00:00 |
https://www.brookings.edu/articles/can-democracies-cooperate-with-china-on-ai-research/
|
[
{
"date": "2023/01/09",
"position": 75,
"query": "government AI workforce policy"
}
] |
|
NSF announces new AI institute
|
NSF announces new AI institute
|
https://www.nsf.gov
|
[] |
An official website of the United States government. Here's how you know ... Required Policy Links. Vulnerability disclosure · Inspector General · Privacy ...
|
The U.S. National Science Foundation announced a new artificial intelligence institute to focus on the speech language pathology needs of children. The need for speech and language services has been exacerbated during the COVID-19 pandemic due to a widening gap in services available to children. The AI Institute for Exceptional Education aims to close this gap by developing advanced AI technologies to scale availability of speech language pathology services so every child in need has access.
The institute is supported by a $20 million grant from NSF and the Department of Education's Institute of Education Sciences to the University at Buffalo.
"The AI Institute for Exceptional Education follows 18 already established NSF-led AI Institutes, an ecosystem of AI research and education in pursuit of transformational advances in AI research and development of AI-powered innovation," said NSF Program Director James Donlon. "We are happy to welcome this new team to the AI Institutes program."
"We are eager to see how this team advances AI research to develop better solutions for children with specific speech-language needs, as well as their families and the US schools who serve them. This project is a great example of how we can harness the opportunities that AI technologies can offer to enhance the services that our Nation can offer the American people," said Fengfeng Ke, NSF Program Director.
The institute will work toward universal speech and language screening for children. The framework, the AI screener, will analyze video and audio streams of children during classroom interactions and assess the need for evidence-based interventions tailored to individual needs of students. The institute will serve children in need of ability-based speech and language services, advance foundational AI technologies and enhance understanding of childhood speech and language development.
Visit nsf.gov to learn more about the NSF National Artificial Intelligence Research Institutes program.
| 2023-01-09T00:00:00 |
https://www.nsf.gov/news/nsf-announces-new-ai-institute
|
[
{
"date": "2023/01/09",
"position": 93,
"query": "government AI workforce policy"
}
] |
|
Generative Ai
|
#generative ai
|
https://futurism.com
|
[] |
generative ai. Tag. Latest Stories. Creator of "Indie Band" Who Insisted It ... Applying to Jobs Has Become an AI-Powered Wasteland · AI Is Turbocharging ...
|
| 2023-01-09T00:00:00 |
https://futurism.com/tags/generative-ai
|
[
{
"date": "2023/01/09",
"position": 85,
"query": "generative AI jobs"
}
] |
|
Center for Labor and a Just Economy - Harvard Law School
|
Center for Labor and a Just Economy
|
https://clje.law.harvard.edu
|
[
"Brett Milano",
"Sharon Block",
"Raj Nayak",
"Seema Nanda",
"Braden Campbell",
"Chrissy Lynch",
"Gary Rivlin",
"Jonathan Saltzman"
] |
CLJE is a hub of collaborative research, policy, and strategies to empower working people to build an equitable economy and democracy.
|
2025 Harvard Trade Union Program Graduation
Check out coverage from the 2025 Harvard Trade Union Program graduation, at which Julie Su, former Acting Secretary of Labor, delivered commencement remarks to the 112th class of the historic program.
| 2023-01-09T00:00:00 |
https://clje.law.harvard.edu/
|
[
{
"date": "2023/01/09",
"position": 9,
"query": "AI labor union"
}
] |
|
Aleksandra Przegalinska
|
Center for Labor and a Just Economy
|
https://clje.law.harvard.edu
|
[] |
Aleksandra received her Ph.D. in the field of philosophy of artificial intelligence at the Institute of Philosophy of the University of Warsaw.
|
Aleksandra received her Ph.D. in the field of philosophy of artificial intelligence at the Institute of Philosophy of the University of Warsaw. Aleksandra is the head of the Human-Machine Interaction Research Center at Kozminski University (www.humanrace.edu.pl) and the Leader of the AI in Management Program. Until recently, she conducted post-doctoral research at the Center for Collective Intelligence at the Massachusetts Institute of Technology in Boston. She graduated from The New School for Social Research in New York. In 2021 Aleksandra joined the American Institute for Economic Research as a Visiting Research Fellow. She is interested in the future of work seen through the lens of emerging technologies, as well as in natural language processing, humanoid artificial intelligence, social robots, and wearable technologies. She is the co-author of Collaborative Society (MIT Press), published together with Dariusz Jemielniak.
| 2023-01-09T00:00:00 |
https://clje.law.harvard.edu/team/aleksandra-przegalinska/
|
[
{
"date": "2023/01/09",
"position": 36,
"query": "AI labor union"
}
] |
|
Kansas.gov: Home
|
Kansas.gov
|
https://portal.kansas.gov
|
[] |
State admitted to the Union. 0 th. Social Media Connections. Kansas State ... Kansas Service Awards. government-experience awards icon · ai-award-winner icon.
|
After putting Kansas back on track and ending her first term with the largest budget surplus in history, Governor Laura Kelly was re-elected and sworn in for a second term as the 48th Governor of the State of Kansas on January 9, 2023.
Governor Kelly is a bipartisan leader who in her first term fully funded schools, improved infrastructure and broke records for business investment and job creation. As a result, the entire state prospered.
Governor Kelly has set a North Star for her second term – to make Kansas the best place in the country to raise a family – and will continue to prioritize fiscal responsibility, affordable healthcare and early childhood development during her time in office.
| 2023-01-09T00:00:00 |
https://portal.kansas.gov/
|
[
{
"date": "2023/01/09",
"position": 94,
"query": "AI labor union"
}
] |
|
Tiger Global-Backed Scale AI Just Laid Off 20 Percent of Its ...
|
Hot AI startup Scale AI, valued at $7 billion, has laid off 20% of its staff. Insiders say employees found out when they were locked out of their computers.
|
https://www.businessinsider.com
|
[
"Samantha Stokes",
"Stephanie Palazzolo"
] |
Hot AI startup Scale AI, valued at $7 billion, has laid off 20% of its staff. Insiders say employees found out when they were locked out of their computers.
|
This story is available exclusively to Business Insider subscribers. Become an Insider and start reading now.
It may be a new year, but for the tech industry — which has long been feeling the heat in the form of decreased revenue and job cuts — it's just more of the same.
Buzzy artificial intelligence data-management startup Scale AI, which was last valued at $7 billion in 2021, laid off 20% of its workforce Monday morning, Insider has learned. The company confirmed the layoffs in a blog post.
Founded in 2016 by Alexandr Wang and Lucy Guo, Scale AI was a member of the prestigious accelerator program Y Combinator's summer 2016 cohort. The startup uses machine learning to label and categorize massive amounts of data so companies can feed this data into AI models. Scale AI's customers include Harvard Medical School, PayPal, Brex, OpenAI, and Toyota.
Scale AI achieved unicorn status in 2019 following a $100 million Series C led by Founders Fund, and in total it has raised $602.6 million from Index Ventures, Coatue, Tiger Global, Accel, Dragoneer, and other notable investors.
Scale AI's impressive rise to fame made its cofounder Wang the world's youngest self-made billionaire, according to Forbes. Wang himself became known in the Valley as "the next Zuckerberg."
Prior to the layoffs, the startup had employed around 700 people, according to Pitchbook.
Employees learned that they had been laid off when they found themselves locked out of their work computers this morning, according to one former employee who spoke to Insider and multiple LinkedIn posts authored today by former staffers. They then found messages in their personal email accounts about their jobs being cut — the former employee saw the devastating message after waking up to take care of his two-year-old son.
Employees were also told that to get access to personal documents on their work computers, they would need to notify Scale's HR team and work with the startup's IT department to recover the information before being permanently locked out, the former staffer said.
The tech industry is continuing to hurt after a brutal 2022 claimed 150,000 jobs, according to tracker site layoffs.fyi. As the pandemic tech boom has come to an end, startups are coming back to earth following a period of reckless fundraising and untenable valuations — and jobs, passion projects, and even holiday parties have all been on the chopping block.
The former Scale AI employee told Insider that he had an inkling job cuts were coming after the startup underwent a recent hiring freeze and delayed scheduling biannual reviews with staffers, which usually occur during the beginning and middle of the year.
Related stories Business Insider tells the innovative stories you want to know Business Insider tells the innovative stories you want to know
Despite the dismal broad outlook, many believed the red-hot space of AI would be immune to the tech industry's troubles, after seeing a year of eye-popping valuations like marketing startup Jasper's unicorn price tag and research lab OpenAI's $29 billion valuation, as first reported by The Wall Street Journal.
However, the job cuts at Scale AI – once a Silicon Valley darling – seem to suggest otherwise. In fact, it could be a cautionary tale for some buzzy AI startups.
Wang, who is the CEO of the company, wrote in the blog post announcing the layoffs that he was on a hiring spree given the enthusiasm for Scale AI's product.
"Over the past several years, interest from enterprises and governments in AI has grown rapidly. As a result, I made the decision to grow the team aggressively in order to take advantage of what I thought was our new normal," Wang wrote in the post.
Wang added that he failed to predict the economic downturn over multiple previous quarters, which has affected the startup's customers in the e-commerce and consumer technology spaces.
Do you have information about layoffs or other trouble at a startup? Contact the reporters Samantha Stokes ([email protected] and encrypted messaging 646-389-7866) and Stephanie Palazzolo ([email protected] and encrypted messaging 979-599-8091).
| 2023-01-09T00:00:00 |
https://www.businessinsider.com/layoffs-tiger-global-and-y-combinator-scale-ai-artificial-intelligence-2023-1
|
[
{
"date": "2023/01/09",
"position": 4,
"query": "AI layoffs"
},
{
"date": "2023/01/09",
"position": 2,
"query": "artificial intelligence layoffs"
}
] |
|
undefined | Scale
|
undefined
|
https://scale.com
|
[] |
I have made the difficult decision to reduce the size of our team by 20%, which means saying goodbye to many talented Scaliens.
|
Today I have to announce the hardest change I’ve ever had to make at Scale. I have made the difficult decision to reduce the size of our team by 20%, which means saying goodbye to many talented Scaliens. If you are among those impacted, you will be contacted shortly with further details via your personal email as well as offered time for a 1:1 conversation with a manager today.
This was not a decision made lightly, and it’s one of many steps we are taking in order to ensure Scale is operating responsibly for the long-term health and success of the business.
I know that this is tough news for everyone, especially those impacted, and you likely have many questions, the most pressing of which I aim to answer for you now:
How did we get here?
I take full responsibility for the decisions that have led us to this point. Over the past several years, interest from enterprises and governments in AI has grown rapidly. As a result, I made the decision to grow the team aggressively in order to take advantage of what I thought was our new normal.
For a time, this seemed to prove out—we saw strong sales growth through 2021 and 2022. As a result, we increased headcount assuming the massive growth would continue. However, the macro environment has changed dramatically in recent quarters, which is something I failed to predict. Many of the industries we serve, such as e-commerce and consumer technology, have been buoyed by the pandemic and are now experiencing a painful market correction. As a result, we need to prepare ourselves for a very different economic environment.
Given the uncertainty many of the industries we serve face, when I re-assessed our investment level against these market realities, it became clear that we needed to realign our investment to adjust to this new environment. While many other companies have made similarly difficult decisions recently, we spent months looking for ways to avoid it, but unfortunately we came to the conclusion that we needed to make these changes as well.
| 2023-01-09T00:00:00 |
https://scale.com/blog/company-update
|
[
{
"date": "2023/01/09",
"position": 28,
"query": "AI layoffs"
}
] |
|
Laid-Off Tech Workers Finding New Jobs Quickly
|
Laid-Off Tech Workers Finding New Jobs Quickly
|
https://www.shrm.org
|
[
"Roy Maurer"
] |
An organization run by AI is not a futuristic concept. Such technology is already a part of many workplaces and will continue to shape the labor market and HR.
|
| 2023-01-09T00:00:00 |
https://www.shrm.org/topics-tools/news/talent-acquisition/laid-tech-workers-finding-new-jobs-quickly
|
[
{
"date": "2023/01/09",
"position": 74,
"query": "AI layoffs"
}
] |
|
5 Common Business Scenarios for Artificial Intelligence
|
5 Common Business Scenarios for Artificial Intelligence
|
https://www.uipath.com
|
[
"Uipath Inc.",
"January"
] |
First, IT leaders are using AI for their internal functions to optimize IT operations, execute zero-touch service desk, etc. Second, IT teams are promoting AI ...
|
If there is one technology that most organizations include in their strategic roadmap, it has to be artificial intelligence (AI). According to Gartner, “70% of organizations will have operationalized AI architectures due to the rapid maturity of AI orchestration initiatives” by 2025. Along similar lines, Forrester predicts that "one in five organizations" will double down on AI investments.
What does that mean for business leaders? I’ll cover that in this blog post, along with common business scenarios for AI applications.
Multiple factors in the current business environment are driving organizations to expedite their plans with AI:
Business applications are migrating towards cloud, allowing businesses to seamlessly access requisite data
Ready-made machine learning (ML) models are being leveraged through low-code/no-code platforms, creating a surge in democratization
There are also advancements in complementary technologies across industries where AI promises significant benefits:
5G is disrupting the telecom sector
Internet of things (IoT) is influencing manufacturing, automobile, and oil and gas spaces
Omnichannel experiences are driving the retail and e-commerce segment
Blockchain is influencing financial services, procurement, logistics, etc.
AI isn’t a magic solution that can be deployed in the same way across businesses. Organizations need to understand the core underlying capabilities that AI can help drive.
AI applications for common business scenarios
Document interpretation
As the name suggests, in this scenario, AI assists organizations in classifying and extracting information from unstructured documents. With the evolving maturity of ML models, businesses can get significant accuracies and confidence levels while extracting data with fewer datasets. For instance, with Forms AI (available via UiPath Document Understanding), we can drag and drop just two to three invoices to train the ML model accurately.
Related article: Seamlessly Integrate Machine Learning Models into Your Business Processes
AI Computer Vision
Computer vision allows interpretation of on-screen elements with human-like recognition. This allows organizations to build vision-based automation that can run on most virtual desktop interface (VDI) environments—regardless of framework or operating system. Our AI Computer Vision enables robots to recognize and interact with on-screen fields and components.
Natural language processing (NLP)
An NLP capability helps with language detection, extracting unstructured data, and sentiment analysis. Communications mining is the application of NLP to business communications. It extracts intent data (like customer issues and reasons for contact), tone, and sentiment to drive automation and understanding in business processes.
One of the main uses of communications mining is email automation:
Extracting emails from underlying systems
Classifying based on the target scenarios
Extracting information from the respective email (unstructured)
Processing the information as per the requirement (such as creating a ticket in ServiceNow)
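The four steps above can be sketched as a toy pipeline. Everything here (the keyword rules, function names, and ticket format) is a hypothetical illustration, not a UiPath or ServiceNow API; a real communications-mining system would use trained NLP models rather than keyword matching.

```python
import re

def classify_email(body: str) -> str:
    """Naive keyword-based classifier standing in for an NLP model."""
    lowered = body.lower()
    if "invoice" in lowered or "payment" in lowered:
        return "billing"
    if "password" in lowered or "login" in lowered:
        return "access_request"
    return "general"

def extract_fields(body: str) -> dict:
    """Pull a crude 'order id' out of unstructured text, if present."""
    match = re.search(r"order\s+#?(\d+)", body, re.IGNORECASE)
    return {"order_id": match.group(1) if match else None}

def route_email(body: str) -> dict:
    """Classify, extract, and build a ticket payload (e.g. for a ticketing system)."""
    category = classify_email(body)
    fields = extract_fields(body)
    return {"category": category, **fields}

ticket = route_email("Please help, my invoice for order #4521 is wrong.")
print(ticket)  # {'category': 'billing', 'order_id': '4521'}
```

The pipeline shape (classify, then extract, then act) is the part that carries over to real deployments; the rules themselves would be learned, not hand-written.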
Scientific discovery: process mining and task mining
UiPath Process Mining, equipped with ML models, allows organizations to discover bottlenecks in business processes using digital footprints such as transactional logs of various applications or systems. Both Process Mining and UiPath Task Mining are conveniently available within the same platform: UiPath Business Automation Platform. Task Mining allows organizations to identify different paths employees take to perform the same task by interpreting data (collected across multiple agents over a defined period of time).
Predictive analytics
Now, with access to historical data, ML models empower businesses to make more educated decisions. Businesses are using this capability to better forecast demand, provide personalized offerings, predict network outages, prevent fraudulent transactions, and more.
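As a rough illustration of the idea, a demand forecast can be as simple as fitting a linear trend to past observations. Production systems use far richer ML models; treat this as a sketch of "learn from historical data, project forward" only, with invented numbers.

```python
def fit_trend(history: list[float]) -> tuple[float, float]:
    """Least-squares slope and intercept for evenly spaced observations."""
    n = len(history)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(history) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, history)) \
        / sum((x - mean_x) ** 2 for x in xs)
    intercept = mean_y - slope * mean_x
    return slope, intercept

def forecast_next(history: list[float]) -> float:
    """Project one period past the end of the observed series."""
    slope, intercept = fit_trend(history)
    return intercept + slope * len(history)

demand = [100.0, 110.0, 120.0, 130.0]  # toy historical demand, perfectly linear
print(forecast_next(demand))           # 140.0
```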
Get the white paper "How RPA Analytics Drive Better Business Outcomes."
Industry relevance
There is potentially no industry that hasn’t been touched by AI. With a wide range of proofs of concept (PoCs) and industry-specific ML models offered in plug-and-play mode, businesses have doubled down on their AI investments.
Banking and financial services
Forrester predicted that AI would be one of the top technologies that would “win over banks in 2022.” “Interest in AI, microservices, and analytics remains high . . . Overall investment levels vary, but particularly, budgets for AI/machine learning are high.”
Pharmaceuticals
As per a McKinsey report, the use of AI technologies improves decision making, optimizes innovation, improves efficiency of research and clinical trials, and creates beneficial new tools for physicians, consumers, insurers, and regulators.
Telecommunications
“The global AI in telecommunication market size” is projected to grow “at a CAGR [compound annual growth rate] of 42.6% during 2021-2027.” This is widely influenced by service providers adopting 5G technology, resulting in net new use cases for both business-to-business (B2B) and business-to-consumer (B2C) segments.
IT
Chief information officers (CIOs) are being recognized as the torch bearers driving AI adoption at scale across businesses. When it comes to IT, there are two aspects to look at. First, IT leaders are using AI for their internal functions to optimize IT operations, execute zero-touch service desk, etc. Second, IT teams are promoting AI best practices to businesses, driving adoption with minimal risks. Accordingly, CIOs are taking the lead to drive transformation and business outcomes.
Human resources (HR)
According to a 2020 Gartner article, “17% of organizations are already leveraging AI-based solutions within their HR function.” “HR leaders are citing cost savings, accurate data-driven decision making and improved employee experience as the top reasons to deploy AI.”
Get the e-book, “How automation helps HR make work worthwhile for humans.”
The table below lists key areas for AI intervention across various industries (though not an exhaustive list):
Scaling AI with UiPath
As a strong proponent of AI, we’ve embedded AI across the UiPath Platform. As a result, businesses can remain focused on problems and objectives without being stressed about technology adoption.
We discussed earlier that UiPath Process Mining and Task Mining leverage AI capabilities to drive scientific discovery for organizations to identify process bottlenecks. And multiple tools within the UiPath Business Automation Platform (UiPath Studio, StudioX, AI Center, and Document Understanding) leverage AI to create automation solutions while dealing with unstructured data and documents.
Find out other common business scenarios that benefit from AI. Register now for access to all UiPath AI Summit sessions. Or sign up for a free trial to try out UiPath AI capabilities for your business scenarios.
| 2023-01-09T00:00:00 |
https://www.uipath.com/blog/ai/5-common-artificial-intelligence-business-scenarios
|
[
{
"date": "2023/01/09",
"position": 42,
"query": "artificial intelligence business leaders"
}
] |
|
Artificial intelligence: 3 trends to watch in 2023
|
Artificial intelligence: 3 trends to watch in 2023
|
https://enterprisersproject.com
|
[
"Yishay Carmiel",
"January"
] |
AI is becoming a fundamental differentiator for business. If you can't find deeper insights in data, quickly and at scale, your competitors will. There is far ...
|
The artificial intelligence (AI) market has been on a swift growth path for several years – so much so that the industry is expected to reach $42.4 billion in 2023. This momentum will continue, and we’re starting to realize it with the debut of powerful new AI-powered tools and services across industries.
There has been a shift from the well-understood role of AI in analysis and prediction – helping data scientists and enterprises make sense of the world and chart their courses accordingly – to new and innovative systems, like DALL-E, that are producing entirely new artifacts that have never been seen before.
But what’s driving this exponential growth, and how will it affect the space in the coming year? Here are three key AI trends that will take shape in 2023:
1. AI democratization will continue
AI is becoming a fundamental differentiator for business. If you can’t find deeper insights in data, quickly and at scale, your competitors will. There is far less supply than demand, and top engineering and data science talent will remain extremely expensive. As a result, more AI consultants and greater availability of low- and no-code features will become differentiators. This democratization of AI will help simplify the adoption of these technologies in all vertical markets by those with varying levels of experience.
[ Also read Responsible AI by design: Building a framework of trust. ]
Additionally, cloud vendors will increasingly combine their services building blocks to include AI, leading to powerful, widely available features and solutions. This is important for two reasons:
Whether they know it or not, more people will be using AI than ever, putting it in the hands of the masses.
We’re starting to realize the bottom-line business drivers of AI, which will trickle down from the aforementioned major cloud vendors to smaller tech players, leading to even greater AI adoption.
2. Generative AI will become commercialized
Generative AI is having a moment, and we’ll start to see many more products and services come to market in 2023. This area is exciting because many largely untapped but valuable use cases exist.
One particularly bright spot is generative AI-powered language applications. In gaming, for example, a user can opt to sound like their on-screen character. In a virtual meeting, a person with a cold can make their voice easier to understand, enabling them to focus on their work contributions rather than potential misunderstandings.
Unlike AI-generated imagery, which has recently gained a lot of attention, business use cases are lacking. Speech-to-speech (S2S) technology, on the other hand, has the potential to change the way we work. For customer service, this can be a game-changer. For example, contact center agents can use generative AI to clearly understand callers from anywhere in the world, helping them resolve problems faster and feel more empowered.
3. AI ethics will become a top priority
Despite its proven value and great potential, AI still has complex legal and ethical issues. The severity varies – new implications can range from negative to dangerous. From deep fakes to biased algorithms to models that have degraded over time, these are all scary reminders that regulatory frameworks must adapt to the fast-evolving AI market. And while regulatory and legal frameworks are currently in the works, with an AI Bill of Rights in the near future, businesses must approach AI safely and ethically.
The first class-action lawsuit in the US against an AI system was recently filed, and it won’t be the last. Technology may be leaps ahead of the legal industry, but as AI embeds itself into our everyday lives, companies and governments must get serious about safe and responsible practices. We will also see more transparency around cases like this and learn how to avoid these missteps for future deployments.
[ Learn the non-negotiable skills, technologies, and processes CIOs are leaning on to build resilience and agility in this HBR Analytic Services report: Pillars of resilient digital transformation: How CIOs are driving organizational agility. ]
Although we’ve been saying it for a decade, 2023 will be another high-growth year for AI. The commercialization of new products and features, strides in access and affordability, and a focus on responsible practices will open up disruptive use cases for the enterprise and beyond.
It’s an exciting time to be in the AI space, and it will be interesting to see how the industry progresses over the next 12 months.
[ Check out our primer on 10 key artificial intelligence terms for IT and business leaders: Cheat sheet: AI glossary. ]
| 2023-01-09T00:00:00 |
https://enterprisersproject.com/article/2023/1/artificial-intelligence-3-trends-watch-2023
|
[
{
"date": "2023/01/09",
"position": 58,
"query": "artificial intelligence business leaders"
}
] |
|
Professional Certificate in Computer Science for Artificial ...
|
Professional Certificate in Computer Science for Artificial Intelligence
|
https://www.harvardonline.harvard.edu
|
[] |
The demand for expertise in AI and machine learning is growing rapidly. By enabling new technologies like self-driving cars and recommendation systems or ...
|
This professional certificate series combines CS50’s legendary Introduction to Computer Science course with a new program that takes a deep dive into the concepts and algorithms at the foundation of modern artificial intelligence. The series leads you through the most popular undergraduate course at Harvard, where you’ll learn common programming languages, and then carries that foundation through CS50’s Introduction to Artificial Intelligence with Python. Through hands-on projects, you’ll gain exposure to the theory behind graph search algorithms, classification, optimization, reinforcement learning, and other topics in artificial intelligence.
By course’s end, students emerge with experience in libraries for machine learning as well as knowledge of artificial intelligence principles that enable them to design intelligent systems of their own. Enroll now to gain expertise in one of the fastest-growing domains of computer science from the creators of one of the most popular computer science courses ever.
| 2023-01-09T00:00:00 |
https://www.harvardonline.harvard.edu/course/professional-certificate-computer-science-artificial-intelligence
|
[
{
"date": "2023/01/09",
"position": 90,
"query": "artificial intelligence business leaders"
}
] |
|
A walk in deep dreams: designing type with artificial ...
|
A walk in deep dreams: designing type with artificial intelligence.
|
https://zetafonts.com
|
[
"Cosimo Lorenzo Pancini"
] |
A visual essay dedicated to our research on neural network generated typography, with the title “Our Mistaken Futures”.
|
Posted by Cosimo Lorenzo Pancini in Case studies, News on Nov/2022
this blog post is typeset in blacker-mono font family
In early summer 2022, two neural-network-based image generation tools were released to the public in open beta: Dall-E by OpenAI and Midjourney AI. We had long been interested in artificial intelligence and creativity assisted by neural networks, and were already admirers of their “unreasonable effectiveness“. But these new models far outperformed not only the older ones, but even our best expectations for the quality of imagery and design that neural networks could generate.
While everybody was hooked on the capacity of these models to produce beautiful sci-fi concept art or realistic photographic imagery from a simple text prompt, we were far more excited to use the tools to explore abstract design spaces, looking for new typographic inventions and visual remixes of historical or contemporary styles. In the same period we were invited to design the catalog of the OFFF TLV festival, celebrating creative mistakes, and we decided that our contribution to the book would be a visual essay dedicated to our research on neural-network-generated typography, titled “Our Mistaken Futures”.
In a few days we generated thousands of images, from which we selected the ones that could suggest interesting creative paths to follow. We asked the machine to generate title sequences for imaginary movies by Stanley Kubrick, Federico Fellini and Quentin Tarantino, ghost typography, fictitious specimen pages, portraits of historical typographers, posters in any style ranging from Bauhaus to 80’s style. We selected the best results and used some of them as the base for the development of the first two typefaces of an ongoing collection project, entitled Deep Dream. And we used these fonts to typeset poetry written by another neural network model, GPT3, which can generate surprisingly coherent text.
In all these experiments, we found that the AI systems we were using were at once wonderfully advanced and effective, and still primitive and faulty. They shared the ambitions and the limitations of any emerging technology – producing messy results that were far more interesting for their promises than for the quality of their output. These images, the first ever dreamt by a computer that had been fed the whole visual knowledge of humankind, have the same powerful vibe as 8-bit videogames, analog synthesizers and steam-powered machines, and as such are perfect tools to illustrate the links between mistakes and creativity.
Still, after the initial enthusiasm, we soon realized the many complex implications of the technology. First and foremost are the doubts about authorship and intellectual property, given the way in which these technologies appropriate, remix and regurgitate existing artistic styles, both historical and distinctly personal. Who is the author of these images? While it feels incredibly powerful to summon detailed visual inventions with a few words, one can’t deny it feels a little bit like cheating. More than creating, it feels like fishing: the prompt is like a bait, and you never know exactly what you will get.
Still, Deep Learning is here to stay. And it’s incredibly fun to use – it feels a bit like having the super-talented artist friend you ask to draw weird things just for the sheer pleasure of seeing drawings appear magically on paper. Therefore we decided to try to use Midjourney AI and Dall-E as sparring partners, using the ambiguous, liquid letterforms produced by these still-primitive generative models as a base for new typeface families.
The starting point for the first experiment was an image realized by Midjourney AI as part of a series on the prompt “typographic poster for an eighties movie”. Midjourney AI (then in V2) had answered with an appropriate bonanza of hyper saturated colors, triangular shapes, and pixelated typography. Francesco Canovaro started from this input, tracing the original “eighties movie” logo and trying to create a coherent set of glyphs around it.
Like in any typeface development project, the challenge is to identify the typeface “DNA” so that it can be shared by all letterforms in a coherent way. In this case, Francesco focused on the idea of having a “quasi-bitmap” font, with the regular rhythm of pixel fonts continuously challenged by unexpected artifacts and weird ligatures implied by the neural network’s interpretation of pixel letterforms. The “it” ligature, especially, has been kept straight out of the original image and – since it’s quite a common combination of characters in languages using latin glyphs, it works perfectly in the final typeface.
Another nice characteristic of Midjourney typography is the use of “monocase”, the effect of alternating upper- and lower-case letterforms in the same font. This, again is something that Canovaro kept in the typeface, thanks to a generous use of scripted alternate forms, trying to give the final typeface the dreamy dynamics of the original font.
For the second typeface, Midjourney Zero, Cosimo Lorenzo Pancini again started from a Midjourney image, this time selected from a batch of images generated from the prompt “black and white typographic specimen”. Once again, the challenge was one of giving meaning to the mindless output of the neural network by treating it as if it were a coherent input from a creative collaborator. In this case Pancini focused on the way that neural networks had interpreted the rhythm of geometric sans, again inserting noise and unusual forms into an otherwise standard construction.
The key signature glyph here was the lowercase a in the third row of the original image (or, to be more precise, the formless blob in the original image that Pancini interpreted as a lowercase a). In bold geometric sans it is always very difficult to balance the black in double-story characters like “a” and “g”, and this is why the single-story alternate is often preferred. Here, the idea is to minimize the counter space rather than the lines, introducing something that was repeated also in the lowercase e.
These are only the first steps in our research, and we are hoping to expand the Deep Dream collection in the near future.
| 2023-01-09T00:00:00 |
https://zetafonts.com/blog/a-walk-in-deep-dreams-designing-type-with-artificial-intelligence/
|
[
{
"date": "2023/01/09",
"position": 93,
"query": "artificial intelligence graphic design"
}
] |
|
Automation and AI: The Future of Work Is Here - ALIS
|
Automation and AI: The Future of Work Is Here
|
https://alis.alberta.ca
|
[] |
Automation and AI could bring great opportunities for workers who move in the right directions. Robotics, automation, and AI may create jobs we can't even ...
|
Whether it’s computerized processes replacing workers (automation) or machines performing complex tasks and making decisions (artificial intelligence, or AI), the way we work is changing dramatically.
Nothing illustrates that point better than machine learning—the process where a machine learns for itself based on the data it’s given. Machine learning is already evolving faster than human intelligence. It is expected to someday be able to handle the type of complex tasks we assume only people can do.
In the past, people have made dire predictions about how automation and AI will steal jobs. But it is now becoming clear that AI will actually create more jobs than it takes over.
Are automation and AI anything to worry about? Could they help your career? How do you keep your skills up to date in this new technological era?
Automation and AI are evolving in surprising ways
Some of the biggest concerns around automation and AI involve how many jobs could be eliminated and what will happen to those workers.
As AI picks up speed, the number and variety of things it can handle is growing quickly. Consider the range of processes that have already been automated, like various assembly line functions, and the range of products on the horizon, such as self-driving cars and trucks. With companies in Alberta road-testing driverless trucks, some people predict that workers could eventually be displaced.
But automation and AI won’t just take people’s jobs. They will also create new jobs to help maintain the new products and processes that come along. For example, someone will have to design and sell driverless trucks, and the trucks will need upgrades and ongoing maintenance. There may still be a need for an operator in case of a malfunction.
What careers and industries will be affected?
What can you do to prepare yourself for the challenges and opportunities that AI brings?
A report from Deloitte [pdf] looked at the careers that will likely involve humans for the foreseeable future. It said that specific skills—such as auditing, auto mechanics, or computer coding—can quickly become outdated.
It might be smarter over the long term for Canadians to become expert in skills they can transfer between many jobs, and where AI and robots will have trouble competing. These include skills such as collaboration, adaptability, and conceptual thinking, which will always be an advantage for humans over machines.
Careers that have a lower risk of becoming fully automated include:
Automation and AI can improve how we work
As automation and AI gain momentum, some surprising opportunities are becoming clearer. Creative work is one area where AI and humans are already teaming up in new and exciting ways. Here are some examples:
Video game design. If a game designer is having trouble starting a project, AI can take the first step. The designer provides creative feedback, then the AI learns from the feedback and works together with the designer to improve the design process.
If a game designer is having trouble starting a project, AI can take the first step. The designer provides creative feedback, then the AI learns from the feedback and works together with the designer to improve the design process. Advertising. An AI system selects images for an advertising campaign. Then a graphic designer makes sure the images are right. The designer’s feedback helps the AI learn and improve.
An AI system selects images for an advertising campaign. Then a graphic designer makes sure the images are right. The designer’s feedback helps the AI learn and improve. Article writing. AI takes on the tasks of basic research and preparing a new document. But it needs a qualified writer or editor to check the article, refocus it, and improve how the words flow. Humans will be needed for this type of creative task well into the future.
AI can also help in professions where gaps in skills exist:
Radiology. Our health-care system doesn’t have enough radiologists. We use AI to review medical images and determine which ones need to be seen by an expert radiologist. This frees radiologists from looking at simple cases and lets them focus on more complex ones.
Our health-care system doesn’t have enough radiologists. We use AI to review medical images and determine which ones need to be seen by an expert radiologist. This frees radiologists from looking at simple cases and lets them focus on more complex ones. Water treatment. The Alberta Machine Intelligence Institute works with staff at water treatment plants, helping humans get more information from their systems. For example, it helps them decide whether to treat water right away or delay. This makes the system more efficient and cuts costs.
One thing to consider is that when automation of any kind is introduced in an industry, costs go down and demand for goods and services increases. Increased demand leads to more work. While the future of automation and AI is still uncertain, computers and humans will continue for many years to have different but complementary strengths.
How can AI affect fairness and equity?
Many people assume that machines think in a way that is objective. But humans design AI systems. That means that our biases become part of how AI systems operate.
We have seen many troubling outcomes of human bias in AI:
Hiring. Recruiting systems have been notoriously biased against women. When algorithms are based on past resumés and previous candidates have been mostly men, the algorithm favours men over women.
Recruiting systems have been notoriously biased against women. When algorithms are based on past resumés and previous candidates have been mostly men, the algorithm favours men over women. Justice system. Algorithms that are used to predict which criminal offenders are most likely to reoffend have been biased against people of colour.
Algorithms that are used to predict which criminal offenders are most likely to reoffend have been biased against people of colour. Policing. AI that is used to predict where crimes are likely to take place have been biased against neighbourhoods where people of colour live. Whether or not certain neighbourhoods actually have higher crime rates, AI has typically sent police to these neighbourhoods.
A University of Toronto team researching AI bias reported that nearly every AI system it tested had significant bias. It said that industries writing algorithms must make sure their products are bias free by removing bias from the data set they use to train AI.
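A toy example of how that skew propagates: a naive "model" that scores candidates by historical hire rates simply reproduces the imbalance in its training data. The numbers and labels below are invented for illustration; real hiring algorithms are far more complex, but the mechanism (biased history in, biased predictions out) is the same.

```python
from collections import Counter

# Hypothetical historical hiring decisions: 90% men, 10% women.
past_hires = ["m"] * 90 + ["f"] * 10

def hire_rate_by_group(history: list[str]) -> dict[str, float]:
    """A naive 'model': score each group by its share of past hires."""
    counts = Counter(history)
    total = len(history)
    return {group: counts[group] / total for group in counts}

scores = hire_rate_by_group(past_hires)
print(scores)  # {'m': 0.9, 'f': 0.1} -- the model inherits the skew
```

Debiasing, as the researchers note, has to happen in the data (or via explicit constraints), because nothing in the learning step itself corrects the imbalance.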
Society is making progress on eliminating bias in AI. The not-for-profit Responsible AI Institute certifies AI products and systems and supports organizations building AI as they deal with these issues.
How can you prepare?
Get a jump on working together with machines and AI. What to do depends on your job and career stage. Here are a few of the many options to consider:
Further your education and engage in continuous learning. You could consider staying in school, returning to school, or taking part-time training. For example, a piping welder could train to become a robotics welder.
Prioritize work that requires intuition, empathy, creativity, or hands-on skills. AI will mainly take over routine tasks, freeing workers to focus where only a human could make a difference. Some examples are marriage counsellors, massage therapists, landscapers, and teachers.
Think about how you could change your job to make the human aspect more central. Businesses will always need strategic thinkers. For example, some accounting tasks may be automated, but financial planners will still do analysis. Robots may replace certain oil and gas jobs, but people will be needed for exploration and production.
To maximize your potential as AI takes greater hold in society and the workplace, you should:
Automation and AI could bring great opportunities for workers who move in the right directions. Robotics, automation, and AI may create jobs we can’t even think of yet. Take steps now to make sure you’re ready for these exciting changes.
| 2023-01-10T00:00:00 |
https://alis.alberta.ca/plan-your-career/workplace-trends/artificial-intelligence-ai/automation-and-ai-the-future-of-work-is-here/
|
[
{
"date": "2023/01/10",
"position": 16,
"query": "AI replacing workers"
},
{
"date": "2023/01/10",
"position": 25,
"query": "machine learning job market"
},
{
"date": "2023/01/10",
"position": 12,
"query": "AI job creation vs elimination"
},
{
"date": "2023/01/10",
"position": 19,
"query": "future of work AI"
},
{
"date": "2023/01/10",
"position": 39,
"query": "government AI workforce policy"
},
{
"date": "2023/01/10",
"position": 28,
"query": "artificial intelligence wages"
}
] |
|
AI's Role in Enhancing Human Creativity: A Continuing Debate
|
AI’s Role in Enhancing Human Creativity: A Continuing Debate
|
https://spartanshield.org
|
[
"Kushi Maridu"
] |
As advances in artificial intelligence (AI) continue to accelerate, there has been much debate about the potential for machines to replace human workers, ...
|
As advances in artificial intelligence (AI) continue to accelerate, there has been much debate about the potential for machines to replace human workers, including in fields that require creativity. While it is true that AI can perform certain tasks faster and more efficiently than humans, there is a growing consensus among experts that AI cannot fully replace human creativity.
AI is rapidly becoming much more advanced, allowing it to be used in a variety of fields: art, writing and science to name a few. While AI has the potential to greatly enhance our creative abilities, it will never outshine human creativity.
OpenAI recently launched a chatbot called ChatGPT. This new AI has many abilities: conversing with the user, having the ability to recall previous questions and answer follow-up questions, fixing and admitting its mistakes and rejecting inappropriate prompts.
Additionally, it is capable of creating poems in a specific style, writing and correcting code and writing original scripts for movies from a basic prompt.
ChatGPT is advantageous because it’s a generative AI, meaning it can create responses from scratch. Its knowledge is based on texts and information from up until 2021, which means that it will not be able to provide accurate or relevant responses to queries about more recent events or developments.
Although it has ample knowledge, ChatGPT also has a relatively high rate of error, meaning one should double-check the answers it gives. Even so, thanks to its versatility, many students have been using it to assist with their homework, and sometimes even to do it for them.
A PV student has chosen to stay anonymous to talk about using ChatGPT. “I’ve been using it to help me with my math and english. It provides a step-by-step explanation for math problems, helping me learn. I always double-check the answers because it’s wrong sometimes. And for English, it helps me generate ideas but it’s really not good enough to write whole essays on its own,” they shared.
In fact, the first graf of this article was written by ChatGPT by inputting the prompt, “I’m writing an article for a school newspaper about how AI can’t replace human creativity. Write an introduction for it.”
Because the results sound nearly human-made, it is tough for teachers to distinguish students’ writing from AI-written work.
AP Literature teacher Robyn Samuelson frets about students using this bot. “I’m really worried students will use this to replace critical thinking. Most of the English curriculum is structured so students can think and connect the dots on their own. However, with ChatGPT, students just skip that step,” Samuelson shared.
Samuelson is also interested in using it to better the classroom. "I've also been thinking of ways to incorporate ChatGPT into our curriculum so that students can learn about it and know when it is appropriate to use it. I've been playing with it a lot and am intrigued by what else we can use this for," Samuelson continued.
In addition to writing, another one of AI’s endeavors is art.
OpenAI also released a tool called DALL-E. This is an AI that makes art based on the prompt and style given by the user. This tool has gained popularity due to its accurate and realistic art pieces; AI-generated art has even won an art competition.
The problem with this tool is that it might be stealing art without giving credit.
The AI draws inspiration from a large database of art created by numerous artists. Subsequently, the original artists do not receive credit for their pieces. Some mangled signatures even make it to the final product, especially on a popular AI Art app called Lensa AI.
Lensa rose to popularity after a wave of content creators posted AI art pictures of themselves on social media. To use the app, one feeds selfies into the AI and must pay an annual fee of $29.99. The app then uses those selfies to make art about them in a way that the user selects.
There is no doubt these images look original and creative, but where are they really coming from?
We don’t know what databases are specifically being used for DALL-E and Lensa AI, but these databases definitely contain art from real people who aren’t being given credit. Because AI is relatively new, there has not yet been a precedent set regarding copyright laws or what is considered ‘stealing.’
Although AI itself can never be ‘creative’ in the way that humans are, it can provide creative insight by analyzing vast amounts of data, providing humans with new angles on old problems. AI is revolutionizing the creative process and allowing humans to work more efficiently and effectively.
Recent advancements in AI technology have instigated a surge of interest in its impact on creativity.
AI can be used to create high-level works of art with complexity beyond the capabilities of any sole human endeavor.
Sophomore Tanya Rastogi is a local award-winning artist. “It’s too early to predict the long-term effects of AI on the field of visual art and design, but I do have some predictions. Since the appeal of AI art is primarily aesthetic, jobs involving ‘practical’ art may soon be at risk. Fine Art, such as that required for exhibitions, character design and children’s books, will probably stay alive for longer since they require a human creative element. History will repeat itself with a situation similar to the introduction of photography: people will create new forms of visual expression that cannot be generated,” Rastogi shared.
This symbiotic relationship between technology and human endeavor suggests that rather than replacing human creativity, AI can be used as an aid to allow creative processes to reach higher levels of innovation and expression. AI can also be utilized to optimize workflow processes, removing the need for mundane or potentially unsafe tasks to be performed by people.
But no matter how advanced AI may become, it can never outmatch human creativity. Human creativity relies on imagination, original thinking and emotion – all things that no AI technology can replace.
In the end, AI is just a tool. Human creativity will continue to remain essential in the arts, literature and other creative industries.
| 2023-01-10T00:00:00 |
https://spartanshield.org/36580/opinion/ais-role-in-enhancing-human-creativity-a-continuing-debate/
|
[
{
"date": "2023/01/10",
"position": 44,
"query": "AI replacing workers"
}
] |
|
The Inevitable Future Is Now - The Rise Of AI And Removing ...
|
The Inevitable Future Is Now - The Rise Of AI And Removing Humans From Workflows
|
https://pajuhaan.com
|
[] |
The rise of artificial intelligence (AI) has been a hot topic in recent years, with many experts predicting that it will soon replace humans in the ...
|
How Artificial Intelligence Is Slowly Phasing Out Humans From Businesses In The Coming Year
The rise of artificial intelligence (AI) has been a hot topic in recent years, with many experts predicting that it will soon replace humans in the workforce. While some people are excited about the potential benefits of AI, such as increased efficiency and productivity, others are concerned about the potential consequences of removing humans from the workforce.
Impressive Statistics Showing the Growth and Adoption of Artificial Intelligence
The global artificial intelligence (AI) market is projected to reach a staggering $190.61 billion by 2025, growing at a compound annual growth rate of 36.62% during the forecast period (2020-2025). Furthermore, 37% of companies are already using AI to automate tasks, and 27% are using it for predictive analytics.

Many businesses also believe that AI will have a significant impact on their operations in the next five years, with 40% anticipating significant changes. Additionally, 65% of businesses believe that AI will create more jobs than it eliminates. In the healthcare industry, the adoption of AI is expected to save an estimated $150 billion per year by 2026. And in the financial services industry, the use of AI is expected to save an estimated $447 billion per year by 2023. These impressive statistics highlight the rapid growth and adoption of AI in various industries, and the potential benefits it offers in terms of cost savings and increased efficiency.
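For readers unfamiliar with CAGR figures, the projection above follows the standard compound-growth formula. The snippet below back-solves the implied 2020 market size from the article's stated 36.62% rate and $190.61 billion endpoint; that base value is our own back-calculation, not a figure from the article.

```python
# Compound annual growth: value_end = value_start * (1 + rate) ** years.
# Back-solving from the article's figures (CAGR 36.62%, $190.61B in 2025,
# 2020-2025 window) gives the implied 2020 market size.

def compound_growth(start_value, annual_rate, years):
    """Future value after compounding `annual_rate` for `years` periods."""
    return start_value * (1 + annual_rate) ** years

implied_2020_base = 190.61 / (1 + 0.3662) ** 5  # back-calculated, roughly $40B
print(round(implied_2020_base, 2))

# Growing that base forward at the same rate reproduces the projection.
print(round(compound_growth(implied_2020_base, 0.3662, 5), 2))  # 190.61
```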
Days' Tasks Done In Seconds
One of the main reasons why AI is expected to replace humans in the workforce is its ability to automate many tasks that are currently performed by people. For example, AI algorithms can be trained to perform tasks such as data analysis, customer service, and even creative tasks such as writing and design. This means that companies will be able to save a significant amount of money by replacing human workers with AI systems.
Fast To Learn, Fast To Adapt
Another reason why AI is expected to replace humans in the workforce is its ability to learn and adapt. Unlike humans, AI systems can be trained to constantly improve their performance by learning from data and experiences. This means that AI systems will be able to perform tasks more accurately and efficiently than humans, making them a more attractive option for companies.
The Dark Side of AI
Job Loss and Unemployment in the Age of Artificial Intelligence! Really?
Despite these benefits, the rise of AI in the workforce has also raised concerns about job loss and unemployment. As AI systems replace human workers, many people will be left without jobs, leading to increased unemployment and potentially even social unrest. This is a particularly pressing concern in industries such as manufacturing and transportation, where a large number of jobs are expected to be replaced by AI in the coming years.
In order to address these concerns, some experts have suggested that we need to invest in education and training programs that will help people learn the skills necessary to work with AI. By providing people with the knowledge and tools they need to work with AI, we can ensure that they are able to adapt to the changing workforce and find new employment opportunities.
The Four Generations of Business Tools by Business OS: From Flexible Framework to AI-Powered Interface
Selldone Business OS has come a long way in its journey to change the landscape of businesses and eliminate the cost of running them. The first generation of Selldone focused on creating a flexible and powerful framework that could handle a wide range of business needs. This foundation allowed Selldone to build on its capabilities in subsequent generations.
The second generation of Selldone added commerce functionality layers with a visual approach. This included features such as payments, shipping, marketplace builder, fulfillment center, and more. These tools allowed businesses to easily manage their sales and operations, making it easier for them to grow and succeed.
But Selldone didn't stop there. In the third generation, Selldone will take things even further by removing the user interface (UI) entirely and replacing it with a conversational interface (CI) for managing businesses on Selldone by AI. This revolutionary change will allow businesses to manage their operations using natural language, making it even easier and more efficient to run their businesses on Selldone.
Eliminating the Cost of Support, Design, and Writing
AI can be a powerful tool for businesses, and SD Business OS provides a suite of AI-powered features that can help companies save time and money. With Selldone, businesses can use AI to eliminate the cost of support by automating customer service tasks and responding to common customer inquiries. AI can also be used to manage community forums and social media accounts, freeing up time and resources for other tasks. In addition, Selldone Business OS's AI-powered design tools can help businesses create professional-looking pages and images, even if they lack design experience. And with AI-powered writing tools, businesses can easily generate high-quality blogs and product descriptions without the need for expensive writers. Overall, Business OS's AI features can help businesses save time and money, allowing them to focus on what matters most: growing their business.
AI can be a valuable tool for ecommerce businesses, helping them to automate tasks, improve their operations, and maximize their key performance indicators (KPIs). Selldone Business OS offers a suite of AI-powered features that can help businesses optimize their costs and improve their decision-making. With Selldone, businesses can use AI to automate campaign optimization, allowing them to target the right customers with the right offers at the right time. This can help businesses to maximize the effectiveness of their marketing efforts and drive more sales.
In addition, Selldone's AI-powered website personalization features can help businesses create customized experiences for their customers. By using AI to analyze customer data and behavior, businesses can tailor their website content and offers to each individual customer, providing a more personalized and engaging experience.
Finally, Selldone's AI engine can help businesses to drive their resources more efficiently. By using AI to analyze data and identify areas for improvement, businesses can make better decisions about how to allocate their resources and maximize their KPIs. Overall, Selldone Business OS's AI features can help businesses to automate their operations, improve their decision-making, and drive better results. By using AI to optimize their costs and maximize their KPIs, businesses can increase their competitiveness and drive growth.
Solving the Labor Shortage in the US with AI-Powered Tools
One potential solution to the labor shortage in the United States is the use of AI-powered tools such as Selldone Business OS. Selldone offers a suite of AI-powered features that can help businesses automate tasks and improve their operations, freeing up time and resources for other tasks. By using AI to handle tasks such as customer service, community management, design, and writing, businesses can reduce their reliance on human labor and address the labor shortage.
In addition to helping businesses address the labor shortage, Selldone's AI-powered features can also help them improve their operations and drive better results. For example, Selldone's AI-powered website personalization features can help businesses create customized experiences for their customers, improving customer satisfaction and loyalty. And with AI-powered marketing and analytics tools, businesses can make more informed decisions and optimize their costs and performance.
The Inevitable Future Is Now
As an expert in the field of artificial intelligence, I can say that the rise of AI in the workforce is a complex issue that will have significant implications for both companies and individuals. While AI has the potential to bring many benefits, such as increased efficiency and productivity, it is important that we also consider the potential consequences and take steps to address them. By investing in education and training programs, we can ensure that the rise of AI in the workforce is a positive development for everyone involved. It is crucial that we approach this issue with a clear understanding of the potential benefits and drawbacks of AI in the workplace, and take steps to mitigate any negative effects while maximizing the potential advantages.
| 2023-01-10T00:00:00 |
https://pajuhaan.com/blog/the-inevitable-future-is-now-the-rise-of-ai-and-removing-humans-from-workflows-3568/
|
[
{
"date": "2023/01/10",
"position": 70,
"query": "AI replacing workers"
}
] |
|
Technology in the public sector and the future ...
|
Technology in the public sector and the future of government work
|
https://laborcenter.berkeley.edu
|
[] |
Advanced technologies—algorithms, artificial intelligence, robotic process automation—have begun to change some public jobs significantly, either augmenting or ...
|
Executive Summary
More than 20 million people—about 15% of the United States workforce—work for a local, state, or federal government entity. A majority of these work in local government (e.g., schools, police and fire departments, county social service agencies), about a third in state government (e.g., universities, tax bureaus, state hospitals), and the remainder in federal government (e.g., post offices, national parks). Millions more work for private employers who receive most or all of their funding from public contracts or grants.
With the exception of the military, government has been generally slower to adopt technology than the private sector. Reasons for this include lack of funding, higher public scrutiny, complex contracting processes, lack of internal IT capacity, and agency fragmentation. The slow pace of technology adoption in some cases has led to both costly and cumbersome service provision; the vision of digital government outlined by federal policymakers in the 1990s has yet to be realized. Greater use of technology by governments holds a lot of promise for both workers and the public: it can remove some of the time-consuming and glitchy processes that frustrate everyone, allow workers to focus on the complexity inherent in providing public services, make government more accessible to more people, and get assistance more quickly into the hands of people who need it.
But there are reasons to be attentive to how technologies are rolled out, especially as the recent jump in technology funding opens up the floodgates of consultants and contractors pitching their products. Technology cannot be used to paper over the lack of investment in the public sector that has characterized the past two decades. In fact, technology presents the greatest risk when it’s simply layered on top of already overwhelmed workers and processes, because there is no capacity built in for evaluation and recalibration to ensure that the technology is working as intended. Within the public sector there is enormous variation in size, resource capacity, mission, and political and social context, all of which affect whether and how technology is implemented. But nearly all public sector employers have spent the past decades watching revenues fail to keep up with the costs of providing government services. Since 2008, public sector employment has been stagnant or declining, while private sector employment has grown by 12% and the U.S. population—a measure of demand for government services—by nearly 7%.
Some technologies also present inherent risks, such as those intended to replace or supplement human decision-making. Research suggests that people are reluctant to make different decisions than those suggested by analytics designed to supplement human decision-making, sometimes leading to worse outcomes than those the technology was intended to remediate. There has been considerable evidence that advanced technologies can replicate or even exacerbate racial and ethnic biases. Governments should be deliberate and cautious as they adopt such technologies. Involving workers in the scoping, design, implementation, and evaluation of advanced technologies in particular can help safeguard public trust. Technology as a cost-saving measure must be implemented within a framework that recognizes the role public workers play in assessing whether systems are serving the people the programs are intended to serve.
How governments use technology
The public sector covers an enormous set of occupations and activities, and technology plays many different roles within that landscape. This report sorts technologies into five overlapping categories:
Manual task automation: technologies that replace physical processes or tasks performed by a person. This includes such technology as document scanners, mail sorting machines, digital printers, “smart” parking meters, transcription software, driverless transit, robotic vacuums, and automated toll collectors.
Process automation: technologies that process information or automate interactions between workers and clients. This includes e-Government processes like online payments and benefit applications, as well as more complex automation such as customer service chatbots and “robotic process automation” (RPA). More complex process automation technologies may use artificial intelligence to “learn” from interactions, rather than relying entirely on human programming.
Automated decision-making systems: the use of complex computer programming to replace or augment human decision-making. This group of technologies includes artificial intelligence, machine learning, and predictive analytics. By processing large amounts of data and using human-programmed algorithms or more complex artificial intelligence, ADM systems generate decisions and assessments.
Integrated data systems: integrated data systems and networked cloud storage allow vast amounts of public data to support automation and automated decision-making technologies, as well as provide public access to information about government activities and enable more robust performance evaluation and management.
Electronic monitoring: technologies such as cameras and drones may be used to enforce laws or regulations and feed information into other government processes. Monitoring technologies built into software used by workers can also enable new forms of performance evaluation.
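To make the "automated decision-making" category concrete, here is a deliberately simple, hypothetical sketch of the kind of human-programmed scoring rule such systems build on. Real systems are far more complex; every factor, weight, and threshold here is invented for illustration.

```python
# Hypothetical illustration of a human-programmed decision rule of the
# kind ADM systems generalize: score an application, then route it.
# All factors, weights, and thresholds are invented for this example.

def triage_application(app):
    """Return 'fast-track', 'standard', or 'manual review' for a benefits application."""
    score = 0
    score += 2 if app["documents_complete"] else -2
    score += 1 if app["previously_approved"] else 0
    score -= 1 if app["data_mismatch_flags"] > 0 else 0

    if score >= 3:
        return "fast-track"
    if score >= 0:
        return "standard"
    return "manual review"  # a human caseworker makes the final call

example = {"documents_complete": True, "previously_approved": True, "data_mismatch_flags": 0}
print(triage_application(example))  # fast-track
```

Even this toy rule shows where the report's bias concerns come from: the designers' assumptions about which factors matter, and how much, are baked directly into the outcome.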
Key findings on government technology use:
There are many examples of innovative agencies using cutting-edge tech, but it is generally true that governments have been slower to modernize than the private sector. Financial constraints, reliance on external contractors, limited in-house IT expertise, and the challenges of providing equitable services to millions of people are just some of the most significant constraints faced by public technology adopters.
Process automation is widely used by governments, and much of it has made public services easier for people to access while freeing up workers from often overwhelming amounts of paperwork. But there is still much to negotiate about how to serve clients with limited access to electronic services, and how to integrate technology with the complex and idiosyncratic knowledge that humans (caseworkers, counselors, parole officers) use every day to provide effective and personalized support.
Advanced technologies—algorithms, artificial intelligence, robotic process automation—have begun to change some public jobs significantly, either augmenting or replacing some human decision-making, especially in areas of public safety and welfare services. Community and civil liberties advocates are concerned about government’s increasing reliance on technologies for complex decision-making and monitoring.
Drivers of technology adoption
In the public sector there are many different (and often overlapping) motivations for adopting new technologies, which in turn shape the design of the tech, the goals of implementation, and how these systems are evaluated. Nothing about the process of change is inevitable; it is highly dynamic and contingent.
Given these complex structures, this report looks at four driving forces underlying the expansion of technology in the public sector:
Efficiency and cost reduction. In many areas of government, per capita revenue has declined over time as a result of tax-cutting politics, forcing governments to figure out how to provide services in an increasingly constrained environment. Promises that technology can increase efficiency and reduce labor and other costs carry a lot of weight in this context. On the other hand, prolonged austerity has constrained funding for technology and other infrastructure.
Performance. Technology is framed as a core element of promises to make government serve people better and reinstill confidence in government. Technology can potentially improve many aspects of government service: speed, reliability, accuracy, convenience, and even program outcomes (although digitization or automation can also lead to deterioration of service quality).
Transparency and accountability. Technological advances in secure data storage, data sharing, data analytics, and data visualization have the potential to enhance government transparency and accountability. Increased data accessibility allows citizens to understand how resources are being used and whether programs are effective. Transparency is a necessary step toward accountability; advocates for robust data sharing hope it will enable the public to hold their governments accountable to specific objectives and values.
Crises. Crises can offer important pivotal moments for innovation—and in the case of COVID-19, large amounts of funding for new technologies—but they can also leave agencies too overwhelmed to incorporate technology strategically. The COVID-19 pandemic dramatically accelerated the adoption of technology in the public sector, as agencies had to figure out how to quickly pivot to offering services while complying with public health orders. As the pandemic unfolded and unprecedented numbers of people needed government services, the public sector’s outdated technological infrastructure was exposed along with other areas of underinvestment.
Key findings on technology drivers:
Technology use has allowed many government agencies to restructure cumbersome processes, becoming more user-friendly and increasing productivity by allowing workers to focus on more complex tasks.
The fiscal and workload pressures faced by governments have led many agencies to see technology as a way to bridge the deficit of resources needed to adequately perform their core functions. Technology use can normalize the inadequacy of public staffing rather than resolving it. Chatbots might allow clients to interact with a system 24 hours a day instead of waiting in line in a benefits office and never getting to the front, but if the chatbot is ultimately unable to provide entitled benefits, technology has provided only the illusion of better service.
The COVID-19 pandemic sparked a wave of technology adoption across the public sector, in some cases hastening already planned transformations. Areas like education, where there has typically been significant skepticism about the role of technology, saw an explosion of “edtech” vendors eager to capitalize on schools’ experience with using technology for remote learning. The apparent permanency of hybrid and remote work is likely to continue to drive increased automation, monitoring of workers, and reliance on cloud-based data systems.
The public sector must contend with complicated policy, social, ethical, and legal contexts that don’t similarly constrain private sector actors. Public sector technology projects are accountable to a more diverse set of stakeholders (often with diverse needs) than private employers. Values of transparency and fairness make adopting new technologies much more complex for the public sector, which must ensure that its services are accessible to everyone and accountable to a broad set of public values.
The public sector has struggled to attract and sustain internal IT expertise, and has often relied on outsourcing many IT functions; this has led to a significant reliance on “govtech” companies and consultants. Building internal IT capacity could help address some of the cost overruns and poor outcomes associated with large technology projects.
Impacts on work and workers
The extent to which technology will displace workers or fundamentally change workplace dynamics is uncertain and will vary across the public sector. There are many logistical, ethical, legal, institutional, and social dynamics that affect the trajectory of technology adoption. The growing adoption of complex technologies is likely to restructure work processes as well as to reshape the interactions between workers and the public. Whether these changes ultimately benefit or harm workers will depend significantly on how this restructuring is managed.
This report looks at four categories of impacts on public sector workers:
Employment impacts: when technology is introduced into a workplace, tasks are transformed and redistributed, possibly reducing the need for some occupations and increasing demand for other types of work. It is hard to attribute job fluctuations to technology directly—especially given the cyclical nature of government funding—but this report discusses occupations where automation has likely contributed to declining employment, as well as growing occupations that require more technical skills to oversee and manage computerized processes, including providing direct IT services.
Job complexity: Working with new technologies may require new skills, which aren’t always accompanied by training or the time to adapt. Automating technologies may take over the more mundane aspects of work, making jobs more complex and rewarding for workers. But more advanced technologies—such as automated decision-making systems—may have the opposite effect, taking over complex thinking tasks and leaving workers to simply verify outcomes.
Managerial control: incorporating new technologies can lead to work intensification and stress for workers if the tech does not produce the expected efficiency or performance improvements, leaving workers to make up the difference. When tech is adopted without sufficient understanding of how work is actually performed, service quality can suffer. New technologies can also permit additional worker surveillance; chatbot technology can include real-time feedback to workers on their tone and speed during customer interactions, information that feeds into a worker’s performance evaluation.
Outsourcing: Bringing in new technologies often involves an increased role for private contractors, with the outsourcing of both the development and implementation of new tech as well as increasing reliance on private entities to perform even highly sensitive public functions. Technologies like cloud-based storage and virtual call centers can facilitate the outsourcing of jobs to private contractors by enabling work to be done from anywhere and shifting tasks out of established job descriptions.
Key findings on worker impacts:
Technologies have taken over some of the tasks performed by government workers, predominantly in areas involving basic paperwork-processing and financial transactions. Occupations like clerks and secretaries have been declining for several years and are projected to continue to decline, likely in part because of technological changes. The growing automation of government processes and adoption of more complex technologies has likely contributed to the increase in higher-level business and financial occupations and computer-related occupations.
The growth of complex technologies has begun to restructure work in significant ways, raising fundamental questions about how technology changes responsibility for decision-making and who is responsible for overseeing and fixing the inevitable malfunctions, mistakes, and negative impacts of digitized processes. Workers often feel stressed and uncertain as their jobs are transformed. They value the improvements technology can bring, but they also see technology projects rolled out without a clear plan for training and worker involvement, without clear expectations of workers, and without internal IT capacity adequate to managing the impacts of technology on the lives of clients.
Despite the relatively high share of public sector workers represented by a union, technological solutions are still frequently developed without involving workers. Governments are just beginning to put policies and regulations in place to manage the impacts of technology on citizens, but there are few examples of such policies addressing the impacts on workers.
People who go into public service often have aspirations to improve people’s lives, and are concerned about how technology can deepen existing inequities, make it harder for people to access critical services, and jeopardize the trust between citizens and their government. The possibilities for technology to greatly improve public services are significant, and recent public investments in infrastructure and internal technology capacity development present an important opportunity. A high road approach to this rapidly expanding use of technology will require policies and regulations that bring transparency, accountability, evaluation, and worker and client voices into the process of designing and implementing technology.
Read the full report.
Suggested citation: Hinkley, Sara. 2022. Technology in the Public Sector and the Future of Government Work. Berkeley: UC Berkeley Labor Center. https://laborcenter.berkeley.edu/technology-in-the-public-sector-and-the-future-of-government-work/
| 2023-01-10T00:00:00 |
https://laborcenter.berkeley.edu/technology-in-the-public-sector-and-the-future-of-government-work/
AI and Machine Learning Software Testing Tools in Continuous Delivery
https://www.bairesdev.com
In this post, I’ll be going over what I have learned about testing in QA and continuous delivery over my professional career. We’ll also discuss some of the most popular solutions on the market and the benefits of understanding and applying AI to CI/CD pipeline.
Let’s dive into it!
What Is Continuous Delivery?
Continuous delivery starts with the idea of continuous integration (CI). For the software to be released, it must first be built and tested by a team, whose size depends on the scope of the project. Continuous integration is the practice of automating the integration of the work of each of the contributors into a single repository.
This DevOps practice allows developers to frequently merge code changes into a central repository where builds and tests are then run. Automated tools are used to assert the new code’s reliability before integration. And that’s where machine learning algorithms kick in, allowing the team to test their commits in real time and detecting bugs, misbehaviors, potential exploits, and even confusing code that could be refactored.
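As a toy illustration of the kind of automated check a CI server runs on every commit before merge — the `slugify` function and its tests below are invented for this sketch, not taken from any real codebase:

```python
# A minimal example of an automated pre-merge check. In a real pipeline a
# runner such as pytest would collect and execute these test functions on
# every commit; here we simply call them directly.
def slugify(title):
    """Turn a title into a lowercase, hyphen-separated slug."""
    return title.strip().lower().replace(" ", "-")

def test_basic():
    assert slugify("Hello World") == "hello-world"

def test_strips_whitespace():
    assert slugify("  Trim Me  ") == "trim-me"

test_basic()
test_strips_whitespace()
print("all checks passed")
```

If any assertion fails, the CI build goes red and the change is kept out of the central repository until it is fixed.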
What follows is continuous delivery (CD), the idea that we are constantly updating our product in a production environment. Instead of a major release, or slow and big updates, continuous delivery means pushing out your product as fast as possible and delivering new content in small chunks. It doesn’t have to be agile in the strictest sense, but it’s certainly a step away from traditional waterfall release schedules.
CI/CD pipelines are not just new software tools; they represent a paradigm, a way of looking at software development. The software has to be built, tested, and finally deployed — that’s true for any methodology you apply. What makes CI/CD stand out is that we see this process as an iteration, with each cycle increasing the quality and scope of our product. Put all of this together and we have the notion of a continuous release process.
The team that builds the code has a release plan in the form of a schedule, usually based on the requirement of the product owner or the investor. Under a standard waterfall model, if you have a product that needs to go into production, that is going to take a while from conceptualization to release.
But in the current market, speed is everything. You have to release that product as fast as possible. And one way to increase productivity is to automate the development process. Code reviews can be a bottleneck to any team due to a little thing I like to call the pyramid funnel.
In essence, teams are usually structured as a hierarchy. With each increasing rank we have fewer and fewer team members whose job tends to involve working with the lower strata (ergo the pyramid). What tends to happen is that you have a code reviewer who has to answer to several different developers, while also handling their own part of the project.
The result? Unless your code reviewer is a machine (pun intended), there is a limit to what a person can feasibly do. So, approvals are delayed. If a developer is sticking to continuous delivery, then they will probably make pull requests faster than what the code reviewer can handle, and that’s how we end up with a funnel.
And that’s just one aspect of the whole process. Also consider that writing your own tests is extremely time-consuming, and as your product changes, so do your tests. Manually building and adapting every test as the project scales is necessary, but it’s one key area that stands to gain a lot from automation.
What Is AI & ML Automation Testing?
AI and machine learning automation testing is the practice of building a test automation framework that uses trained models to drive the testing of a software system. Because this framework safeguards the quality of the software and its systems, it must itself be held to high testing standards.
Once set up, automation testing with AI runs with very little to no human input. In an automation test environment, several test cases are executed by the automation tester to verify the functionality of software systems. These test scenarios are also referred to as tests. The test suite can be executed using a tool such as Selenium, JUnit, or RSpec.
Test cases can be written in any number of languages, depending on the scope and scale of the project. Automated testing requires the interaction of automation testers with the application and with each other. A failed test can be handled in several ways, from outright rejecting the request, to even proposing new solutions.
Say for example that a function returns an object from a list, but the test finds that when you pass a number higher than the list length you get an error. The automated tester finds that this is because the developer forgot to add an exception handler (a rookie mistake, but a common one). So it automatically messages the author explaining the error and how they can fix it.
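A minimal Python sketch of that scenario — `get_item` is a hypothetical function, and the `None` fallback is just one fix an automated reviewer might propose:

```python
# The bug: indexing past the end of a list raises IndexError when there is
# no exception handler.
def get_item_unsafe(items, index):
    return items[index]  # crashes for index >= len(items)

# The suggested fix: wrap the access in an exception handler.
def get_item(items, index):
    try:
        return items[index]
    except IndexError:
        return None  # degrade gracefully instead of crashing

print(get_item(["a", "b", "c"], 1))   # a valid index works as before
print(get_item(["a", "b", "c"], 10))  # an out-of-range index no longer crashes
```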
During the test procedure, the testers must interact with test tools and test scripts. As the number and complexity of tests increase, automated testing becomes more complicated, which is a problem in itself. Forethought and careful planning are fundamental when building your automated test framework.
“Wait a minute,” I hear you say. “Isn’t this what automated testing tools already do?” Well, yes, in fact, it is. In many ways, automated testing tools are already AI, but they come with limitations.
The trick here is that with machine learning we can broaden the scope of our testing environments. The truth is that no two development processes are equal; each team will have different needs, and no algorithm can cover every single use case. As such, by training an AI with carefully selected data we can tailor an agent that aligns with the idiosyncrasies of our project.
Now, it’s very important to understand that automated testing is not here to replace code reviewers, QA, and testers, but to work in tandem. AI, no matter how advanced, isn’t yet on par with human ingenuity and the ability to think creatively. AIs are limited to their training, and the more we deviate from their training patterns, the more likely our digital helpers are going to go off the rails.
To summarize, we can integrate AI and machine learning in three key areas of our CI/CD pipeline:
- Automatically generating tests for your application. This can help ensure that your application is always up to date and compliant with the latest changes.
- Analyzing your application’s codebase and identifying areas that are most likely to break in the future. This information can be used to prioritize which areas of the codebase should be tested more thoroughly before each release.
- Monitoring your application’s performance in production and identifying potential issues early on. This can help you avoid potential outages or degraded performance for your users.
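As a hedged sketch of the second idea, a tool might rank modules by a simple risk heuristic. The churn-times-complexity weighting and the sample numbers below are illustrative assumptions, not a standard formula from any particular product:

```python
# Toy "risk score" for deciding which modules to test most thoroughly.
# Inputs (change counts, complexity) are invented sample data.
modules = {
    "auth.py":    {"recent_changes": 12, "complexity": 30},
    "billing.py": {"recent_changes": 4,  "complexity": 45},
    "utils.py":   {"recent_changes": 1,  "complexity": 8},
}

def risk(stats):
    # Assumption: frequently changed, complex code is most likely to break.
    return stats["recent_changes"] * stats["complexity"]

ranked = sorted(modules, key=lambda name: risk(modules[name]), reverse=True)
print(ranked)  # modules most likely to break come first
```

A real tool would learn these weights from historical defect data rather than hard-coding them, but the prioritization step looks broadly like this.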
How to Train an AI as a Software Testing Tool
The methods of training an AI vary depending on the type and complexity of the system being trained. The most basic method is to provide it with a set of data points it can then use to learn about and identify patterns within that data. This process can be repeated with different data sets until the AI can generalize its understanding to new data sets.
More complex methods may involve using reinforcement learning, in which the AI is given rewards or punishments based on its performance in achieving certain tasks. This allows the AI to learn through trial and error, gradually improving its performance over time.
In both cases, you will need a data set of software testing examples that includes input values and expected outputs. This data set can be created manually or sourced from an existing database. It can be based on your previous work or taken from open sources. The bigger the data and the more variability, the more powerful the model.
Once we have found the data, what comes next? Most commonly, the data is separated in a 75/25 split. We call the bigger group “training data” and the second group “test data.” As the name implies, we use the training data to actually train our model.
Then you can check whether your AI is working correctly by examining the accuracy of its predictions on the training data. If you get good accuracy, you are on the right track.
But, is the machine really predicting an outcome, or did it just learn the training data set? To answer this question we check the model with our test data. If you get good accuracy on test data, it means that your AI is working well and has not memorized the training data.
It’s quite normal to not get a good model the first time around. Depending on the underlying algorithm you may have to change the parameters until you find the best possible model. For example, with neural networks, we play around with both layers and nodes per layer until we are satisfied with the result.
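The 75/25 split and the train-versus-test accuracy check can be sketched in plain Python. The synthetic data and the single-threshold "model" are assumptions of this illustration, not a production workflow:

```python
import random

random.seed(0)

# Synthetic labeled data: feature x in [0, 1), label 1 when x > 0.5.
data = [(x, int(x > 0.5)) for x in (random.random() for _ in range(200))]

# 75/25 split into "training data" and "test data".
cut = int(len(data) * 0.75)
train, test = data[:cut], data[cut:]

def accuracy(threshold, samples):
    hits = sum(1 for x, y in samples if int(x > threshold) == y)
    return hits / len(samples)

# A deliberately simple model: pick the decision threshold that maximizes
# accuracy on the training data only.
best = max((t / 100 for t in range(101)), key=lambda t: accuracy(t, train))

# If training accuracy is high but test accuracy is much lower, the model
# has memorized the training data instead of generalizing.
print(f"train accuracy: {accuracy(best, train):.2f}")
print(f"test accuracy:  {accuracy(best, test):.2f}")
```

The same pattern scales up: swap the threshold search for a real learner and the tuple list for your data set, and the train/test comparison still tells you whether the model generalizes.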
Ultimately, whichever method or combination of methods is used will depend on what kind of AI is being trained and what applications it will be used for. Creating an AI requires time, resources, and processing power, and while the prospect might seem enticing, ask yourself, “Is my project struggling with testing?” Small projects have very little to gain from all this effort.
On the plus side, once you have an automated tool, it can be repurposed for future projects.
There are many AI and machine learning software testing tools available in the market. However, it is difficult to select the most appropriate tool for continuous delivery pipelines. Some of the popular AI and machine learning software testing tools are:
- TensorFlow: an open-source platform for machine learning. It offers various features such as data flow programming, automatic differentiation, and deep neural networks.
- Keras: a high-level API for deep learning that can be used with TensorFlow or Theano. It offers various features such as model construction, training, and prediction-making.
- Scikit-learn: a free machine learning library for the Python programming language. It offers various features such as classification, regression, and clustering algorithms.
- Microsoft Azure ML Studio: a cloud-based service that allows developers to build, deploy, and share predictive analytics solutions. It offers various features such as a drag-and-drop interface, pre-built models, and sample data sets.
Clever readers will probably realize that these tools are mainly for machine learning, but they can be integrated with CI/CD solutions like:
- Jenkins: an open-source automation server that can be used to automate various tasks related to software development, such as building, testing, and deploying code changes. It has a large community of users and plugins that make it easy to extend its functionality. Jenkins can also be used to trigger other processes, such as sending notifications or triggering deployments in other systems.
- Bamboo: a commercial continuous integration and delivery tool from Atlassian (the company behind Jira and Confluence). It offers many features similar to those found in Jenkins, such as the ability to build, test, and deploy code changes automatically. Bamboo also has many plugins available that allow it to integrate with other Atlassian products (such as Bitbucket) or third-party tools (such as Slack).
- GoCD: an open-source continuous delivery tool from Thoughtworks. Despite the name, it is not tied to the Go programming language; it focuses on modeling complex build-and-deploy pipelines end to end, and it has many plugins available that allow it to integrate with other tools (such as Jira).
And that’s just a small sample. There are several different commercial and open-source testing tools on the market that offer the aforementioned capabilities. Selecting the right tool for your needs will depend on some factors, such as your budget, the size and complexity of your application, and your team’s skill set.
There are many benefits of implementing AI and machine learning software testing tools in your business. Some of these benefits include:
- Increased accuracy: AI and ML software testing tools can identify errors and issues that would otherwise be missed by human testers, increasing the accuracy of your test results.
- Increased efficiency: these tools can automate repetitive tasks, freeing up your testers to focus on more important work.
- Improved quality: by catching defects that slip past human review, these tools help improve the quality of your software products.
- Reduced costs: automating repetitive tasks reduces the labor costs associated with your software development projects.
- Faster testing: automating tasks such as test case execution and data collection speeds up the testing process.
- Improved coverage: these tools can generate new test cases based on past data and experiences, broadening the coverage of your tests.
- Increased flexibility: the same tools can be used to test a variety of different applications and systems.
- Easier integration: because they apply across applications and systems, these tools make it easier to integrate your testing process across projects.
- Improved scalability: they can test a large number of applications and systems with very little overhead.
What’s the Next Step?
Continuous delivery is a process of delivering new content in small chunks instead of one big release. It starts with continuous integration, which is automating the process of integrating code changes into a central repository. Automated testing with AI is used to verify the functionality of the software system.
This AI has to be trained with previous data, which requires a lot of work, but once that’s done, it provides a very powerful and flexible model that can be repurposed for different projects or different areas of the same project.
AI and machine learning software testing tools are here to stay. To build the AI, you are probably going to need both software engineers and data scientists to gather the data, build the model, test it, and fine-tune it. This might mean expanding your team or employing AI development services to handle the workload. Whichever may be the case, AI and machine learning are powerful options for companies looking to up their game and optimize their pipelines.
If you enjoyed this, be sure to check out our other AI articles.
| 2023-01-10T00:00:00 |
https://www.bairesdev.com/blog/ai-and-machine-learning-testing-tools/
BS Artificial Intelligence for Business (BUAI) - USC Marshall
https://www.marshall.usc.edu
By the end of the program:
- Students will have expertise in leveraging AI within organizations with a goal of enhancing societal value, understanding the ethical issues and risks in AI technologies.
- Students will be able to conceptualize new products and services that leverage AI, with a sophisticated understanding of the state-of-the-art capabilities and practical limitations of what AI technologies can realistically achieve in a business context.
- Students will be capable of forming effective teams with the expertise required to implement and scale AI technology solutions and formulating realistic timelines for AI products.
- Students will have expertise in analytics-driven decision making and strategy, able to discern the value of data assets in an organization and the effort required to realize the full potential of analytics in a business context.
Students will understand and situate advanced AI technologies, make realistic assumptions about potential and risks for business ventures, and envision new markets through AI innovation.
| 2023-01-10T00:00:00 |
https://www.marshall.usc.edu/programs/undergraduate-programs/undergraduate-degrees/bs-artificial-intelligence-for-business-buai
How Can AI Be Used in the Workplace?
https://cxapp.com
Cxai Team
Recent advances in machine learning and natural language processing have transformed the Artificial Intelligence (AI) landscape. While still error-prone, language models like those employed by ChatGPT are providing an early glimpse into the transformative potential of AI. But AI already has real-world applications, especially when it comes to mobile devices. It's being used successfully in technology like Google's Pixel lineup (such as Tensor SoC and TPU), to power features like image processing, text-to-speech, and machine vision. It's helping people communicate in languages they don't speak and changing the way we live and work.
It's no surprise then that these capabilities are starting to manifest in workplace environments — particularly when it comes to workplace apps, and the mobile experience — and helping to improve productivity, personalization, predictive analytics, and more.
What is Artificial Intelligence (AI)?
Artificial intelligence is the ability of a computer or machine to perform tasks that would otherwise require human intelligence. Everything from problem-solving to speech recognition can be handled by AI.
Application-specific AI (narrow AI) is used to perform specific tasks — this is the AI most of us are used to, a good example being chatbots on websites and apps. General AI (AGI), on the other hand, has the ability to perform a wide range of tasks and can adapt to new situations through the processing of experience and context — perhaps the most compelling example is Google's PaLM, which can not only understand an entirely novel joke but also explain why it's funny. But while strides are being made here, we're still very much in the early, theoretical stages of AGI.
AI Adoption in the Global Enterprise
Currently, 35% of businesses and organizations employ narrow AI technology, while an additional 42% are exploring its utility. And while the rise of AI is projected to eliminate 85 million jobs, it is also expected to create 97 million new ones by 2025.
More relevant here, perhaps, is how it applies to mobile experiential apps. For this, AI is typically used to measure and extract insights from data, often related to traffic, booking behaviors, time of day sequencing, ordering preferences, and so on. That information is then used to improve employee experiences and streamline operations, in many ways making the on-site experience smoother. It's crucial to note that the data is anonymized so that user privacy is preserved.
How Does AI Manifest in the Workplace?
An excellent way to understand how AI will shape the workplace, both now and in the future, is to consider several personas and how a variety of tasks in their daily work can be enhanced by AI. Bear in mind, these are generalized and are not meant to represent any one particular professional or group.
Persona 1: Workplace Automation
Traveling back and forth between home and the office, or another remote location carries with it several new responsibilities. For example, you need to plan by booking a workspace for your big day in the office. Where else are you going to work if you don't?
People tend to fall into a habit, and the same is true when it comes to room booking. All employees have personal preferences, including our persona, which means they might need certain amenities or features in the rooms they use. Maybe they always look for a television or digital whiteboard in the rooms they're using. Maybe they have a favorite floor or section, or maybe they always book near colleagues, or for a certain day or time.
Whatever the case, when they open their workplace experience app, the AI knows these preferences and helps ensure they're booking the right workstation. The system might ask "Would you like to book your favorite desk?" Or, it might make suggestions on bookable spaces to help speed up and optimize the reservation process based on usage patterns, pre-defined criteria, and user behavior.
Persona 2: Intelligent Personalization
Imagine someone who loves coffee so much — maybe it's you — that they stop for a java every time they leave their office. Visiting a client? Grabbing a coffee beforehand. Going on a quick break? Grabbing a coffee then too. Leaving for the day? Another pick-me-up perhaps.
Even when they're visiting another office or building with an on-site café they might stop for a coffee there, and who can blame them? An AI can build behavioral patterns based on when they know a user will want coffee and direct them to the nearest one with the shortest line. It can even offer to put in the right order online, before the user reaches the coffee place, so that their wait time is minimized.
This is especially useful for users who may work outside of one campus. If they're new to a place and need to get their coffee, the AI can help them pinpoint where to go without ever having to pull up a map.
Beyond coffee, an AI can help employees better understand their daily workplace routines and optimize them to be more productive. For example, an AI can send reminders for walk breaks and suggest corporate events and seminars that may be of interest to them based on their role.
Since an AI can learn each user's habits, it can find new ways to provide useful information, reminders and insights on a daily basis, like a digital assistant. Providing one for each employee is a powerful way to keep them engaged at work, as it offers many opportunities for ongoing personal and professional growth as well as convenience and ease.
Persona 3: Predictive Workplace Analytics
Workplace analytics is one area where AI can really make a difference. One of the key things that a workplace analytics manager does is look at space utilization, desk utilization, and room usage to understand common bookings, workplace requirements, and capacities. It's a lot to keep track of, even with digital tools for support.
The AI gathers measurements on the fly, giving managers the full utilization rates, per hour, day, or over multiple weeks, months, and quarters. Instead of trying to extract the necessary information themselves, they can see it at a glance. Even better, AI can be used to make predictive utilization reports, directly leveraging past data patterns and trends. Workplace managers can then use these insights to make better decisions about how to manage their campuses.
Maybe they'll notice bookings are higher on a certain day of the week — like more people traveling to the office Tuesday through Thursday, and staying home on Mondays and Fridays. Or, maybe more desks and rooms are being booked between the hours of 10 AM and 3 PM during the fall and winter seasons, where before there was a clear ramp up and ramp down.
Predictive analytics and AI can help determine how busy the office will be on precise days of the year, all by utilizing past data, local conditions — like the weather — and other relevant information. For example, if it's raining on a Friday afternoon on a cold fall day, you know you're going to get fewer people coming into the office than on a sunny Friday during the summer.
As soon as it notices that attendance is down, an AI can instantly look at full day attendance metrics during rainy days in the past and make the recommendation to close down extra rooms for cleaning and maintenance. As a result, the workplace manager gains more time to focus on solving more complex and nuanced issues, since the AI can learn to look for and fix lower-hanging fruit for them.
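The prediction described above can be sketched as a simple lookup over historical records. Everything here — the record shape, the field names, the numbers — is invented sample data for illustration, not the behavior of any real workplace platform:

```python
from statistics import mean

# Invented attendance history; a real system would query its analytics store.
history = [
    {"weather": "rain",  "weekday": "Fri", "attendance": 120},
    {"weather": "rain",  "weekday": "Fri", "attendance": 110},
    {"weather": "sunny", "weekday": "Fri", "attendance": 240},
    {"weather": "rain",  "weekday": "Tue", "attendance": 300},
]

def expected_attendance(weather, weekday):
    """Average attendance on past days with matching conditions."""
    similar = [r["attendance"] for r in history
               if r["weather"] == weather and r["weekday"] == weekday]
    return mean(similar) if similar else None

print(expected_attendance("rain", "Fri"))  # average of past rainy Fridays
```

A production system would use a proper forecasting model with many more signals, but the core idea — estimate today from comparable days in the past — is the same.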
AI In the Workplace
AI elevates the workplace experience for employees by speeding up decision-making processes, but also by delivering smarter and more precise results. Better yet, it makes going to the office easier, by streamlining booking, navigation, ordering, and much more. With an AI-powered employee experience platform managing predictive analytics, workplace leaders can make better and more informed decisions about the office, year-round.
| 2023-01-10T00:00:00 |
https://cxapp.com/blog/ai-in-the-workplace
What Should We Expect From AI?
https://builtin.com
Artificial intelligence (AI) is slowly but steadily transforming the way we do nearly everything, from driving cars to creating award-winning art. Half of all companies now use AI in at least one business function, and while many jobs have been replaced by machines, AI is expected to create millions of new jobs and inject up to $1.8 trillion into the economy by 2030. It’s the defining technological concept of our time.
Rooted in the scientific method, artificial intelligence uses a model-based approach to solving problems: observing facts, formulating hypotheses, testing them, analyzing the results, and drawing conclusions. The viability of this approach depends on how much data can be brought to bear on a problem, with a larger quantity of data generally yielding more reliable results.
For years, limits on the technical capacity to amass and analyze data have prevented organizations from mobilizing and processing enough of it to solve more than the simplest problems. But in recent years, a massive increase in computing power has made data more accessible and raised the roof of AI’s power to process and analyze raw information, resulting in an explosion of algorithms that control and facilitate many aspects of modern life.
For example, AI is integrated into image recognition programs used to assist in medical diagnoses. Facial recognition technology helps casinos, sports stadiums and other public venues identify known scammers and criminals. AI is woven into quality assurance (QA) processes to detect flaws in manufacturing processes. It’s used to recognize and assess the tendencies of athletes. The list of applications grows longer each day.
How Does AI Work?
Rooted in the scientific method, artificial intelligence uses a model-based approach to solving problems: observing facts, formulating hypotheses, testing them, analyzing the results, and drawing conclusions. The viability of this approach depends on how much data can be brought to bear on a problem, with a larger quantity of data generally yielding more reliable results.
Is AI About Intelligence or Velocity?
A massive expansion of computing power is allowing companies to harness unprecedented quantities of data, making AI the high R&D priority it is for so many organizations today. Just a generation or two ago, it would have taken several lifetimes to process the amount of data we can now process in just a few hours. In fact, data processing capacity today has expanded so much that an ordinary laptop could handle the 1969 moon launch.
But is that intelligence or just velocity? Computers can certainly “think” (process) much faster than humans, but they can’t do anything at all unless humans first tell them what to think. Some organizations go astray here, fixing their attention on execution but leaving out purpose, feeding mountains of data into sophisticated programs and expecting them to spit out something significant.
AI can solve problems we can think of, but it can’t think of problems we should solve. Reaching useful conclusions depends on asking the right questions and applying the scientific method. Artificial intelligence is used to detect genetic and other biological anomalies, for example, but without specific human instructions to pursue specific genetic markers, the process will return nonspecific (and therefore not very useful) results. If the program is trained to recognize specific genetic markers, on the other hand, it will do so accurately and thoroughly. All discovery is useful, of course, but directed discovery will always be more fruitful than random.
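The contrast between directed and undirected discovery can be sketched in a few lines. This toy example (hypothetical marker names and data, not a real genomics pipeline) first learns which markers distinguish positive samples, then screens new samples only for those markers:

```python
from collections import Counter

def learn_markers(samples, labels, min_support=0.8):
    """Directed discovery: find markers present in at least `min_support`
    of positive samples but rare in negatives. Each sample is a set of
    marker names (invented for illustration)."""
    pos = [s for s, y in zip(samples, labels) if y]
    neg = [s for s, y in zip(samples, labels) if not y]
    pos_counts = Counter(m for s in pos for m in s)
    neg_counts = Counter(m for s in neg for m in s)
    return {
        m for m, c in pos_counts.items()
        if c / len(pos) >= min_support
        and neg_counts[m] / max(len(neg), 1) < 0.2
    }

def screen(sample, markers):
    """Scan a new sample only for the markers we were directed to find."""
    return sample & markers

samples = [{"BRCA1", "TP53"}, {"BRCA1", "EGFR"}, {"TP53"}, {"EGFR"}]
labels = [True, True, False, False]
markers = learn_markers(samples, labels)   # {'BRCA1'}
print(screen({"BRCA1", "KRAS"}, markers))  # {'BRCA1'}
```

Without the label-driven `learn_markers` step, the same scan would return every marker indiscriminately, which is the "nonspecific results" problem the paragraph describes.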
How to Hire for AI-Based Jobs
AI also represents a massive field of new career opportunities for technology professionals. Up-and-coming software engineers interested in working in AI and its sub-fields (machine learning, deep learning, natural language processing, robotics and others) may presume they need a whole new set of skills, but that’s not the case. A successful AI software engineer still relies on the core skills that engineers have always used to convert abstract ideas into useful tools. These skills include the ability to think analytically, reason abstractly, avoid bias, and express complex concepts in ways that non-technical people can understand.
Like others, my company is currently working to incorporate more AI capabilities into our product offering. We’ve had success hiring young software engineers, and I believe that’s because new engineers are generally free of bad habits, blind spots, or limits they may have picked up from previous employers. An engineer who has worked for years may develop rigid ideas about what works and what doesn’t.
Take the QA function in manufacturing I mentioned above: If an engineer spends ten years in an organization that accepts a 1.2 percent product defect rate, that engineer may be less motivated to push through obstacles to achieving the 0.7 percent defect rate tolerance demanded by a different employer.
Being an experienced engineer has its advantages, of course. Seasoned engineers are generally better at collaborating and have learned the critical lesson that real companies work within time and budget limitations—that it’s not always possible to explore endless theoretical paths to achieve an outcome. As a student, gaining experience is the point, but in a real company, we need to achieve results.
In general, I look for engineers who have a clear understanding of their own personal limitations. Regardless of experience level, I find that people who are a bit unsure of themselves often make the best engineers because they tend to question their assumptions automatically. It’s OK to struggle and even fail, as long as you openly acknowledge failure and these mistakes help the team identify and eliminate false paths to finding the best solution for a given problem.
Communication skills are often overshadowed by technical prowess, but they really are critical. If you have a great idea but can’t make other people whose input is necessary to bring that idea to life understand it, your idea will never gain any traction. For example, if I were to come up with a cure for cancer but couldn’t explain it to other experts necessary to bring it to practical application, no one would be cured.
Changing the AI Game
Artificial intelligence is a game-changer, but it’s also a fairly predictable next step in the long quest to harness technology for improvements to human life. It gets closer to mimicking human intelligence every day, but it will never be able to take over and turn against us. AI is not destined to substitute for people; it’s intended to help us do a better job at solving problems that prevent us from living and working up to our full potential.
And as long as it’s driven by quintessentially human skills — curiosity, skepticism, reason — artificial intelligence will continue to knock down barriers and ensure that the only limits we face as humans are the limits of our own imagination.
| 2023-01-10T00:00:00 |
https://builtin.com/artificial-intelligence/ai-expectations
|
[
{
"date": "2023/01/10",
"position": 74,
"query": "AI job creation vs elimination"
}
] |
|
Artificial Intelligence Vs. Jobs: The Future Of Work In ...
|
Amazon.com
|
https://www.amazon.com
|
[] |
"AI vs Jobs" is a thought-provoking book that explores the potential impacts of artificial intelligence (AI) on the job market.
|
| 2023-01-10T00:00:00 |
https://www.amazon.com/Artificial-Intelligence-Vs-Jobs-Future/dp/B0BRYZNGMH
|
[
{
"date": "2023/01/10",
"position": 40,
"query": "future of work AI"
},
{
"date": "2023/01/10",
"position": 50,
"query": "generative AI jobs"
}
] |
|
How AI will Transform Project Management
|
How AI will Transform Project Management
|
https://graduate.northeastern.edu
|
[
"Meghan Gocke"
] |
In addition, AI robots are able to work on routine project tasks, allowing greater bandwidth for team members to take on more critical and complex tasks through ...
|
There is no shortage of information about the expansion of Artificial Intelligence (AI) and the impact it will have on every facet of our lives. Not too many years ago, there was much skepticism around the use of AI for anything other than repetitive tasks that could be duplicated through machine learning. Today, however, AI can be developed to perform complex tasks that once could only be performed by human intelligence.
Since the emergence of programs such as ChatGPT, AI’s existence has expanded beyond the automotive, aerospace, healthcare, and financial industries that primarily leveraged this technology with advanced robots. Now, industries such as education and marketing are seeing major shifts in workplace processes and procedures.
The adoption of these technologies continues to grow according to a report issued by the McKinsey Global Institute, stating that many companies are seeing “the highest financial returns from AI,” which has greatly contributed to their competitive advantage in the marketplace.
How Will AI Affect Project Management?
Like many other professions, project management will not be immune to the impacts of AI. Many phases of the project lifecycle are already undergoing an evolution in which traditionally manual tasks performed by humans are becoming automated tasks performed by machines.
Here are two key examples to illustrate this change.
Risk Management
Take risk management for example. It is very common to have a project team develop a risk register using various inputs at the onset of a project. From there, updates are made to the register as project managers become aware of new risks through:
Conversations with stakeholders
Observations of the work in progress
Schedule delays based upon impacts from a dependency
In the past, these risks were often perceived as “a foregone conclusion.”
An organization’s risk register was built by an experienced team who had led similar projects in the past and was well-versed in historical risks. Eventually, those risks would become issues, impacting the project team for a variety of reasons, including poor root-cause analysis or lack of collated information for effective evaluation. However, the downside of this method is the stagnation of a register and the inability to capture emerging threats that project managers may have not previously encountered.
There are a number of ways AI can address these challenges in risk management. For example, certain AI technology can synthesize two years’ worth of risk and issue logs while still leveraging historical data to predict the future success or failure of a project—without the hassle of manual upkeep. Using sophisticated algorithms, AI is also able to assess the performance of dependent systems to identify end-of-life risks to projects or even security vulnerabilities to the product being developed. As a result, project managers—and the organizations they support—that leverage AI capabilities will see significant time, money, and resources saved in risk management.
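As a rough illustration of the idea (not the specific AI technology the article alludes to), the sketch below mines historical risk and issue logs for escalation rates and scores a new project's register against them; the log schema and numbers are invented:

```python
def learn_escalation_rates(history):
    """From past projects' logs (hypothetical schema), estimate how often
    each risk category escalated into an actual issue."""
    seen, escalated = {}, {}
    for project in history:
        for risk in project["risks"]:
            seen[risk] = seen.get(risk, 0) + 1
            if risk in project["issues"]:
                escalated[risk] = escalated.get(risk, 0) + 1
    return {r: escalated.get(r, 0) / n for r, n in seen.items()}

def project_risk_score(register, rates, default=0.5):
    """Mean escalation likelihood across the current register; risks never
    seen before fall back to a neutral prior."""
    if not register:
        return 0.0
    return sum(rates.get(r, default) for r in register) / len(register)

history = [
    {"risks": {"vendor delay", "scope creep"}, "issues": {"scope creep"}},
    {"risks": {"vendor delay"}, "issues": set()},
    {"risks": {"scope creep", "attrition"}, "issues": {"scope creep", "attrition"}},
]
rates = learn_escalation_rates(history)
print(project_risk_score({"scope creep", "attrition"}, rates))  # 1.0
```

Unlike a manually maintained register, the rates here update automatically as each new project's logs are appended to `history`.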
Project Estimations
Many project managers leverage Organizational Process Asset repositories or historical business information to estimate a project’s duration, costs, and progress. This is often done one of two ways:
Top-down estimate: Performed quickly by functional management so a project can be fast-tracked
Bottom-up estimate: Completed by team members who may be too conservative in their estimates—leading to inflated costs
Since the emergence of AI, project managers are using robots to streamline this process, allowing them to analyze three years’ worth of historical project data—leveraging factors such as productivity rates, attrition rates, and holiday time—to come up with a project estimate that provides an accurate forecast of future investment needs. In addition, AI robots are able to work on routine project tasks, allowing greater bandwidth for team members to take on more critical and complex tasks through an intelligent process automation (IPA) system.
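A minimal sketch of that kind of estimate, assuming a made-up schema for the historical data and treating productivity, attrition, and holiday time as simple averages rather than a learned model:

```python
def adjusted_estimate(effort_person_days, team_size, history):
    """Scale a raw bottom-up estimate (person-days) into calendar days
    using averages from historical project data (hypothetical fields)."""
    n = len(history)
    productivity = sum(p["productivity"] for p in history) / n  # productive fraction of a day
    attrition = sum(p["attrition"] for p in history) / n        # expected capacity lost
    holidays = sum(p["holiday_days"] for p in history) / n      # average holidays per project
    effective_team = team_size * (1 - attrition)
    calendar_days = effort_person_days / (effective_team * productivity)
    return calendar_days + holidays

history = [
    {"productivity": 0.8, "attrition": 0.1, "holiday_days": 10},
    {"productivity": 0.7, "attrition": 0.1, "holiday_days": 10},
    {"productivity": 0.75, "attrition": 0.1, "holiday_days": 10},
]
print(round(adjusted_estimate(600, 5, history)))  # 188 calendar days
```

The point is not the formula itself but that the correction factors come from data rather than from a manager's optimism or a team member's padding.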
Will Artificial Intelligence Improve Project Management?
With new innovations, there is always some level of uneasiness when it comes to job security and major industry changes. For example, most industry projections indicate that around 1.7 million jobs have been slowly phased out since 2000. For project managers, much AI apprehension revolves around their utility within their organizations. If an automated program can efficiently allocate and assign projects, create accurate investment projections, and calculate risk, they may wonder, “What is left for me?”
While cross-functional teams within an organization are seen as groups of individuals matrixed to a variety of managers, AI’s prominence has altered this definition to the blending of human and robot competencies. Today, no machine can replace the human intuition, creativity, and adaptability that project managers provide for organizations.
It is clear various industries are taking advantage of the impact artificial intelligence has on everyday tasks—especially within project management. In fact, it’s predicted that AI will create 97 million new jobs by 2025.
AI’s ability to automate repetitive processes, such as administrative tasks, frees project managers to focus on strategic planning and problem-solving instead. Potential risks can also be predicted more accurately with a more holistic view of the project vision, allowing project leaders to estimate resource requirements for improved allocation and budgeting. AI-powered tools can also perform real-time monitoring to ensure project plans go smoothly and identify potential bottlenecks, enabling higher project success rates. With the help of AI, project managers can acquire crucial information, make well-informed decisions, and attain superior project results. So how can project managers prepare for the potential impact of AI on the industry?
Is AI the Future of Project Management?
In today’s business world, project managers must embrace technology and leverage AI where possible to increase the likelihood of project success. Humans have unique critical thinking skill sets that when applied to systems, projects, and achieving an organizational mission, can create insights and recommendations needed to propel the project forward to a successful conclusion.
Leveraging AI to automate and improve data sets utilized in project execution will allow organizations to realize optimal investment value in the project and potentially identify savings that could be leveraged for further investments in product development leading to organizational growth.
Northeastern University is at the forefront of incorporating AI into our curriculum through experiential learning opportunities as explained by President Joseph Aoun in his book Robot-Proof. Check out Northeastern University’s catalog of project management degree and certificate programs to learn how you can join our mission of developing “robot-proof” students and become part of an innovative institution leading this new age of technology.
| 2023-01-10T00:00:00 |
2023/01/10
|
https://graduate.northeastern.edu/knowledge-hub/ai-and-project-management/
|
[
{
"date": "2023/01/10",
"position": 88,
"query": "future of work AI"
}
] |
How AI Can Boost Employee Autonomy, Competence
|
How AI Can Boost Employee Autonomy, Competence
|
https://www.informationweek.com
|
[
"Nathan Eddy",
"Freelance Writer"
] |
Deployment of AI technologies can be used to automate business processes and give more time back to employees who are key parts of a transformation.
|
While many people fear the rise of a digital workplace will replace workers with machines, smart technology leaders know that utilizing artificial intelligence and machine learning should benefit employees, not replace them.
A recent study from MIT Sloan and Boston Consulting Group (BCG) suggests AI tools can drive individuals to excel in their independence by helping them learn from past actions.
These tools can also help individuals deepen relationships with coworkers, customers, business partners and other stakeholders.
Automation powered by AI and ML helps companies save time and money by making workers’ lives easier, allowing them to focus on more pressing tasks. Meanwhile, AI/ML technologies are among the few tools that can improve employee competence and autonomy at high rates.
In addition to employee productivity and autonomy, AI/ML-focused initiatives also enhance the effectiveness of an organization's employee decision-making.
Leaders can integrate AI tools with applications to help employees learn from past actions, project future outcomes, and make better decisions. By doing so, leaders can provide their employees with AI-enabled autonomy, giving them room to focus on higher-level tasks and less managerial oversight.
Turning Data Entry Employees into Bot Managers
J. P. Gownder, vice president and principal analyst for the future of work at Forrester, says deploying AI can take rote, repetitive, predictable tasks off workers' plates by automating things that machines can probably do better in the first place.
“There are entire groups of people who have been employed as data entry people, but data entry is not a great human task. It's boring, but it's also prone to lots of errors,” he says. “The better scenario there is to give that worker new skills and new tools, including robotic process automation bots that can help to do the data entry.”
That worker is retrained to become a bot master, and that bot master knows how to handle exceptions to situations that come up, as well as performing quality assurance and acting as a subject matter expert to teach the algorithms to be more effective.
“Adding technology to human workers gives them better tools to be more productive, to do less boring, repetitive stuff, and to use their creativity and their judgment, and indeed their expertise about the process, to actually have a more fulfilling job,” Gownder says.
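The bot-plus-exception-queue pattern Gownder describes can be sketched as follows; the validation rule and record fields are hypothetical:

```python
def run_data_entry_bot(records, validate, enter):
    """Process records automatically; anything the bot cannot validate is
    routed to a human bot master instead of being entered incorrectly."""
    entered, exceptions = [], []
    for record in records:
        if validate(record):
            entered.append(enter(record))
        else:
            exceptions.append(record)  # queued for human review
    return entered, exceptions

# Hypothetical validation rule: an invoice record needs an ID and a positive amount.
def validate(record):
    return bool(record.get("id")) and record.get("amount", 0) > 0

def enter(record):
    return f"posted:{record['id']}"

records = [
    {"id": "A1", "amount": 120},
    {"id": "", "amount": 50},    # missing ID: escalated, not entered wrong
    {"id": "A3", "amount": -5},  # bad amount: escalated
]
entered, exceptions = run_data_entry_bot(records, validate, enter)
print(entered)          # ['posted:A1']
print(len(exceptions))  # 2
```

The former data-entry worker's new job lives in the exception queue: resolving the escalated records and, over time, tightening `validate` so fewer of them recur.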
Autonomy Contributes to Employees' Digital Journeys
Anand Rao, global AI lead and US innovation lead for PwC's emerging technology group, says it is important to think of how autonomous employees can contribute to a company’s digital transformation efforts.
“While digital transformation journeys are typically thought of in a company-wide context, individual’s digital journeys are just as important,” he explains. “By upskilling employees, leaders are giving their workers more digital freedom and autonomy. This ultimately leads to company-wide digital proficiency, an environment ripe for innovation.”
Rao explains that AI and machine learning technology help employees improve competence by equipping them with better, more accurate data to make better decisions, ultimately deepening their understanding of their work.
“Without AI/ML, no human being could gather and analyze enough data to effectively do their job,” he says. “But with AI/ML technologies, employees can get this data at lightning-fast speeds, leaving them more time to work on innovative and creative solutions for their companies.”
He points out how AI/ML technologies also raise the average proficiency and competency of employees by capturing the knowledge and insights from an organization's best performers in an AI/ML system.
“The ‘cognification’ of subject matter expertise is one of the main benefits of an AI/ML system that enable novice employees to perform better,” Rao says.
Forming an Employee-Focused AI/ML Strategy
Rao says responsibility for developing a strategy typically sits with AI and emerging technology leaders and CIOs. However, with any transformation effort, it’s important to engage each leader of the C-suite when implementing new technology to ensure there is buy-in across the board to increase tech adoption.
“HR teams or talent teams focused on ‘future of work’ initiatives are also key stakeholders in developing a strategy,” he adds. “If leaders do not adopt AI/ML capabilities, they run the risk of becoming outdated and being left behind as the rest of the world continues their digital transformation journeys.”
This means leaders must assume authority and enact guidelines and ramifications when using AI/ML technology.
“Without governance, there is potential for harm to manifest through disinformation based on inaccurate data. Regulation helps provide stability and security to the business, and builds trust among their employees and consumers,” Rao says. “A robust and sophisticated data strategy will ensure businesses have control over their data, while still encouraging innovation within their organization.”
The Psychology of Autonomy
Gownder notes that autonomy, generally speaking, has a psychological impact: How do I feel about the work I'm doing? Can I do it without someone micromanaging me? Can I do a certain amount of work on an independent basis?
“It allows employees to get into a flow state where they're involved in the work and not having to deal with lots of interruptions,” he says. “Oftentimes with AI, you're talking about things like chatbots that might help with self-service.”
This could include an employee typing in a question about who at the organization is the expert on a certain topic and then connecting to them, or querying through the bot to find out how to complete a certain task.
“This comes up increasingly in a lot of different job categories, for example in field service or a frontline worker,” Gownder explains. “Field service technicians might be able to pull up [a] kind of schematic or some sort of reference that tells them the steps that they should take.”
When technicians are trying to determine the next best action to fix complex machinery, AI and automation are increasingly part of the picture. “The tools are getting more sophisticated, and they can help you to understand what you should do next,” Gownder says. “A lot of these AI instances are just little moments of assistance and automation that get woven into the technology we already use. And it just creates these micro moments of improvement in your day.”
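A toy version of the "who is the expert on X" self-service lookup Gownder mentions, assuming a hypothetical directory of people and topics (real systems would use embeddings or a knowledge graph rather than keyword overlap):

```python
def find_expert(question, directory):
    """Match question words against each person's listed topics and
    return the best match, or None if nothing overlaps."""
    words = set(question.lower().split())
    best, best_overlap = None, 0
    for person, topics in directory.items():
        overlap = len(words & topics)
        if overlap > best_overlap:
            best, best_overlap = person, overlap
    return best

directory = {
    "Ada": {"payroll", "benefits"},
    "Grace": {"networking", "vpn", "security"},
}
print(find_expert("who can help with vpn security setup", directory))  # Grace
```

Even this crude matcher shows the shape of the "micro moment": the employee gets routed to a person in seconds instead of asking around.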
Building Trust Management, Involving HR, and IT
From Rao's perspective, a critical element for the adoption of AI/ML systems is “trust management”: the AI/ML system needs to build trust with its human users, and such systems need to be designed in a manner that engenders explainability, believability, and fairness.
“Good AI/ML systems have a process for humans to trust the machines and vice versa,” he says. “While change management focuses on human-human interactions, trust management focuses on human-AI interactions.”
Gownder agrees deploying AI to aid employees is not merely an IT issue or an HR issue.
“It is business leaders who are managing people in all roles that are affected,” he says. “HR manages learning and development, but also things like trying to understand what skills the organization has and how they can help employees level up their skills by learning new tools.”
The organization's technologists can make sure all this is done in a secure, safe, effective, and performant way.
“It's really business plus HR plus IT,” Gownder says. “All those folks are going to be important to the effort.”
| 2023-01-10T00:00:00 |
https://www.informationweek.com/it-leadership/how-ai-can-boost-employee-autonomy-competence
|
[
{
"date": "2023/01/10",
"position": 4,
"query": "workplace AI adoption"
},
{
"date": "2023/01/10",
"position": 56,
"query": "artificial intelligence business leaders"
}
] |
|
Innovation: Your solution for weathering uncertainty
|
Innovation: Your solution for weathering uncertainty
|
https://www.mckinsey.com
|
[
"Matt Banholzer",
"Michael Birshan",
"Rebecca Doherty",
"Laura Laberge"
] |
In this article, we look at how innovation in business is a crucial element that can help overcome supply-chain disruption and economic uncertainty.
|
In times of disruption and great uncertainty, most organizations tend to protect what they have and wait for a return to “normal.” That’s a high-risk strategy today because we may be on the cusp of a new era. Structural supply-chain issues, rising interest rates, and sustainability challenges are just a few conditions that have become the new norm and hold critical implications for business models. Amid this much change, merely trying to manage costs and raise productivity is unlikely to overcome the growth challenge that seven out of eight organizations face today. Instead, companies need to find emerging pockets of growth that can help them secure long-term success.
Innovation is critical to achieving that goal. Enduring outperformance requires management teams to refocus innovation efforts on fresh opportunities for growth and diversification—and to develop new products, invest in new business models, and forge new partnerships to seize those opportunities. By taking defensive measures such as conserving cash while also going on the offense, “ambidextrous leaders” create value despite volatility, setting up their organizations to thrive in a world that has likely changed in fundamental ways.
Indeed, our research and experience show that companies tend to fall behind if they focus solely on avoiding the downside. Since the start of the Great Recession in 2008, North American and European companies that controlled operating costs while also prioritizing revenue growth have delivered far more value to shareholders than their industry peers (Exhibit 1). To capture growth opportunities while creating more strategic options in a fast-changing environment, innovation is key. Many companies are already acting: in our 2021 New Business Building Survey, respondents reported that, on average, they expect half of their revenues in the next five years to come from entirely new products, services, and businesses.
Innovation has always been essential to long-term value creation and resilience because it creates countercyclical and noncyclical revenue streams. Paradoxically, making big innovation bets may now be safer than investing in incremental changes. Our long-standing research shows that innovation success rests on the mastery of eight essential practices. Five of these practices are particularly important today: resetting the aspiration based on the viability of current businesses, choosing the right portfolio of initiatives, discovering ways to differentiate value propositions and move into adjacencies, evolving business models, and extending efforts to include external partners.
Raise your innovation aspiration to address new risks and opportunities
In recent years, the assumptions underpinning many business lines and growth initiatives have changed or broken down entirely. Companies with business models optimized to a specific set of global conditions are more vulnerable to the sea change underway and need to invest more, not less, in innovation to open new paths of viability. We have already seen this play out in the shortages affecting consumer goods, retail, and auto sectors, and as the energy crisis expands, disruptions are impacting more industries. Conversely, business models that address today’s uncertainties through reshoring production or expanding into digital offerings, for example, can help companies ride out disruptions. In effect, the risks associated with business as usual versus bold innovation have been inverted: in times of fundamental change, shifting resources toward big innovation bets is an important hedge against uncertainty (Exhibit 2).
We saw this timeless pattern play out in earlier cycles. For instance, amid the 2002 downturn, Best Buy recognized it couldn’t win on assortment breadth and competitive pricing alone, so it invested in business model innovation, creating and scaling its Geek Squad consumer support service, which online and brick-and-mortar rivals couldn’t easily replicate. Two decades later, a European energy company likewise facing a declining core business recently opted to move into a high-growth and high-innovation segment by expanding into renewable energy in other markets.
The energy crisis is pushing even distant sectors to embrace innovation. Take beer: the carbon dioxide used for carbonation is a byproduct of ammonia production, an energy-intensive process employed mostly in the manufacture of fertilizer. High energy prices in Europe have stalled ammonia production, creating a shortage of carbon dioxide at the peak of the region’s beer season. While some European brewers opted to shut down, others have instead embraced process innovation by exploring nitrogen or other means of creating beer foam.
Choose a balanced portfolio of short- and long-term innovations
In times of disruption or deep uncertainty, companies have to carefully balance short-term innovations aimed at cost reductions and potential breakthrough bets. As customers’ demands change, overindexing on small product tweaks (that address needs which may be temporary) is unlikely to boost long-term performance. However, “renovations” to designs and processes can produce savings that help fund longer-term investments in innovations that may create routes to profitable growth.
For example, when a consumer packaged goods company found itself falling short of its growth and margin targets but lacked capital to invest in new offerings, a cross-functional team built road maps for both immediate and longer-term product offerings. Within four months, the company sized the financial impact and execution feasibility of cost-reducing quick wins such as package optimization and formula rationalization as well as more ambitious projects, including a shift to sustainable packaging and entirely new products. The cost reductions from the incremental innovations yielded funds the company could reinvest in longer-term growth ideas while an innovation process reduced product development timelines by 75 percent.
Similarly, when a global manufacturer of sinks and faucets found itself falling behind competitors, it conducted a review of its innovation portfolio. The company found it had a limited innovation pipeline to feed new growth and that 65 percent of employees were focused on largely incremental projects expected to contribute only 5 percent to the portfolio’s net present value (NPV). The management team then sought to rebalance the portfolio toward bolder, higher-NPV initiatives, including bottom-up product redesigns to reduce the time to market. After reallocating resources to projects with higher commercial potential, the company saw a threefold increase in revenue and reduced the time to market for new products by nearly 40 percent.
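To make the portfolio review concrete, here is a minimal sketch of the NPV math behind such an analysis; the project tags, discount rate, and cash flows are invented for illustration:

```python
def npv(cash_flows, rate):
    """Net present value of yearly cash flows, year 0 first."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

def incremental_share(portfolio, rate=0.1):
    """Fraction of total portfolio NPV coming from projects tagged
    'incremental' (hypothetical tag), the imbalance metric described above."""
    total = sum(npv(p["cash_flows"], rate) for p in portfolio)
    small = sum(npv(p["cash_flows"], rate)
                for p in portfolio if p["kind"] == "incremental")
    return small / total

portfolio = [
    {"kind": "incremental", "cash_flows": [-10, 6, 6]},
    {"kind": "bold", "cash_flows": [-100, 40, 60, 80]},
]
print(round(incremental_share(portfolio), 2))  # 0.01
```

A result like this, paired with a headcount breakdown, is what reveals the mismatch the manufacturer found: most of the people on projects contributing almost none of the value.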
Discover and tap into emerging adjacencies
Pervasive uncertainty is a good opportunity for companies to look for diversification or expansion opportunities outside their core businesses. Economic shocks such as the COVID-19 pandemic, supply-chain disruptions, and geopolitical tensions have led numerous organizations to tap innovation opportunities in adjacent markets, such as grocers ramping up delivery options. Similarly, mobility-as-a-service providers have found a valuable new niche in delivering restaurant food, and some electric vehicle manufacturers are now monetizing battery production and recycling. A May 2021 McKinsey survey revealed that during the first 12 months of the pandemic, top-decile economic performers innovated nearly twice as fast as their low-performing peers in generating new products and services (Exhibit 3).
Our recent research shows that adjacencies closer to the core business and current competencies tend to be easier to capture but can still represent significant new sources of growth. Some agricultural companies, for example, have shifted from selling farming machinery and fertilizer to building ecosystems and providing insights to help farmers be more productive. Drug and medical device manufacturers likewise are increasingly looking beyond selling medications and machines to helping patients manage their conditions and live longer, healthier lives through end-to-end care journeys.
Today, the sustainability imperative is driving many such innovation bets. Consumers’ concern about climate change is leading packaged goods companies to invest in sustainable ingredients and packaging, while some clothing manufacturers are recycling old clothes to make new ones. Sometimes, regulations can spur innovations in adjacent markets. For example, tax credits under 45Q, a section of the US federal tax code enacted in 2008, encourage investments in carbon capture and storage. The Inflation Reduction Act of 2022, meanwhile, creates opportunities in sustainable fuels and chemicals for firms that can leverage these incentives to build new businesses. Similarly, the European Union’s decarbonization goals under Roadmap 2050 have created significant incentives to innovation.
As industry landscapes shift and customer demands evolve, incumbents should look for innovation opportunities with the mindset of start-ups. Expecting revenue or margin growth to continue as before can prevent bold action and invite attackers that view established companies’ margins as opportunity.
Evolve business models for changing conditions
Capturing new opportunities—either by aligning with emerging trends or venturing into adjacent markets—often requires business model changes, which can have the added benefit of boosting resilience. Adopting new business models can let companies leverage more of their core competencies than investments further afield would, while also making the organization more adaptable and generating new growth. Such innovations can include evolutions of value propositions, economic models, production models, routes to market, and the use of assets and capabilities. For example, new ways to organize supply chains or ecosystems, shifting from selling products to offering services, or moving from B2B to B2C can give companies new strategic options as business conditions change.
Energy companies that pivoted to providing locally produced, renewable energy, for instance, are finding the shift insulates their operations from near-term swings in energy prices. Similarly, organizations moving to provide products to increasingly health-conscious consumers are by necessity diversifying their supply bases to acquire the needed ingredients, thus improving the resilience of their supply chains.
Such tactical gains often bring strategic benefits. For example, one shoe manufacturer is now offering recyclable shoes on subscription—a customer returns a pair at the end of the lease and the company uses the materials to construct the next batch.
Extend efforts to include external partners
Over the past three years, top economic performers have doubled down on investments in new partnerships (Exhibit 4). Alliances and joint ventures can enable large companies to rapidly scale new business models or offerings that would take a long time to develop organically.
The current market volatility can provide fresh opportunities for large companies to extend their networks of business partners, or even acquire them. With many start-ups struggling and lower availability of venture capital, incumbents can help fill the funding gap while gaining access to important capabilities and technologies.
For example, a European energy management company teamed up with a private equity firm on a joint venture that builds and operates clients’ energy infrastructure. The new business helps organizations make the transition to renewable energy sources, for example by helping fleet operators shift to zero-emissions vehicles.
In times of increasing disruption and uncertainty, the risk of continuing with business as usual can exceed the risk of leaning into the headwinds. To join the ranks of the truly resilient and enable through-cycle growth, now is the time to refresh the innovation portfolio, discover fresh insights and opportunities, and evolve your business models.
Published January 10, 2023. Source: https://www.mckinsey.com/capabilities/strategy-and-corporate-finance/our-insights/innovation-your-solution-for-weathering-uncertainty