title_s
stringlengths 2
79
| title_dl
stringlengths 0
200
| source_url
stringlengths 13
64
| authors
listlengths 0
10
| snippet_s
stringlengths 0
291
| text
stringlengths 21
100k
| date
timestamp[ns]date 1926-02-14 00:00:00
2030-07-14 00:00:00
| publish_date_dl
stringlengths 0
10
| url
stringlengths 15
590
| matches
listlengths 1
278
|
---|---|---|---|---|---|---|---|---|---|
Seasonal schools on the path to democratization of AI
|
Seasonal schools on the path to democratization of AI
|
https://www.sas.com
|
[] |
... skills shortage. This panel will reflect on the experience of SAS Seasonal Schools in the path to democratization of AI. March 17 conversation. #SASchat ...
|
SASchat
Self-service by domain experts and line-of-business users is one of the answers to solving the data science skills shortage. This panel will reflect on the experience of SAS Seasonal Schools in the path to democratization of AI.
| 2023-03-15T00:00:00 |
https://www.sas.com/sas/events/saschat/seasonal-schools-on-the-path-to-democratization-of-ai.html
|
[
{
"date": "2023/03/15",
"position": 69,
"query": "AI skills gap"
}
] |
|
Gainwell Drives Flexibility with SkyHive's Skills AI
|
Gainwell Drives Flexibility with SkyHive's Skills AI
|
https://www.skyhive.ai
|
[] |
Gainwell adopts SkyHive AI to build a skills-driven culture ... From there, Gainwell can see the skills each employee needs to learn to bridge the gap ...
|
The more you know about your workers’ skills, the faster your company can move.
That was a critical motivation for Gainwell, a leader in cloud technology in the healthcare field, when it turned to SkyHive by Cornerstone to improve talent management. Gainwell needs to move people from project to project fairly quickly as it gets new contracts with states or private companies. The company wants to improve both recruiting and retention, so better internal mobility and a skills-driven culture is a must.
“We really need to be able to staff positions quickly as we flex up and down,” says Julie Moore, Principal, Talent & Development. “If we’re working on a project for a state or client and that comes to an end, we need to be ready for the next one. Plus, we have a lot of priority roles, and with the right skills always in short supply, we can’t always fill them from the outside.”
But to do that, Gainwell needs to know the capabilities of everyone in its workforce.
It turned to SkyHive by Cornerstone.
A Quick Rollout
Implementing SkyHive Enterprise meant learning the skills of the entire Gainwell workforce, and examining the skills needed in each role. From there, Gainwell can see the skills each employee needs to learn to bridge the gap between what they know and what they need to know to progress in their careers.
The launch was in March 2022. Internally, Gainwell called the innovative skills inventory system “G>Force.”
Gainwell’s goal was to have 80 percent of employees complete a skills profile listing at least 10 skills. It marketed G>Force using the company newsletter and intranet. By the end of June, it had hit that 80 percent mark. On average, the 10,600 employees who had completed a profile averaged 22 skills per profile.
Building employee profiles was only the beginning of Gainwell’s skills transformation journey. Leaders encouraged employees to get the most out of G>Force, and in July the company launched the training, career pathing, and mentoring modules. This means employees indicate their desired career path. From there, employees can find mentors, courses, projects, and new internal jobs all based on the skills they want and need to add.
Mentors, Courses, and Career Paths
By the end of 2022, 83 percent of Gainwell had a skills profile.
When employees complete training, projects, or earn a certification, they update their profiles with the new skills or added proficiency. Moore’s team is encouraging employees to review it quarterly or semi-annually at a minimum.
Now, with G>Force, the 11,000-employee company:
Has a skills inventory to identify internal employees to fill open positions and promote from within
Is improving recruiting and retention by providing a culture of growth and opportunity
Uses employees’ expertise on critical projects; it now knows who has skills in areas such as AWS or Agile
Employees are happy, too. In a recent survey, G>Force scored a strong Net Promoter Score of 39. Said one employee: “The Skill Profile Development is an amazing tool. Whenever I get a chance I go into it, add skills I've developed, and seek more.” Another survey respondent said, “Gainwell University helped improve my skills development and other free online courses through G>force. It’s a wonderful tool that every employee should use to their advantage.”
To learn more about Gainwell’s skills transformation journey, read the full case study.
Are you ready to become a skills-based organization? Book a demo with SkyHive by Cornerstone today.
| 2023-03-15T00:00:00 |
https://www.skyhive.ai/resource/for-business-flexibility-gainwell-turns-to-a-skills-driven-culture
|
[
{
"date": "2023/03/15",
"position": 86,
"query": "AI skills gap"
}
] |
|
Data Science Job Market - March 2023 Update
|
Data Science Job Market - March 2023 Update
|
https://www.interviewquery.com
|
[] |
Breakdown of Positions Driving Growth: The position with the highest monthly growth in job openings was AI and machine learning research at 32.0%, followed by ...
|
Over the last year, Data Analyst jobs consistently had the highest demand, peaking at around 134% of their current value in mid-2022. The numbers declined towards the end of the year but have been rising since then.
Data Engineering job postings followed a fluctuating trend, with an initial surge in demand that peaked around the third quarter of 2022. However, this demand eventually subsided, reaching a low of about 88% of their current value in early 2023 before starting to recover.
Lastly, Data Scientist job postings saw a steady decline throughout the year. Starting strong in March 2022, the postings gradually decreased month-over-month, hitting their lowest point in March 2023.
Over the past month, Data Engineer jobs saw the most significant growth, with a 13.6% increase in job postings. Data Analyst jobs also experienced growth, with an 8.0% increase. Meanwhile, ML Engineer jobs declined by 3.9%, and Data Scientist jobs faced a more substantial decrease of 7.0%.
Data Analytics was the only position that showed yearly growth in job postings, increasing by 5.1%. In contrast, Data Engineer jobs decreased by 23.5%, and Data Scientist jobs saw a more drastic decline of 55.0% in the same time period. The most significant decrease occurred in ML Engineer job postings, with a 58.1% drop.
Over the past year, the job market has seen a growing preference for data analysts, reaching 51.4% of all data job postings in March 2023. Meanwhile, the share of data science job postings experienced a consistent decline. The share of data engineering job postings fluctuated, peaking at 37.6% in December 2022, declining at the start of 2023, and rising back in March 2023 to 32.0%.
| 2023-03-15T00:00:00 |
https://www.interviewquery.com/p/job-market-update-march-2023
|
[
{
"date": "2023/03/15",
"position": 82,
"query": "AI labor market trends"
}
] |
|
New study on AI in the workplace: Workers need control ...
|
New study on AI in the workplace: Workers need control options to ensure co-determination
|
https://algorithmwatch.org
|
[] |
Employees must be included in the implementation process, if so-called Artificial Intelligence (AI) systems are introduced to their workplace.
|
Employees must be included in the implementation process, if so-called Artificial Intelligence (AI) systems are introduced to their workplace. Such systems are used by many companies for automated decision-making (ADM) already. They are often based on Machine Learning (ML) algorithms. The European Union’s Artificial Intelligence Act (AI Act) draft is designed to safeguard workers’ rights, but such legislative measures won’t be enough. An AlgorithmWatch study funded by the Hans Böckler Foundation explains how workers’ co-determination can be practically achieved in this regard.
Berlin, 16 March 2023. ADM systems are used to automatically scan CVs during the hiring process, allocate shifts to employees, conduct work performance evaluations, select employees for educational programs or promotions, and they might even be used to decide who to lay off. With the help of AI applications, underlying structures in data sets at hand can be detected, which allows for predicting future developments. Such predictions can lead to decisions that have far-reaching consequences for the staff.
Power imbalance
Due to the opacity of these systems, they further advance the power imbalance between employers and employees. The employees are the ones being subjected to the decisions of these systems while they can’t evaluate if their decisions are fair, just, and based on appropriate data sets. After all, what’s missing is an oversight authority for these systems. “Automated decision-making systems in workforce management undermine established processes that ensure worker participation. Without an insight into these systems, employees exposed to them more often than not remain powerless. For this reason, comprehensive transparency has to be introduced as a standard. Furthermore, employees must be included and have a say in every process concerning their workplace,” the study’s co-author, Dr. Anne Mollen, concludes.
Regulation efforts
With the AI Act, the European Union currently tries to regulate the use of ADM systems, as they are considered to pose high risks to individuals. This approach, however, only addresses worst case scenarios in the workplace. Further political measures, such as mandatory transparency requirements, are essential to avoid having ADM systems keep levering out traditional forms of employee representation and co-determination.
Co-determination in practice
When ADM workplace management systems are introduced, employees and their representatives should actively attend to their interests during the entire planning, development, implementation, and deployment processes. An exchange with Machine Learning experts could help them get a basic understanding of fundamental systemic connections and ask important questions on a case-by-case basis. This would enable them to look at these systems critically, see their shortcomings, and assess the potential risks that come with them.
Read more on our policy & advocacy work on ADM in the workplace.
| 2023-03-15T00:00:00 |
https://algorithmwatch.org/en/working-paper-ai-workplace-2023/
|
[
{
"date": "2023/03/15",
"position": 11,
"query": "AI regulation employment"
},
{
"date": "2023/03/15",
"position": 8,
"query": "machine learning workforce"
},
{
"date": "2023/03/15",
"position": 1,
"query": "AI labor union"
},
{
"date": "2023/03/15",
"position": 3,
"query": "artificial intelligence workers"
}
] |
|
What HR everywhere needs to know about NYC's new AI ...
|
What HR everywhere needs to know about NYC’s new AI bias law
|
https://hrexecutive.com
|
[
"Phil Albinus",
"Phil Albinus Is The Former Hr Tech Editor For"
] |
The law compels employers to conduct AI tool audits to ensure that these HR solutions do not have biases that might impede the hiring and promotion of workers.
|
Update: The Automated Employment Decision Tools law went into effect on Jan. 1 as planned, but enforcement remains delayed until April 15 after significant public comment. Revisions also have been made to the proposed rules, narrowing the definition of the tools and making other changes.
Original article, Oct. 3, 2022: Last year, New York City passed a new law that requires organizations that do business in the city to perform annual audits of their AI tools to determine that biases do not appear in the tools. This groundbreaking law is the first of its kind in the U.S. and could become a reality for other cities and eventually entire states in the near future, legal experts warn.
What do HR leaders need to know about this law that goes into effect on Jan. 1? Here’s a quick guide.
What is the law?
Passed by the New York City Council, it is designed to protect employees during the hiring and promotion processes from unlawful bias by employers that rely on automated employment decision tools. These include recruitment tools that read and select a job candidate’s resume and job application.
The law compels employers to conduct AI tool audits to ensure that these HR solutions do not have biases that might impede the hiring and promotion of workers. In an effort to provide transparency, employers are required to disclose the data either publicly on the company’s website or upon request.
The new regulation mandates that “at least 10 business days prior to using a tool, an employer or employment agency must notify each candidate or employee who resides in New York City that an automated employment decision tool will be used in connection with the assessment or evaluation of the candidate or employee,” according to a blog from the law firm Ogletree Deakins.
What is a bias audit?
A bias audit, according to Ogletree Deakins, is “an impartial evaluation by an independent auditor” to test a recruitment or employee evaluation tool to determine if the AI could have a negative impact on a job candidate’s hiring or a current employee pursuing a promotion. This means the law covers a person’s race, gender and ethnicity.
Is NYC trying to outlaw AI tools?
No, the New York City law is not designed to prevent businesses from using AI, assures Simone Francis, an attorney with the technology practice of Ogletree Deakins. Instead, it aims to eliminate unintended biases that might have been programmed into these tools inadvertently.
“There’s certainly been a lot of conversation about the ability of AI to potentially eliminate biases, but the law is intended to put certain processes in place to ensure that AI is being used in a way that does not lead to unintended results, including results that would conflict with existing anti-discrimination laws,” she says.
Who performs the bias audit?
The responsibility for performing the audit resides with the organization using these tools, not the AI solution providers. However, it cannot be performed by the departments that use the AI tool.
“The New York City law specifically says that you have to have an independent audit, which means you cannot just rely on the vendor and the vendor’s assurances,” says Francis.
Should HR leaders expect to perform these audits on a regular basis? If so, how often?
Francis expects that these audits will not be a “one and done proposition.” Instead, HR leaders should assume that they must be performed on a regular cadence.
“We’re still trying to develop some understanding of what the city means by that,” says Francis.
Does this law apply to only companies headquartered in New York City?
No, this law applies to any business that has offices and employees in Manhattan and the surrounding boroughs and uses AI decision tools for hiring and promoting employees. If a business based in, say, North Carolina or Silicon Valley has a New York City office, it must comply with this law.
What are the penalties for not performing the audits in an open and timely manner?
So far, rather light. According to Francis, the penalties range from as little as $500 to $5,000. This does not include the potential damage to a company’s reputation, she adds.
Is this law set in stone?
Yes, but the details are still being worked out. The New York City Council will hold a public hearing on Oct. 24 following a comment period.
Could laws like this pop up in other cities and states and nationally?
It’s reasonable to assume so, says Francis. She adds that laws pertaining to anti-discrimination goals tend to start in one city or state and are then adopted by other cities and states. And the federal government is getting interested. The EEOC issued guidance this spring instructing employers to evaluate AI tools for bias against people with disabilities, Democrats introduced a bill in Congress focused on automation, though it hasn’t advanced, and the inaugural National Artificial Intelligence Advisory Committee held its first meeting in May to discuss AI’s use in several areas, including those related to the workforce, according to the Brookings Institute.
What should HR leaders and the IT teams that serve them consider when dealing with the new law?
It’s important to understand how AI tools are used, says Francis. HR and HRIS must “get their arms around that because how they’re actually used could either trigger application of this law in NYC or in other jurisdictions in the future,” she says.
| 2023-03-15T00:00:00 |
2023/03/15
|
https://hrexecutive.com/what-hr-everywhere-needs-to-know-about-nycs-new-ai-bias-law/
|
[
{
"date": "2023/03/15",
"position": 24,
"query": "AI regulation employment"
},
{
"date": "2023/03/15",
"position": 11,
"query": "government AI workforce policy"
},
{
"date": "2023/03/15",
"position": 54,
"query": "artificial intelligence hiring"
}
] |
What are the accountability and governance implications of ...
|
What are the accountability and governance implications of AI?
|
https://ico.org.uk
|
[] |
The use of this system has implications for the allocation of job opportunities to female candidates and the relevant economic results. Example or ...
|
Further reading – European Data Protection Board The European Data Protection Board (EDPB), which has replaced the Article 29 Working Party (WP29), includes representatives from the data protection authorities of each EU member state. It adopts guidelines for complying with the requirements of the EU version of the GDPR. The EDPB has produced guidelines on: Data protection impact assessments;
Data Protection Officers (‘DPOs’); and
Automated individual decision-making and profiling EDPB guidelines are no longer directly relevant to the UK regime and are not binding under the UK regime. However, they may still provide helpful guidance on certain issues.
How should we understand controller / processor relationships in AI?
Why is controllership important for AI systems?
Often, several different organisations will be involved in developing and deploying AI systems which process personal data.
The UK GDPR recognises that not all organisations involved in the processing will have the same degree of control or responsibility. It is important to be able to identify who is acting as a controller, a joint controller or a processor so you understand which UK GDPR obligations apply to which organisation.
How do we determine whether we are a controller or a processor?
You should use our existing guidance on controllers and processors to help you with this. This is a complicated area, but some key points from that guidance are:
You should take the time to assess, and document, the status of each organisation you work with in respect of all the personal data processing activities you carry out.
If you exercise overall control of the purpose and means of the processing of personal data – you decide what data to process, why and how – you are a controller.
If you don’t have any purpose of your own for processing the data and you only act on a client’s instructions, you are likely to be a processor – even if you make some technical decisions about how you process the data.
Organisations that determine the purposes and means of processing will be controllers regardless of how they are described in any contract about processing services.
As AI usually involves processing personal data in several different phases or for several different purposes, it is possible that you may be a controller or joint controller for some phases or purposes, and a processor for others.
What type of decisions mean we are a controller?
Our guidance says that if you make any of the following overarching decisions, you will be a controller:
to collect personal data in the first place;
what types of personal data to collect;
the purpose or purposes the data are to be used for;
which individuals to collect the data about;
how long to retain the data; and
how to respond to requests made in line with individuals’ rights.
For more information, see the are we a controller? checklist in our Guide to UK GDPR, and our more detailed guidance on controllers and processors.
What type of decisions can we take as a processor?
Our guidance says that you are likely to be a processor if you don’t have any purpose of your own for processing the data and you only act on a client’s instructions. You may still be able to make some technical decisions as a processor about how the data is processed (the means of the processing). For example, where allowed in the contract, you may use your technical knowledge to decide:
the IT systems and methods you use to process personal data;
how you store the data;
the security measures that will protect it; and
how you retrieve, transfer, delete or dispose of that data.
How may these issues apply in AI?
When AI systems involve a number of organisations in the processing of personal data, assigning the roles of controller and processor can become complex. For example, when some of the processing happens in the cloud. This can raise broader questions outside the scope of this guidance.
For example, questions about the types of scenario that could result in an organisation becoming a controller, which may include when an organisation makes decisions about:
the source and nature of the data used to train an AI model;
the target output of the model (what is being predicted or classified);
the broad kinds of ML algorithms that will be used to create models from the data (eg regression models, decision trees, random forests, neural networks);
feature selection – the features that may be used in each model;
key model parameters (eg how complex a decision tree can be, or how many models will be included in an ensemble);
key evaluation metrics and loss functions, such as the trade-off between false positives and false negatives; and
how any models will be continuously tested and updated: how often, using what kinds of data, and how ongoing performance will be assessed.
We will also consider questions about when an organisation is (depending on the terms of their contract) able to make decisions to support the provision of AI services, and still remain a processor. For example, in areas such as:
the specific implementation of generic ML algorithms, such as the programming language and code libraries they are written in;
how the data and models are stored, such as the formats they are serialised and stored in, and local caching;
measures to optimise learning algorithms and models to minimise their consumption of computing resources (eg by implementing them as parallel processes); and
architectural details of how models will be deployed, such as the choice of virtual machines, microservices, APIs.
We intend to address these issues in more detail in future guidance products, including additional AI-specific material, as well as revisions to our cloud computing guidance. As we undertake this work, we will consult and work closely with key stakeholders, including government, to explore these issues and develop a range of scenarios when the organisation remains a data processor as it provides AI services.
In our work to date we have developed some indicative example scenarios:
Example An organisation provides a cloud-based service consisting of a dedicated cloud computing environment with processing and storage, and a suite of common tools for ML. These services enable clients to build and run their own models, with data they have chosen, but using the tools and infrastructure the organisation provides in the cloud. The clients will be controllers, and the provider is likely to be a processor. The clients are controllers as they take the overarching decisions about what data and models they want to use, the key model parameters, and the processes for evaluating, testing and updating those models. The provider as a processor could still decide what programming languages and code libraries those tools are written in, the configuration of storage solutions, the graphical user interface, and the cloud architecture.
Example An organisation provides live AI prediction and classification services to clients. It develops its own AI models, and allows clients to send queries via an API (‘what objects are in this image?) to get responses (a classification of objects in the image). First, the prediction service provider decides how to create and train the model that powers its services, and processes data for these purposes. It is likely to be a controller for this element of the processing. Second, the provider processes data to make predictions and classifications about particular examples for each client. The client is more likely to be the controller for this element of the processing, and the provider is likely to be a processor.
Example An AI service provider isolates different client-specific models. This enables each client to make overarching decisions about their model, including whether to further process personal data from their own context to improve their own model. As long as the isolation between different controllers is complete and auditable, the client will be the sole controller and the provider will be a processor.
How should we manage competing interests when assessing AI-related risks?
Your use of AI must comply with the requirements of data protection law. However, there can be a number of different values and interests to consider, and these may at times pull in different directions. These are commonly referred to as ‘trade-offs’, and the risk-based approach of data protection law can help you navigate them. There are several significant examples relating to AI, which we discuss in detail elsewhere:
If you are using AI to process personal data you therefore need to identify and assess these interests, as part of your broader consideration of the risks to the rights and freedoms of individuals and how you will meet your obligations under the law.
The right balance depends on the specific sectoral and social context you operate in, and the impact the processing may have on individuals. However, there are methods you can use to assess and mitigate trade-offs that are relevant to many use cases.
How can we manage these trade-offs?
In most cases, striking the right balance between these multiple trade-offs is a matter of judgement, specific to the use case and the context an AI system is meant to be deployed in.
Whatever choices you make, you need to be accountable for them. Your efforts should be proportionate to the risks the AI system you are considering to deploy poses to individuals. You should:
identify and assess any existing or potential trade-offs, when designing or procuring an AI system, and assess the impact it may have on individuals;
consider available technical approaches to minimise the need for any trade-offs;
consider any techniques which you can implement with a proportionate level of investment and effort;
have clear criteria and lines of accountability about the final trade-off decisions. This should include a robust, risk-based and independent approval process;
where appropriate, take steps to explain any trade-offs to individuals or any human tasked with reviewing AI outputs; and
review trade-offs on a regular basis, taking into account, among other things, the views of individuals whose personal data is likely to be processed by the AI (or their representatives) and any emerging techniques or best practices to reduce them.
You should document these processes and their outcomes to an auditable standard. This will help you to demonstrate that your processing is fair, necessary, proportionate, adequate, relevant and limited. This is part of your responsibility as a controller under Article 24 and your compliance with the accountability principle under Article 5(2). You must also capture them with an appropriate level of detail where required as part of a DPIA or a legitimate interests assessment (LIA) undertaken in connection with a decision to rely on the "legitimate interests" lawful basis for processing personal data.
You should also document:
how you have considered the risks to the individuals that are having their personal data processed;
the methodology for identifying and assessing the trade-offs in scope; the reasons for adopting or rejecting particular technical approaches (if relevant);
the prioritisation criteria and rationale for your final decision; and
how the final decision fits within your overall risk appetite.
You should also be ready to halt the deployment of any AI systems, if it is not possible to achieve a balance that ensures compliance with data protection requirements.
Outsourcing and third-party AI systems
When you either buy an AI solution from a third party, or outsource it altogether, you need to conduct an independent evaluation of any trade-offs as part of your due diligence process. You are also required to specify your requirements at the procurement stage, rather than addressing trade-offs afterwards.
Recital 78 of the UK GDPR says producers of AI solutions should be encouraged to:
take into account the right to data protection when developing and designing their systems; and
make sure that controllers and processors are able to fulfil their data protection obligations.
You should ensure that any system you procure aligns with what you consider to be the appropriate trade-offs. If you are unable to assess whether the use of a third party solution would be data protection compliant, then you should, as a matter of good practice, opt for a different solution. Since new risks and compliance considerations may arise during the course of the deployment, you should regularly review any outsourced services and be able to modify them or switch to another provider if their use is no longer compliant in your circumstances.
For example, a vendor may offer a CV screening tool which effectively scores promising job candidates but may ostensibly require a lot of information about each candidate to assist with the assessment. If you are procuring such a system, you need to consider whether you can justify collecting so much personal data from candidates, and if not, request the provider modify their system or seek another provider.
Further reading inside this guidance See our section on ‘what data minimisation and privacy-preserving techniques are available for AI systems?’
Culture, diversity and engagement with stakeholders
You need to make significant judgement calls when determining the appropriate trade-offs. While effective risk management processes are essential, the culture of your organisation also plays a fundamental role.
Undertaking this kind of exercise will require collaboration between different teams within the organisation. Diversity, incentives to work collaboratively, as well as an environment in which staff feel encouraged to voice concerns and propose alternative approaches are all important.
The social acceptability of AI in different contexts, and the best practices in relation to trade-offs, are the subject of ongoing societal debates. Consultation with stakeholders outside your organisation, including those affected by the trade-off, can help you understand the value you should place on different criteria.
What about mathematical approaches to minimise trade-offs?
In some cases, you can precisely quantify elements of the trade-offs. A number of mathematical and computer science techniques known as ‘constrained optimisation’ aim to find the optimal solutions for minimising trade-offs.
For example, the theory of differential privacy provides a framework for quantifying and minimising trade-offs between the knowledge that can be gained from a dataset or statistical model, and the privacy of the people in it. Similarly, various methods exist to create ML models which optimise statistical accuracy while also minimising mathematically defined measures of discrimination.
While these approaches provide theoretical guarantees, it can be hard to meaningfully put them into practice. In many cases, values like privacy and fairness are difficult to meaningfully quantify. For example, differential privacy may be able to measure the likelihood of an individual being uniquely identified from a particular dataset, but not the sensitivity of that identification. Therefore, they may not always be appropriate. If you do decide to use mathematical and computer science techniques to minimise trade-offs, you should always supplement these methods with a more qualitative and holistic approach. But the inability to precisely quantify the values at stake does not mean you can avoid assessing and justifying the trade-off altogether; you still need to justify your choices.
In many cases trade-offs are not precisely quantifiable, but this should not lead to arbitrary decisions. You should perform contextual assessments, documenting and justifying your assumptions about the relative value of different requirements for specific AI use cases.
| 2023-03-15T00:00:00 |
https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/what-are-the-accountability-and-governance-implications-of-ai/
|
[
{
"date": "2023/03/15",
"position": 82,
"query": "AI regulation employment"
}
] |
|
AI Guardrails - Types and the Legal Risks They Mitigate
|
https://connexcs.com
|
[] |
Example of Regulatory Action ... In 2018, it was revealed that Amazon had developed an AI-powered hiring tool that showed bias against women. The system, trained ...
|
AI is no longer just a tool—it’s a decision-maker, a content creator, even a negotiator. But with great power comes great… liability.
As artificial intelligence rapidly weaves itself into the fabric of business, law, and daily life, it’s also opening doors to legal and ethical landmines.
Biased algorithms, hallucinated facts, and opaque decisions aren't just technical glitches—they're lawsuits waiting to happen.
Enter AI guardrails—the frameworks designed to keep intelligent systems smart, safe, and on the right side of the law. In this blog, we’ll break down five critical types of guardrails—ethical, technical, regulatory, transparency & accountability, and human oversight & intervention—and explore how each one mitigates specific legal risks.
Strap in. The future of AI isn’t just about capability—it’s about control.
Imagine them as the safety nets for artificial intelligence. They are the boundaries that prevent it from veering off course. So essentially, they're a set of rules, policies, and technical implementations.
AI Guardrails are designed to ensure AI systems behave ethically, responsibly, and within predefined limits. Their purpose? To prevent unintended consequences, biases, or harmful outputs.
Think of it like this: a powerful engine needs a steering wheel and brakes. AI, with its vast capabilities, needs guardrails to control its trajectory. These guardrails can range from simple input validation to complex algorithms that monitor and adjust AI behavior.
Now, where do AI governance frameworks come into play? They're the blueprints, the overarching structures that establish these guardrails.
These frameworks provide a comprehensive set of guidelines, standards, and best practices for developing, deploying, and managing AI systems. They address crucial aspects like data privacy, algorithmic fairness, transparency, and accountability.
Essentially, governance frameworks are the architects, and guardrails are the construction crew, building a safe and reliable AI environment.
Without a robust governance framework, the guardrails might be haphazardly placed, leading to potential risks. Thus, they work in tandem, ensuring AI's power is harnessed responsibly.
AI's power demands careful control. Guardrails, both technical and legal, shape its behavior. Let's explore the diverse types of these safeguards and their critical legal implications.
Ethical guardrails are frameworks designed to ensure fairness, accountability, and transparency in decision-making processes.
These guardrails help organizations prevent bias and discrimination. They ensure that systems do not impose a disadvantage on individuals based on race, gender, age, or other protected characteristics.
Ethical guardrails involve measures such as diverse and representative training data, bias detection tools, and human oversight to ensure equitable outcomes.
Bias in AI systems or decision-making processes can lead to significant reputational and legal consequences. Without ethical safeguards, organizations risk implementing unfair hiring practices, biased financial decisions, or discriminatory law enforcement applications.
Ethical guardrails help organizations proactively address bias, ensuring compliance with anti-discrimination laws and fostering trust among consumers and regulatory bodies.
They are particularly critical in industries such as finance, healthcare, and employment, where biased outcomes can significantly impact individuals' lives.
Discrimination lawsuits arising from biased hiring, lending, or law enforcement decisions.
Regulatory fines and penalties for non-compliance with anti-discrimination laws.
Reputational damage due to public backlash over biased AI or policies.
Equal Credit Opportunity Act (ECOA, U.S.) – Prohibits discrimination in lending decisions.
Prohibits discrimination in lending decisions. Title VII of the Civil Rights Act (U.S.) – Prevents workplace discrimination.
Prevents workplace discrimination. General Data Protection Regulation (GDPR, EU) – Ensures fairness in automated decision-making.
Ensures fairness in automated decision-making. EU AI Act – Requires AI transparency and bias mitigation.
In 2018, it was revealed that Amazon had developed an AI-powered hiring tool that showed bias against women. The system, trained on historical hiring data, unintentionally favored male candidates.
It was observed to be penalizing resumes that contained terms associated with women, such as "women's chess club" or attendance at women's colleges.
Although Amazon shut down the system before regulatory action was taken, the case highlighted the risks of biased AI decision-making. In the US, the average settlement for employment discrimination cases can range from $40,000 to over $100,000.
Technical guardrails are the digital fortifications designed to protect AI systems and the sensitive data they handle. They are the practical, implementable measures that ensure security, data integrity, and system resilience.
These guardrails encompass secure coding practices, robust authentication protocols, data encryption, and continuous monitoring.
They are necessary because AI systems, especially those processing personal or confidential information, are prime targets for cyberattacks and data breaches.
Without these guardrails, vulnerabilities can be exploited, leading to unauthorized access, data leaks, and system disruptions, causing financial and reputational damage.
Considering that it takes on an average 194 days to discover a breach, proper technical guardrails can prevent a lot of damage.
Technical guardrails mitigate the risks of data breach liabilities and security-related lawsuits. They help prevent violations of data protection laws and cybersecurity regulations, reducing exposure to substantial fines and damages. They reduce the likelihood of sensitive data falling into the wrong hands.
These guardrails aid compliance with various laws, including:
GDPR (General Data Protection Regulation): Imposes strict obligations on organizations handling personal data within the EU, mandating data protection by design and default.
Imposes strict obligations on organizations handling personal data within the EU, mandating data protection by design and default. CCPA (California Consumer Privacy Act): Grants California residents specific rights regarding their personal data, including the right to know, delete, and opt-out of data collection.
Grants California residents specific rights regarding their personal data, including the right to know, delete, and opt-out of data collection. Cybersecurity Laws: Vary by jurisdiction but generally require organizations to implement reasonable security measures to protect against cyber threats.
Vary by jurisdiction but generally require organizations to implement reasonable security measures to protect against cyber threats. NIST Cybersecurity Framework: A voluntary framework in the U.S. that provides guidance on managing cybersecurity risks.
A voluntary framework in the U.S. that provides guidance on managing cybersecurity risks. PCI DSS (Payment Card Industry Data Security Standard): Requirements for organizations that handle credit and debit card information.
OpenAI’s ChatGPT Data Breach – Investigation by Italian Data Protection Authority (Garante)
In March 2023, OpenAI temporarily took ChatGPT offline after a bug in an open-source library allowed some users to see others’ chat history titles and payment-related information.
Italy’s Garante (the national data protection authority) launched an investigation, citing violations of the GDPR—particularly concerning lawful data processing, user consent, and protection of minors.
The outcome? Italy temporarily banned ChatGPT, demanding that OpenAI:
Improve transparency in data usage.
Enable users to correct/delete personal data.
Implement age-gating features.
Offer clearer opt-out mechanisms.
Regulatory compliance guardrails are frameworks designed to ensure that businesses adhere to industry regulations, legal requirements, and ethical standards.
These guardrails help organizations avoid financial penalties, reputational damage, and operational restrictions by establishing clear policies and procedures.
They involve continuous monitoring, risk assessments, and documentation to demonstrate compliance with laws governing data privacy, financial reporting, consumer protection, and more.
Compliance guardrails are necessary because regulatory landscapes are constantly evolving. Failing to comply can lead to severe consequences, including fines, sanctions, or loss of operating licenses.
These guardrails provide a structured approach to mitigating risks by enforcing internal policies, training employees, and ensuring business practices align with legal standards.
They also foster trust among customers, partners, and regulators by demonstrating a commitment to lawful and ethical operations.
Regulatory compliance guardrails help mitigate:
Fines and penalties for non-compliance with financial, data protection, and consumer rights laws.
Legal action and lawsuits due to violations of privacy, anti-corruption, or labor laws.
Reputational damage from publicized regulatory breaches and enforcement actions.
General Data Protection Regulation (GDPR) – Ensures data privacy and security.
Ensures data privacy and security. Sarbanes-Oxley Act (SOX) – Regulates corporate financial reporting and fraud prevention.
Regulates corporate financial reporting and fraud prevention. Foreign Corrupt Practices Act (FCPA) – Prohibits bribery and corruption in international business.
Prohibits bribery and corruption in international business. Health Insurance Portability and Accountability Act (HIPAA) – Protects healthcare data privacy.
In 2023, Amazon was fined €746 million by the EU for breaching GDPR regulations regarding improper handling of customer data. This case highlights the importance of robust compliance guardrails in preventing costly regulatory violations and maintaining consumer trust.
As AI-generated content becomes more prevalent, transparency and accountability guardrails are essential to mitigate legal and ethical risks.
These guardrails ensure that AI-generated outputs are clearly identified, fact-checked, and aligned with ethical standards. Ethical guardrails in AI content creation involve principles such as fairness, accuracy, and responsible data usage.
They prevent misinformation, biased content, and unauthorized data exploitation, safeguarding both consumers and organizations.
Transparency and accountability guardrails are essential in AI systems to mitigate legal and ethical risks. As AI-generated content becomes more influential, ensuring clarity around data sources, decision logic, and authorship helps prevent misinformation and misuse.
These safeguards establish trust, define responsibility, and reduce liability when content causes harm or bias. Without them, organizations risk reputational damage, legal action, and regulatory penalties.
Guardrails transform AI from a black box into a governed tool—accountable, auditable, and aligned with societal and legal expectations.
Transparency and accountability guardrails help mitigate risks such as:\
Defamation and misinformation liability (false AI-generated content causing harm).
Copyright infringement (unauthorized use of copyrighted materials in AI training).
Data privacy violations (AI-generated content revealing personal or sensitive data).
EU AI Act: Mandates transparency in AI-generated content and risk assessments.
Mandates transparency in AI-generated content and risk assessments. Digital Services Act (DSA): Requires platforms to prevent misinformation.
Requires platforms to prevent misinformation. Copyright Directive (EU): Protects copyrighted material from AI misuse.
In 2023, OpenAI faced scrutiny from European regulators over potential GDPR violations related to AI-generated responses that contained personal data.
The case underscored the necessity of guardrails ensuring AI transparency and responsible content generation.
Human oversight and intervention guardrails ensure that AI-driven decisions are monitored, reviewed, and corrected when necessary.
These guardrails establish a framework where AI systems operate under human supervision. Thus, allowing for intervention in cases where automated decisions could lead to harm.
They are particularly crucial in high-stakes industries such as healthcare, finance, and law enforcement. Here, unchecked AI outputs could result in legal violations, discrimination, or even physical harm.
AI models are powerful but not infallible. They can produce biased, unfair, or harmful decisions due to flawed training data or algorithmic errors.
Without human oversight, AI systems could deny loans unfairly, misdiagnose patients, or enforce discriminatory policies.
Ensuring human intervention helps prevent errors, maintain ethical standards, and improve AI decision-making accuracy.
Furthermore, accountability remains a key factor—organizations deploying AI must be able to justify and rectify its decisions when needed.
Human oversight guardrails help mitigate:
Bias and discrimination lawsuits (e.g., AI-driven hiring systems that discriminate based on gender or race).
Consumer harm and liability (e.g., AI-generated medical misdiagnosis leading to health risks).
Regulatory non-compliance penalties (e.g., failure to provide explainability for automated decisions).
EU AI Act: Requires human oversight for high-risk AI applications.
Requires human oversight for high-risk AI applications. General Data Protection Regulation (GDPR): Grants individuals the right to challenge automated decisions.
Grants individuals the right to challenge automated decisions. Equal Credit Opportunity Act (ECOA, U.S.): Prevents AI-driven bias in loan approvals.
In 2022, the U.S. Consumer Financial Protection Bureau (CFPB) fined a major bank for using an AI-driven lending model that disproportionately denied loans to minority applicants. The lack of human oversight led to biased outcomes, reinforcing the necessity of intervention guardrails.
The future of AI guardrails is inextricably linked to the rapid evolution of legal landscapes. As AI technologies advance, regulations are scrambling to keep pace.
Carefully aiming to strike a delicate balance between fostering innovation and mitigating potential harms.
Here's a breakdown of key trends:
Governments worldwide are moving towards more flexible and adaptable regulatory frameworks. The EU's AI Act, with its risk-based approach, is a prime example, setting a potential global standard.
We're seeing a shift towards regulations that can evolve alongside AI, rather than becoming quickly obsolete.
Navigating the patchwork of international AI laws presents a significant challenge. Diverging regulations across jurisdictions create compliance complexities for multinational organizations.
Predicting the precise trajectory of global AI laws is difficult, but a trend towards greater standardization and international cooperation is likely.
Future regulations will likely place a heavy emphasis on transparency and accountability. They may require organizations to demonstrate how their AI systems work and who is responsible for their outputs.
Expect increased scrutiny of algorithmic decision-making, with laws mandating explainability and auditability.
There is an increase in global cooperation to set safety standards. Recent AI summits, and voluntary safety standard releases, show that there is a global push to make sure AI is developed safely.
In essence, the future of AI guardrails will be shaped by a continuous dialogue between technology, law, and ethics, with the goal of ensuring that AI serves humanity responsibly.
In sum, AI guardrails—ethical, technical, regulatory, transparency-focused, and human-centric—are indispensable for navigating the complex legal terrain of artificial intelligence.
They mitigate risks ranging from discrimination lawsuits and data breaches to hefty fines and reputational damage stemming from misinformation and harmful AI decisions. As AI's influence expands, these guardrails become the bedrock of responsible innovation.
For organizations, proactive implementation is key. Begin by establishing a robust AI governance framework, embedding ethical principles into AI design. Prioritize data privacy and security, adhering to regulations like GDPR and CCPA.
Embrace explainable AI (XAI) to ensure transparency and auditability. Integrate human-in-the-loop models for critical decisions, maintaining human oversight. Stay informed about evolving AI laws, and foster a culture of accountability.
By taking these actionable steps, organizations can harness AI's power while minimizing legal liabilities and building public trust.
| 2023-03-15T00:00:00 |
https://connexcs.com/blog/ai-guardrails-types-and-the-legal-risks-they-mitigate/
|
[
{
"date": "2023/03/15",
"position": 94,
"query": "AI regulation employment"
}
] |
||
Advising UK's Office for AI on AI Regulatory Framework
|
Advising UK’s Office for AI on AI Regulatory Framework
|
https://nebuli.com
|
[] |
In March 2023, following the above policy paper review, the government launched its AI white paper to guide the use of artificial intelligence in the UK, to ...
|
The white paper outlined 5 principles that the government’s advised regulators should consider to best facilitate the safe and innovative use of AI in the industries they monitor.
Nebuli provided detailed reviews of these 5 principles, suggesting further principles for the government to consider based on the company’s research on human-centric augmented intelligence and experience in building responsible AI solutions in diverse markets.
Below is a summary of the recommendations provided to the government’s Office for AI:
Promoting Transparency
Transparency is the bedrock of responsible AI. We fully endorsed the white paper’s proposition of requiring organisations to be clear about their use of AI. Transparency cultivates trust among users and stakeholders, aligning with our human-centric approach that values open communication and informed decision-making.
Envisioning Explainable AI
We endorsed the white paper’s emphasis on the adoption of explainable AI models. These models enable users to understand the decision-making processes behind AI systems, instilling confidence in their outcomes. Our philosophy strongly advocates for the use of “Human-in-the-Loop” approaches, steering clear of opaque “black-boxed” models that can perpetuate bias and propagate harmful content. Encouraging organisations to adopt explainable AI models reinforces ethical considerations and ensures responsible AI deployment.
Empowering Redress and Accountability
The white paper’s insights regarding the need to improve current routes for contesting AI-related harms were welcomed. Our human-centric augmented intelligence approach resonates with the government’s commitment to providing effective mechanisms for reporting inaction by regulated entities. We suggested going a step further by establishing a clear process for users to report inaction, creating an additional layer of accountability. This strengthens the framework’s responsiveness and ensures prompt resolution of reported issues, fostering trust in the system.
Cross-Sectoral Principles
The white paper outlined the government’s revised cross-sectoral principles, which is an important step. By encompassing safety, security, fairness, accountability, and contestability, the framework demonstrated a comprehensive understanding of the diverse risks associated with AI technologies. We supported the application of context-specific approaches by regulators, as it aligns with our belief in tailoring AI practices to different industries and sectors. To enhance the framework, however, we recommend incorporating sector-related AI expertise by establishing sector-specific regulatory bodies. This will enable regulators to address unique challenges effectively and facilitate responsible AI development within each domain.
Skill Development for a Sustainable Future and Reducing Digital Inequality
The white paper highlighted the current problem concerning skill gaps in the AI sector. We advocated for significant investments in AI education and training programs, particularly in sectors where AI applications require domain-specific expertise. This collaborative approach between the government, educational institutions, and industry stakeholders ensures a skilled workforce capable of responsible AI development and deployment. By closing the skill gap, we can create a sustainable ecosystem that embraces ethical considerations and safeguards against potential risks.
Strengthening the Framework
The government aims to introduce a statutory duty for regulators to have due regard to the principles. Our team agreed and supported reinforcing accountability within the AI landscape. However, we recommended an incremental approach to introducing this statutory duty, allowing regulators to gradually adapt and strengthen their mandates. We also recommended establishing a certification mechanism or labelling system to recognise AI systems are meeting specific standards, incentivise responsible AI practices and foster greater transparency.
Educating and Empowering through Public Awareness Initiatives
The white paper’s focus on educating the public about AI aligns with our philosophy and, thus, we welcomed it. We advocate for government-led national campaigns that employ jargon-free language and user-friendly interfaces to empower individuals and foster a deeper understanding of AI’s benefits, risks, and responsible usage. Collaborations with educational institutions and schools can further drive public awareness and ensure inclusivity in AI adoption.
Harmonising Innovation and Regulation
The white paper’s call for effective coordination mechanisms and stakeholder collaboration is essential for avoiding overlapping and contradictory guidance. We emphasised the importance of nurturing a collaborative ecosystem where regulators work transparently and share information. Continual engagement with industry experts and organisations will enable regulators to adapt to the rapidly evolving AI market while staying attuned to emerging technologies.
| 2023-03-15T00:00:00 |
https://nebuli.com/work/uk-governments-office-for-ai-regulatory-framework/
|
[
{
"date": "2023/03/15",
"position": 41,
"query": "government AI workforce policy"
}
] |
|
Privacy Policy
|
Privacy Policy
|
https://www.skyhive.ai
|
[] |
... , sharing practices, and user rights regarding personal information, emphasizing transparency and security in their AI-driven workforce solutions.
|
SkyHive Technologies Inc.
Privacy Policy
Last updated: March, 2023
SkyHive by Cornerstone (“SkyHive,” “we,” or “us”) values the trust you place in us when using our products and services and providing us with your personal data. We are committed to protecting your privacy when using our services or visiting our website (www.skyhive.ai).
This Privacy Policy explains how your personal data may be collected and processed through our Services. It also describes your legal rights over the personal data that is processed by SkyHive. To help you understand our privacy practices, this Privacy Policy explains:
Who we are and what we do What personal data we collect about you How we obtain data about you How we use your personal data With whom we share your personal data International data transfers How long we keep your personal data How we protect your personal data What rights you may have in relation to your personal data How we use cookies and similar technologies How you can contact us How we may update this Privacy Notice
By providing your personal data to us, you agree to the processing of your personal data as set out in this Privacy Policy. Further notices highlighting certain uses we wish to make of your personal data together with the ability to opt in or out of selected uses may also be provided to you when we collect personal data from you.
In order to fully understand your rights, we encourage you to read this Privacy Policy as well as our Terms of Use (www.skyhive.ai/terms). We reserve the right to amend this policy and our Terms of Use at any time and without notice, simply by posting such changes on our website. Any such change will be effective immediately upon posting. Please check this Privacy Policy from time to time for any changes.
This Privacy Policy does not apply to, and SkyHive takes no responsibility for, any third-party websites which may be accessible through links from this website. If you follow a link to any of these third-party websites, they will have their own privacy policies and you should review those policies before you submit any personal data to such third-party websites.
| 2023-03-15T00:00:00 |
https://www.skyhive.ai/privacy
|
[
{
"date": "2023/03/15",
"position": 64,
"query": "government AI workforce policy"
}
] |
|
Guidance on AI and data protection | ICO
|
Guidance on AI and data protection
|
https://ico.org.uk
|
[] |
This update supports the UK government's vision of a pro-innovation approach to AI regulation and more specifically its intention to embed considerations of ...
|
This guidance was updated on 15 March 2023.
The Guidance on AI and Data Protection has been updated after requests from UK industry to clarify requirements for fairness in AI. It also delivers on a key ICO25 commitment, which is to help organisations adopt new technologies while protecting people and vulnerable groups.
This update supports the UK government’s vision of a pro-innovation approach to AI regulation and more specifically its intention to embed considerations of fairness into AI.
We continue to engage with the UK government, along with our partners within the Digital Regulation Cooperation Forum (DRCF), on its broader proposals on regulatory reform.
The ICO supports the government’s mission to ensure that the UK’s regulatory regime keeps pace with and responds to new challenges and opportunities presented by AI. We look forward to supporting the implementation of its forthcoming White Paper on AI Regulation.
We will continue to ensure ICO’s AI guidance is user friendly, reduces the burden of compliance for organisations and reflects upcoming changes in relation to AI regulation and data protection.
For ease of use and given the foundational nature of data protection principles we decided to restructure the guidance moving some of the existing content into new chapters. Acknowledging the fast pace of technological development the ICO believes more updates will be required in the future so using data protection’s principles as the core of this expanding work makes editorial and operational sense.
We outlined below where new content resides so past readers of the Guidance on AI and Data Protection can navigate the changes at speed.
What are the accountability and governance implications of AI?
Change overview: This is an old chapter with new additions
What you need to know:
How do we ensure transparency in AI?
Change overview: This is a new chapter with new content
What you need to know:
We have created a standalone chapter with new high-level content on the transparency principle as it applies to AI. The main guidance on transparency and explainability resides within our existing Explaining Decisions Made with AI product.
How do we ensure lawfulness in AI?
Change overview: This is a new chapter with old content - moved from the previous chapter titled ‘What do we need to do to ensure lawfulness, fairness, and transparency in AI systems?’ - and two added new sections.
What you need to know:
What do we need to know about accuracy and statistical accuracy?
Change overview: This is new chapter with old content.
What you need to know:
Following the restructuring under the data protection principles, the statistical accuracy content – that used to reside with the chapter ‘What do we need to do to ensure lawfulness, fairness, and transparency in AI systems?’ - has moved into a new chapter that will focus on the accuracy principle. Statistical accuracy continues to remain key for fairness but we felt it was more appropriate to host it under a chapter that focuses on the accuracy principle.
Fairness in AI
Change overview: This is a new chapter with new and old content.
What you need to know:
The old content was extracted from the former chapter titled ‘What do we need to do to ensure lawfulness, fairness, and transparency in AI systems?’. The new content includes information on:
Data protection’s approach to fairness, how it applies to AI and a non-exhaustive list of legal provisions to consider.
The difference between fairness, algorithmic fairness, bias and discrimination.
High level considerations when thinking about evaluating fairness and inherent trade-offs.
Processing personal data for bias mitigation.
Technical approaches to mitigate algorithmic bias.
How are solely automated decision-making and relevant safeguards linked to fairness, and key questions to ask when considering Article 22 of the UK GDPR.
Annex A: Fairness in the AI lifecycle
Change overview: This is a new chapter with new content
This section is about data protection fairness considerations across the AI lifecycle, from problem formulation to decommissioning. It sets outs why fundamental aspects of building AI such as underlying assumptions, abstractions used to model a problem, the selection of target variables or the tendency to over-rely on quantifiable proxies may have an impact on fairness. This chapter also explains the different sources of bias that can lead to unfairness and possible mitigation measures. Technical terms are also explained in the updated glossary.
Glossary
Change overview: This is an old chapter with old and new content.
What you need to know:
New additions include definitions of:
| 2023-03-15T00:00:00 |
https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/
|
[
{
"date": "2023/03/15",
"position": 65,
"query": "government AI workforce policy"
}
] |
|
Machine Learning Operations with Impact
|
Machine learning operations with impact
|
https://www.deloitte.com
|
[] |
Discover real-world challenges and results from companies successfully using MLOps & Edge AI to drive business value in these machine learning case studies.
|
As artificial intelligence (AI) gains importance, it’s creating amazing results across many industries. From retail sales forecasting to supply chain issue resolution to potential disease prediction to customer service automation, there are endless opportunities.
Every department in every company wants some aspect of AI to drive business value. The technology is fundamentally world-changing. Its invention can be equated to that of the lightbulb.
| 2023-03-15T00:00:00 |
https://www.deloitte.com/us/en/services/consulting/articles/edge-ai-and-machine-learning-operations-case-studies.html
|
[
{
"date": "2023/03/15",
"position": 13,
"query": "machine learning workforce"
}
] |
|
The Benefits of Workplace Learning and Development
|
The Benefits you can glean from pursuing workplace learning and development
|
https://www.nextthought.com
|
[] |
... intelligence and machine learning programs. Like a properly functioning market, the "profit" employees make will increase their usage in those key areas. In ...
|
Workplace learning and development is crucial for longevity and loyalty among your employees. If the past few years have taught us anything, it’s that there are more ways than ever for someone to earn a living, so getting employees to commit to a position for longer is a much harder achievement.
There are several ways you as an employer can utilize learning and development strategies to retain your valuable employees. We’re going to take a look at some vital ways learning and development can help modern businesses with their employees, their workplace environments, and how all that improves your organization overall.
Mental Health Awareness in the Workplace
Amid the events many have experienced the last few years, employees crave ways to focus more on wellness, well-being, and mindfulness, not only for themselves but for their teams and families. Without it, employees may feel as though they can't or shouldn't stay with their employers for the long haul.
One report mentioned the rise in anxiety and mental health issues since the pandemic set in: "Nearly half of U.S. workers suffer from mental health issues since COVID-19 pandemic hit." See also Deloitte's Millennial Leadership Survey and young workers' emphasis on stress and work-life balance. These issues won't go away any-time soon. McKinsey analysts argue that the wellness market has grown to $1.5 trillion.
But well-being and mindfulness can be learned and incorporated into your business goals. This is where L&D comes in. L&D teams can create courses with the latest research and tips on handling stress, mental health, and mindfulness. Through learning solutions like social learning discussions and chats, employees can connect with each other as they work through similar issues. These development programs are key when acclimating your business strategy to adapt to the mental health of your workers.
Pursuing Mindfulness
One of the chapters from Deloitte's Millennial Leadership Survey states that nearly half of all millennials and Gen Zs say they're stressed most or all of the time. These figures can be even higher in women and people of color. For Gen Zs, half of their stress derives from their current job and career paths, illustrating how learning mindfulness techniques at work can have a dramatic effect on younger workers' mental health. By implementing mental health awareness into your workplace learning program, you’re also allowing for key business improvements like productivity and creating positive work environments.
Companies are taking notice. An entire section of McKinsey's insights are devoted to well-being in the workplace and analysis by The Starr Conspiracy shows a 502% increase in investment for employee well-being from 2019 to 2020.
The learning and development function, in collaboration with other roles serving the employee experience, is uniquely capable of addressing this need. By developing mindfulness practices and courses, learning and development strategies can be deployed to check in on employees, their mental health, and how they’re feeling about their individual workplace.
The last few years have seen a rise in employee tools to learn well-being and mindfulness, from journals to apps and beyond. Gallup's latest book is even called Wellbeing at Work. With such a dramatic increase in mental health and well-being initiatives like these, L&D can support them with mindfulness courses. By integrating these into learning opportunities and even mental safety training, we can utilize experience-based learning to create better workplace environments for everyone.
Consider springboarding off your organization's HR material regarding well-being as though it were the foundation of a course on the matter. Then supplement that material with podcasts, videos, and guided meditation tracks to further cement this skill in their day-to-day lives. When you combine these growth and developing tools with already-existing infrastructures (like HR) you’re utilizing systems that are already in place to improve employee engagement and their environment.
Workers Want to Know Their Work Contributes
Another aspect of well-being and resilience that impacts worker productivity is an emphasis on meaning. The more meaningful employees find their work, the more resilient they can become when facing setbacks. Many employees simply want to know that their job matters and their work is noticed. What are they working toward in their role? What is the company working toward in its mission? And how are the two connected?
A recent McKinsey article outlines how organizations can go from the "Great Attrition" to the "Great Attraction" by embracing some of these larger perspectives. For example, they give advice about building great cultures, avoiding transactional environments, and developing opportunities for employees to grow their careers. Many of these recommendations flow through how the day-to-day minutiae of employees' work days can fit into the bigger picture. By integrating these into professional development, employees are more likely to feel relevant and involved through their daily activities if they’re working towards something.
Using your LMS or LXP can be a great avenue for connecting the dots between an individual's role and the organizational mission.
Connect the Dots Between Roles and Goals
Most organizations educate their employees on the strategic goals and vision of the company, usually during onboarding. But it's often quickly forgotten, or it's so vague that the employee doesn't understand how their work fits into the bigger picture.
By building a course that emphasizes the strategic goals of the company but that is tailored to many job types, you can help your employees develop a career path and a sense of purpose for the long term. By ensuring that talent development doesn’t cease at the onset, employees are more likely to keep learning and advancing their skills instead of growing stagnant and complacent.
Caution: Don't be cheesy. Sometimes the connection between a low-level employee and how the company "makes the world a better place” can be tenuous at best. You can build trust with your people by showing them the practical connections of how their role serves the company’s larger purpose without making it seem like they’re saving the world, one application at a time.
Use L&D to Identify Struggles and Progress
Awareness is a crucial way to identify struggling employees and chart their progress as they adapt, improve, or even continue to struggle. An employee’s learning progress and talent improvements can be measured through learning and development strategies, like any other metric an employer uses.
However, even for the most seasoned L&D leaders, finding metrics to measure how much someone has "learned" is decidedly difficult. It requires maintenance, feedback, and communication that you have to be willing to devote to the employee and task at hand.
Traditionally, measuring progress boils down to metrics related to completion events and test scores. That is, you can easily track whether an employee has engaged with a particular section or course, and whether they retained knowledge through a quiz or test. Yet these metrics are crude, unable to provide the granularity L&D leaders need to accurately assess their people.
Adapt Content Based on Engagement
Not just performance, engagement should be a key metric you utilize to chart an employee’s progress and level of interaction with content. Use deduction from analytics to ask yourself some difficult questions like:
Why does this particular course have a higher adoption rate than others?
Why does this course have a 95% completion rate while this other one only has 35%?
Just like any good marketer, and following the relevance principle from earlier, it's crucial for L&D leaders to personalize content as much as possible based on engagement. While you may not be able to get a recommendation engine like you find in Netflix, you should be able to get a sense of how content is performing. The next step is to deduce what the performance means for your learners.
Further, innovative L&D teams find ways to reward learners based on the skills they're learning and how well they're learning them. This is doubly important depending on the strategic objectives of the organization. Like we mentioned earlier, if the company seeks to become more data-centric, perhaps more rewards should be offered for completing courses in data analysis, data visualization, or even business intelligence and machine learning programs. Like a properly functioning market, the "profit" employees make will increase their usage in those key areas.
In contrast, great L&D leaders find ways to support struggling learners. Suppose a course has a low completion rate, but through surveys and conversations on the discussion board, you see that this course is popular among employees. Perhaps the data tells you that the learning curve is too steep. So, the course may need extra introductory material at the beginning, extra coaching resources throughout, or even more rewards for certain completion events.
The point is that whether employees are doing well or doing poorly with your content, there is plenty that a platform with quality data can tell you about their progress and how to adapt. Learning and development strategies allow for monitoring, customization, and analytical tools to be sure your employees are progressing the way you want them to.
Personalized Content From Usage Scores
As mentioned, there are several key ways to personalize the content based on your data. You can find individual performance measures that allow you to focus on the engagement of any given learner to see when you can step in and help. There are also general trends to view the trajectory of your course engagement and progress from a comprehensive view. Finally, you can learn what content is the most and least engaging so your organization continually improves its learning outcomes.
Long-Term Career Development
When an employee feels valued and is allowed to cultivate their skills through a position, they are much more likely to stay. When you start treating a job as a place of stagnation, that’s when long-term career growth gets put on the back burner and complacency and despondency set in.
Younger workers like millennials want to grow their careers by quickly developing new skills and taking on more responsibility. By gaining supplemental education and certifications, sometimes even on their own dime, they are pursuing advanced skill development over a job. According to reports from The Wall Street Journal and The New York Times, they would rather bet on a career path than waste time at a dead-end job. This young segment of the overall workforce — the largest segment, by the way — is the most educated in modern history, they’re adept at digital skills, and they’re hungry to stand out.
This desire places the onus squarely on L&D’s shoulders. You’ve seen the headlines about the “Reskilling Revolution,” how reskilling and upskilling dominate CEOs’ priority lists, quickly moving L&D leaders to the front of the executive table. Now it’s time to rise up to the challenge to retain young, ambitious talent.
This doesn’t have to be a death knell for employers and businesses - simply by offering to help a little they can attract the attention of these young, ambitious workers. Not just through tuition reimbursement, student loan alleviation, or certifications - you can help by reminding them that you have access to skills and they can get paid to earn while they learn them.
L&D leaders can retain the best and the brightest by learning the demands of the workforce and how a learning and development workplace aligns with those demands.
Flexibility
Things have changed significantly in recent years, and one of the greatest changes has been employees’ expanded options to work wherever and however they want. The range is extreme: Young workers have moved away from bustling cities to Caribbean islands — these are the “digital nomads” giving tax professionals everywhere splitting headaches — and others want to move to quieter exurbs, while others simply want to be in the office less.
Deloitte noted in its latest Millennial Leadership Survey that 25% of millennials said they wanted to work in the office “a little to a lot less often.” In fact, millennials and Gen Zs rated flexibility as the most critical employee characteristic for successful businesses. This shift toward flexible arrangements to support so-called hybrid work has led to prominent organizations like Google to redesign their office buildings or just buy new ones.
Learning and development can highlight the ways flexibility can work for employees no matter what workplace they choose. Acquiring new skills is a huge benefit to employment, no matter if they learn from a cubicle or their home office.
Just because we’ve changed the ways and settings of workplace learning doesn’t mean our employees have stopped entirely. Learning platforms and online capabilities for learning need to be accessible, adaptive, and scalable for as many employees as you need. Employees expect learning platforms to be as nimble as their consumer products. People listen to podcasts while they do the dishes or weave through traffic. They watch videos on their phones while putting on makeup. They read articles and newsletters over a cup of morning coffee. Our tolerance for clunky platforms, slow load times, and repetitious training modules has sunk through the floor the more adept we become at handling the latest thing.
Modern LMSs and LXPs must adapt to these preferences and modalities. The most innovative platforms offer quick course creation tools with modules ready to embed multimedia content like articles, images, audio, and video.
Stay on the Cutting Edge with New Skills
Job skills and roles change more quickly than they ever have. Young workers deeply understand this. They’re often more well read on fast-moving topics than their bosses. Consider how much more the typical 20-something knows about advances in technology like blockchain or cryptocurrencies than their 50-year-old boss. Of course, experienced bosses often know much more about the business feasibility of these topics, and these technologies may be years away from becoming overly useful.
Younger workers have mastered the language, culture, and tools of burgeoning industries, and they often see further down the road to where advances can overlap with their skills and the needs of the business. For example, Andreesen Horowitz’s new media arm called Future posed a probing question to many young leaders across a wide array of industries: “How is expertise being redefined in the modern era?” Several emphasized the growing need to decentralize information so “experts” can arise from lower in the hierarchy. In other words, just because someone doesn’t have a particular title or experience doesn’t mean they don’t know a thing or two from which the rest of the organization could learn.
That’s why it’s important for learning platforms to embrace a tech-agnostic approach with their product development. By utilizing tech-agnostic platforms, your learning and development strategies can keep up with ever-changing technologies.
As fast as things change, the best many L&D leaders can do is to free up their learning platforms to embrace plugins, integrations, and embeddable content for any type of technology that your employees use.
For example, suppose your organization grows in its data-driven decision making, and some employees feel inspired to learn Python, one of the most popular machine learning languages. Can your current platform embed a recent YouTube video on the subject, underneath which is an embedded Jupyter notebook for trying out the code in real time? This way of thinking about how to build courses with mixed modalities will usher in the next generation of workers’ skills.
Reduce Departmental Silos for Collaboration
Following up on the previous point about decentralization, another important trend within innovative workplaces is a more open, cross-functional hierarchy. This matrixed approach allows workers to collaborate on projects as they arise, and then disband as the project wraps up.
Consider this kind of collaboration an internal talent workplace instead of conventional departments. This shift in your way of thinking can drastically approach what writers in the MIT Sloan Management Review have called work without jobs. Whatever you call it, it’s increasingly common: As the Harvard Business Review noted, “Collaborative work — time spent on email, IM, phone, and video calls — has risen 50% or more over the past decade to consume 85% or more of most people’s work weeks.”
The challenge is to optimize this collaborative atmosphere for everyone involved. After all, the more you emphasize collaboration, the more you risk wasting people’s time on tasks they don’t actively impact or participate in. More productive work is ideal while also accomplishing team-based learning. The HBR article provides ideas for resolving the former, but the latter remains an obstacle for many learning leaders. To begin, L&D leaders should prioritize learning built for teams instead of trying to adapt learning platforms for individuals to a team-based setting.
The modern learning platform helps to foster community around its learning modules. Learning should be more like discussion than lectures. That’s why it’s more common to see social learning experiences like contextual conversations, private and public communities, and real-time chat further enhance the total learning experience.
Consider how to redesign the learning path to include more social learning to help discover new content as well as help to make the learning stick with peer discussions.
Use L&D to Balance Learning with Career Cultivation
We often talk about a career as if it is something you attain, not something you’re currently working on. Every job, position, and even task is a milestone in your career, not a way to build up to an eventual career.
Young professionals have a very black and white view on careers - what contributes to it and what doesn’t. If they decide that their position within your company, business, or under your tutelage is not contributing to their career growth, they’ll bolt.
Human beings are like plants — they want to grow. They crave it. They want to get over the learning curve, master a subject, and then move on and conquer something new. If you don’t offer employees this trajectory, they’ll go somewhere that will.
Give Employees Room to Fail, Learn, and Grow
You can see this dynamic play out in many areas. In hedge fund manager Ray Dalio’s popular book and media initiative Principles, he often talks about this growth trajectory. The logo for the brand represents it: An arrow spiraling its way up and to the right. First, an employee moves toward mastering their subject. They encounter difficulty and often fail. From this failure, they recover, learn, and grow towards the next level. This cycle happens again and again, and it’s part of the natural evolution of humankind.
Similarly, a popular, peer-reviewed area of study in psychology, Self-determination Theory, posits that human beings need three things to be successful: Autonomy, relatedness, and mastery.
Autonomy: We need the latitude to direct ourselves in the direction we see fit. Without it, we become resentful and stagnant. Relatedness: We need others to help us along in our journey, to encourage us, provide feedback, and support our development. This is why social learning is so important for L&D initiatives. Mastery: We need the competence that comes with training and skills to give us a sense of accomplishment. Without it, we feel useless, meaningless, and complacent.
Provide Feedback and Social Learning
Too often, training initiatives are one-and-done: You host an event, webinar, or workshop, and then employees are sent off into the unknown to wander alone in the world to figure it out for themselves.
This disassociation can come across as a lack of accountability and responsibility in that employee’s growth. Clarification, check ins, and feedback are instrumental in providing effective teachings.
New technologies and new perspectives offer L&D teams the opportunity to check in with employees. As they’re applying their training in the real world, they’re sure to come across questions they couldn’t have imagined and had no way of articulating. This helps reinforce the content and make it stick. Peers and managers can also help. By discussing what they learned on an ongoing basis, training and development becomes much more effective over time.
Relevance is Key
One of the biggest criticisms against higher education is that it lacks relevance to the real world. You’ve likely had professors who spent their whole lives in academia rather than a participant - and it shows. Or you may have had professors who were leaders in their field and developed important principles — but they applied that to a subject 20 or 30 years ago, not today.
Similarly, many training courses associated with tech skills, media, and everyday tools don’t apply anymore, even though the training occurred only two years ago. Things change fast. Employees also receive webinars or video training recorded three, five, or 10 years ago that show their age in the first five seconds, causing employees to tune out immediately.
To stay relevant, it’s crucial that learning content be created and distributed quickly. Quick course creation tools allow learning specialists to grab the latest article, YouTube video, or podcast, and embed them directly into the learning module at speed. When you show an employee outdated content, they will likely roll their eyes and wonder why you’re wasting their time. After all, what could a decades old video teach them about their job now?
With quick course creation tools and embeddable content from innovative learning platforms, the world is your oyster. Almost anything you read, watch, or hear online can be immediately used for the next course.
This also gives a new perspective on the world around you. For example, pay attention to what your grown kids or friends may be talking about and learning, especially if it’s a developing subject matter or technology. Chances are, the very media from which they’re being taught can be added to the employee curriculum.
L&D Takeaways For Your Employee’s Benefit - and Yours
Today’s workforce is more ambitious, driven, and articulate than they’ve ever been. They’ve been trained from a young age to ask for what they want and not stop until they get it. These are amazing skills in workers, but it can be difficult to adjust the paradigm from workers pitching themselves to you to you pitching yourselves to workers.
The truth of the matter is that there doesn’t need to be a total shift in your perspective for this paradigm to work for you. What’s ultimately best for your employees will benefit your organization just as much as it does them.
Learning and development strategies are helpful, adaptive ways you can emphasize your employee’s mental health, talent progression, and career paths to demonstrate to them how their career goals contribute to their work at your organization.
Consider the advantages of these L&D scenarios:
An employee that actively participates in a workplace that prioritizes their mental health will feel appreciated. They’re less likely to be stressed and this will, ideally, increase their productivity knowing that their work is being utilized by their organization.
An employee whose progress is tracked not just with metrics, but with feedback and engagement is sure to improve while their skills are deployed, developed, and improved.
An employee with access to flexibility, training, and adaptive platforms for continual learning can succeed while also achieving their professional goals for growth.
An employee’s growth doesn’t have to be on hold when they have a job. The right employer can help advance all of their workers with the right tools, setting them up for success within their organization.
Learning and development is a shift but it doesn’t have to be a total upheaval. You’re not sacrificing your business’s priorities for your employee’s priorities - you’re aligning them.
| 2023-03-15T00:00:00 |
https://www.nextthought.com/thoughts/benefits-of-workplace-learning-and-development
|
[
{
"date": "2023/03/15",
"position": 71,
"query": "machine learning workforce"
}
] |
|
Nucleus 2023 WFM Technology Value Matrix Report Lists ...
|
Nucleus 2023 WFM Technology Value Matrix Report Lists WorkForce Software as a Leader for the 9th Time
|
https://workforcesoftware.com
|
[] |
... Machine Learning, specifically in areas such as labor optimization and complex scheduling. ... WorkForce Hub and Digital Assistant, directly into the WorkForce ...
|
Nucleus Research recently released its 2023 Workforce Management Technology Value Matrix. For the ninth year in a row, WorkForce Software has been recognized as the leader for the value, functionality, and usability of our solutions.
“WorkForce Software has been proactive in adapting to the evolving needs of employees by making substantial investments in automation and analytics, particularly in areas such as scheduling and communication. With capabilities that include demand forecasting, schedule optimization, and support for intricate compliance regulations, WorkForce Software consistently delivers a strong solution that meets the unique requirements of its customers.” Evelyn McMullen, Research Manager, Nucleus Research
The report examines the requirements and competencies that organizations require while searching for and evaluating WFM (Workforce Management) vendors. The Nucleus 2023 evaluation continued to focus on employee engagement while highlighting the critical means to optimize labor spend, reduce employee turnover, and avoid non-compliance fees.
The following is a summary of Nucleus Research’s findings, but readers should read the report for the full details.
What Workforce Management Capabilities Are Organizations Looking For?
Ongoing challenges in attracting and retaining frontline and hourly employees are now paired with economic uncertainty, pressing organizations to increasingly do more with less. Organizations are looking for modern workforce management solutions to optimize resource planning, labor spending, and allocation while mitigating unplanned overtime, compliance risk, and employee burnout.
Configuration of schedules based on unique performance management factors is critical to running lean operations without understaffing or violating compliance regulations.
According to the customers interviewed by Nucleus, the capability to customize schedules based on various distinctive factors is essential for efficient operations, particularly when workforce conditions are unstable, to avoid understaffing or violating regulatory compliance.
Over the past year, vendor investment has continued to focus heavily on AI (Artificial Intelligence) and Machine Learning, specifically in areas such as labor optimization and complex scheduling. Reporting and analytics capabilities leverage data across systems to give employers and managers a more complete view of factors contributing to employee satisfaction, fatigue, and flight risk.
What Capabilities WorkForce Software Offers to Confront These Challenges
WorkForce Software is a purpose-built, SaaS-delivered platform that provides users with a broad range of workforce management solutions such as time and attendance, advanced scheduling, demand forecasting, fatigue management, leave and absence management, and analytics functionality. The vendor offers a large network of API integrations to seamlessly integrate existing HR technology and payroll solutions to eliminate data siloes. Additionally, the platform provides users with their own configurable WorkForce Hub to manage priorities and gain visibility into a range of actions.
The software runs on a configurable and automated rules engine to approve and deny requests and validate input actions in processes such as scheduling, forecasting, and PTO. WorkForce Software expands its rules engine to incorporate the latest local, state, and federal regulations to ensure compliance with current labor laws. WorkForce Task Management provides users with task management functionality integrated as a result of the vendor’s acquisition of Foko Retail in 2021.
Organizations can use WorkForce Task Management to improve store productivity and ensure process standardization across locations. The schedule optimization functionality assists organizations in matching schedules with expected customer demand to improve operational efficiency and reduce instances of schedule padding.
Recent updates and announcements include:
In September 2022, WorkForce Software integrated the core communication capabilities of WorkForce Experience, including the WorkForce Hub and Digital Assistant, directly into the WorkForce Suite User Interface. This allows customers to access the chat, channels, and broadcast capabilities of WorkForce Experience in the same UI as traditional time, scheduling, absence, and analytics capabilities.
These communication capabilities have been enhanced to extend the vendor’s vision for smart communications, which provides a layer of automation and automatic generation of suitable context to communications to reduce manager workload and improve the quality of communications and expected outcomes.
Throughout 2022, the vendor expanded its standard product global template library to include new best practices, rules, and regulations in France, Singapore, Vietnam, South Korea, Spain, and Italy. In addition, the vendor is actively working with industry-leading SI’s to develop country and industry-specific fast implementation packages to support growing demand for the mid-market.
In March 2023, WorkForce Software developed new HCM and Payroll Connectors for Workday, which utilizes WorkForce Integration Platform APIs for Suite access. Also in March, the vendor released WorkForce Suite Mobile Timesheets as part of its mobile-first strategy to improve automation, reduce manager workload, and improve the overall employee experience.
Throughout 2022 and early 2023, the vendor continued to expand its broad range of scheduling solutions to improve productivity and automation. Enhancements include automatic publishing of generated shifts, auto-assignment support for multi-week contracts, enhanced editing in the scheduling editor, a new swap schedule screen, new assistant HUB cards for shift swap requests, and the ability to run auto-scheduler for specific employees.
Stay ahead in the rapidly evolving workforce management software market with WorkForce Software’s innovative, purpose-built solutions for scheduling, compliance, analytics, employee self-service, and labor forecasting.
Read the Full Nucleus Research Report Here.
| 2023-05-18T00:00:00 |
2023/05/18
|
https://workforcesoftware.com/blog/nucleus-2023-wfm-technology-value-matrix-report-lists-workforce-software-as-a-leader-for-the-9th-time/
|
[
{
"date": "2023/03/15",
"position": 72,
"query": "machine learning workforce"
}
] |
The Top Workforce Optimization Platform Vendors for 2023
|
The Top Workforce Optimization Platform Vendors for 2023
|
https://www.cxtoday.com
|
[
"Rebekah Carter"
] |
The tool leverages UJET's cloud-native contact centre platform and Google's artificial intelligence and machine learning capabilities to provide insights, ...
|
The world of work has changed drastically in recent years. Since the pandemic, concepts like remote and hybrid work have become commonplace, and companies have begun investing in new strategies to build more engaged, satisfied, and productive teams.
Not only are employees demanding more from their employers, from greater levels of empathy to improved workplace technology, but business leaders are also beginning to understand the importance of properly empowering and utilizing their human resources. This is particularly evident in the contact centre landscape, where managers and supervisors need to ensure they’re effectively managing their teams, to preserve high levels of customer satisfaction and retention.
Workforce optimization tools, either offered as standalone products or implemented into CCaaS and CRM software, help companies to manage everything from scheduling and staffing to employee engagement, in one unified environment. So which tools are delivering the most value in this growing market? Here are some top vendors offering workforce optimization tools in 2023.
Avaya
Observe.ai
NICE
Genesys
Talkdesk
Playvox
8×8
Five9
Calabrio
Verint
OpenText
Vonage
Khoros
EvaluAgent
UJET
Eleveo
Alvaria
Amazon Web Services (AWS)
Puzzel
Avaya empowers companies with a wide variety of software solutions, hardware, and platforms, covering everything from contact centre capabilities to unified communications. The Avaya Workforce Optimization platform integrates with existing contact centre technologies and provides business leaders with the tools they need to take full advantage of their human resources.
The all-in-one tool allows users to record interactions and monitor service quality, ensuring high levels of customer and employee satisfaction, while boosting compliance levels. The solution also comes with tools which support supervisors in monitoring employee performance during interactions, as well as forecasting tools for staffing and scheduling.
→ Explore Avaya
Specializing in conversational intelligence and customer experience solutions, Observe.ai provides businesses with an all-in-one toolkit for boosting staff performance. The comprehensive platform offered by the company combines conversational intelligence with omnichannel routing and conversation management, reporting, analytics, and more. There are even real-time AI solutions available to support both supervisors and agents during conversations.
Observe.ai’s technology gives companies new ways to unlock value in their teams with access to artificial intelligence. Not only does the platform help with scheduling and forecasting requirements, but it also comes with a host of useful tools for automatic quality assurance and agent coaching. Plus, intelligent analytics make it easier for businesses to preserve compliance and high customer satisfaction levels.
→ Explore Observe.ai
CCaaS and customer experience solutions vendor NICE offers a host of valuable products to today’s contact centres, including conversational AI and chatbots, interaction analytics, AI routing tools, and robotic process automation capabilities. The company’s complete suite of workforce engagement tools provides access to useful resources for quality and performance management, real-time interaction guidance, and call/screen recording services.
The nice Workforce Management platform helps companies to hit SLAs with greater accuracy and engage their agents across multiple locations and channels. There are automation tools available for minimizing maintenance tasks, as well as built-in solutions for long-term planning, schedule management, forecasting, and more.
→ Explore NICE
Offering solutions for a range of contact centre needs, Genesys delivers a portfolio of valuable tools to businesses, including contact centre software, digital engagement suites, AI bots and automation platforms. The Genesys Workforce Optimization platform can integrate with a company’s contact centre software, providing a range of ways to engage and manage employees.
The solution comes with forecasting and scheduling tools to help business leaders ensure they’re making the most of their human resources. Within the ecosystem, users can also find speech analytics and text analytics solutions, voice and screen recording capabilities, automated training management services, and more. The end-to-end platform even includes intelligent reporting and analytics tools.
→ Explore Genesys
Focused on the customer experience landscape, Talkdesk produces contact centre platforms, AI-powered tools, and a host of other solutions specially designed to improve business interactions. The intelligent Talkdesk workforce management platform prioritizes employee experience and engagement with powerful AI insights and automation capabilities.
Businesses can use the Talkdesk ecosystem to simplify and automate the process of forecasting customer demand and developing staffing and scheduling strategies. It’s even possible to schedule employee workflows based on their skills and knowledge. Plus, the platform comes with a convenient chatbot to help manage open-ended change requests and shift changes. There’s even an adherence monitoring system to offer behind-the-scenes insights into team performance.
→ Explore Talkdesk
Recognized by market analysts like Trust Radius for their contact centre workforce management technologies, Playvox helps companies to engage and empower their workforce. With built-in AI capabilities, the Playvox platform offers behind-the-scenes insights into opportunities for schedule and staffing optimization and provides guidance on how companies can reduce operational costs.
The Playvox WFM solution offers real-time visibility into business operations to help companies keep service levels and budgets on track. There’s an interactive dashboard companies can use to monitor crucial KPIs, as well as a convenient platform where team members can manage shift swaps and time-off requests.
→ Explore Playvox
Offering modular and customizable solutions to modern contact centres, 8×8 helps businesses to leverage the bespoke technologies they need to run more efficiently. Built into the contact centre portfolio of products, 8×8’s Workforce Management technology simplifies the process of forecasting expected interactions, service volume, and work items.
Companies can leverage the 8×8 technology to monitor and track schedule adherence with dashboards and automated alerts. Users can also automatically create schedules based on pre-defined criteria to ensure agents can deliver exceptional customer experiences. The workforce management system can also integrate with a range of other tools offered by 8×8, such as CX analytics platforms, speech analytics, and quality management services.
→ Explore 8×8
Five9 is a software as a service vendor focused on the customer experience landscape. The company offers a wide range of tools and solutions to business leaders, including CCaaS platforms, agent and supervisor desktops, and intelligent virtual assistants. Part of a comprehensive collection of employee engagement tools, the Workforce optimization platform offered by Five9 assists with scheduling staff, monitoring adherence, and improving workplace efficiency.
The all-in-one solution helps companies to manage their staff with accurate multi-channel, multi-skill forecasts, schedules, and intraday management tools. The platform also includes access to interaction analytics systems, performance management tools with KPI monitoring, and CRM integrations. Interaction recording and quality management tools are also available.
→ Explore Five9
With solutions covering everything from remote team management to customer experience and risk and compliance monitoring, Calabrio serves a wide range of contact centre use cases. The Calabrio Workforce Optimization (WFO) platform includes all of the tools businesses need to forecast employee and customer needs and schedule their human resources effectively.
Within the platform, companies can find tools for monitoring quality management alongside solutions for business intelligence and analytics. The WFO solution also comes with gamification capabilities to help engage and unify workers, as well as compliant call recording tools and contact centre reporting capabilities.
→ Explore Calabrio
Another contact centre software solution provider with their own dedicated workforce optimization and management platform, Verint has earned the recognition of leading analysts and groups such as G2 and Trust Radius. The WFO solutions offered by Verint are designed to meet the needs of evolving businesses with intelligent forecasting and business performance insights.
Companies can leverage scorecards for a behind-the-scenes view of employee efficiency and effectiveness. Plus, the cloud-based platform allows organizations to create and adjust schedules in seconds, using an all-in-one system for managing teamwork across locations and channels. The platform even comes with pre-built integrations for multiple third-party sources to provide greater visibility and access to workflow automation opportunities.
→ Explore Verint
Focusing on helping companies to empower their employees and unlock efficiency upgrades, OpenText offers a range of customer experience and employee experience management tools. The OpenText Contact Center workforce optimization platform includes a range of useful features, such as interaction recording for quality and compliance, agent scheduling and forecasting tools, and automatic interaction scoring and analysis.
Businesses using the OpenText Qfiniti platform can take advantage of tools for desktop analytics and real-time agent guidance. There are solutions to assist with contextual coaching and online training, and even automated tools for collecting customer insights through satisfaction surveys.
→ Explore OpenText
Vonage partners with workforce management and optimization solution providers to help organizations optimize their contact centre resources. Users can fuse contact center infrastructure solutions with Salesforce and other digital channels to track and monitor customer support metrics. The platform also includes real-time agent adherence monitoring and statistics to help supervisors and managers track individual and group performance levels.
Within the Vonage platform, companies can unlock visibility and real-time guidance for enhancing customer service interactions and workforce performance. Reports and analytics are available across the omnichannel customer service landscape for end-to-end journey insights. Plus, companies can also leverage Injixo workforce management as an add-on to help predict workloads, manage schedule adherence, and minimise staffing complexities.
→ Explore Vonage
Customer engagement platform provider Khoros offers businesses a variety of tools they can use to improve customer satisfaction levels and enhance brand reputations. The comprehensive Khoros cloud platform includes access to everything from agent efficiency monitoring tools to automation capabilities to help businesses maximise their resources.
Companies can track team performance, schedule staff, according to changing needs and demand levels, and leverage a range of additional tools for boosting workplace productivity. Khoros also allows companies to develop their own community engagement platforms, where employees and customers alike can leverage tools for customized support and training.
→ Explore Khoros
QA and performance improvement company, EvaluAgent assists businesses in evaluating conversations and business performance on a massive scale. The complete EvaluAgent platform provides a useful tool for workforce optimization, allowing companies to track quality scores, feedback, CSAT scores and more in a simple, intuitive dashboard.
With 100% coverage and insights into every stage of the customer journey, EvaluAgent allows business leaders to take full advantage of the intelligence they create, to build smarter, more efficient teams. The platform offers tools for automated workflows, automatic quality assurance, and a fully integrated reporting suite. Plus, businesses can utilize built-in learning and coaching solutions to help boost the performance of their teams.
→ Explore EvaluAgent
UJET and Google have partnered to create a workforce optimization tool that helps businesses improve their customer service and employee productivity.
The tool leverages UJET’s cloud-native contact centre platform and Google’s artificial intelligence and machine learning capabilities to provide insights, analytics, and automation for various aspects of workforce management.
Features include forecasting and scheduling, quality management and workforce engagement.
→ Explore UJET
Eleveo delivers a WFO platform in the cloud, on-premise, and in hybrid environments.
In doing so, it goes beyond offering workforce management, quality assurance, and call recording solutions. The suite also includes speech analytics, screen capture, and video recording tools.
Yet, perhaps where Eleveo excels is in integrating its QA and coaching workflows, enabling a connected contact center learning strategy that bolsters agent performance.
Also, it has developed a reputation for offering very competitive price options, which is a crucial differentiator for the vendor.
→ Explore Eleveo
In 2021, Aspect Software and Noble Systems merged before releasing the Alvaria WEM suite.
The suite has particularly advanced WFM features, including cross-location shift management, dynamic scheduling, and simulations to test the various forecasting models native to the platform.
Thanks to these assets, larger and mid-market enterprises typically deploy Alvaria WEM, which is more suited to more complex user requirements.
A second critical differentiator is a tool named: “Alavaria Motivate”, a solution designed specifically to increase engagement amongst contact center sales teams.
→ Explore Alvaria
Amazon Web Services (AWS) offers a comprehensive suite of cloud-based services and tools designed to drive workforce optimization for businesses in the customer experience domain. With solutions like Amazon Connect, Amazon Transcribe, and Amazon Comprehend, AWS empowers organizations to enhance agent productivity, gain valuable insights, and deliver exceptional customer experiences.
By leveraging AWS’s scalability, flexibility, and robust security measures, businesses can achieve their workforce optimization goals while maintaining data security and compliance. With its extensive partner ecosystem, AWS ensures a seamless integration experience for organizations seeking to transform their workforce optimization strategies.
→ Explore Amazon Web Services (AWS)
Puzzel is a prominent provider of solutions for optimizing workforce performance, enabling businesses to revolutionize their customer interactions.
With a comprehensive range of offerings including Puzzel Workforce Management, tools for monitoring and managing performance, real-time analytics, omnichannel customer engagement, scalability, and a customer-centric approach, Puzzel empowers organizations to maximize agent productivity, elevate customer satisfaction, and enhance operational efficiency.
By leveraging Puzzel’s extensive expertise, businesses can elevate their strategies for optimizing workforce performance and stay at the forefront in the ever-changing realm of customer experience.
→ Explore Puzzel
| 2023-03-15T00:00:00 |
2023/03/15
|
https://www.cxtoday.com/workforce-engagement-management/the-top-workforce-optimization-platform-vendors-for-2023/
|
[
{
"date": "2023/03/15",
"position": 77,
"query": "machine learning workforce"
}
] |
Business Optimization & Workforce Development Services ...
|
Business Optimization & Workforce Development Services Fairfield, CA
|
https://www.forward-learn.com
|
[] |
Explore our comprehensive services at Forward Learning Group, specializing in business optimization & workforce ... Machine-learning techniques that improve ...
|
Business Optimization & Workforce Development Services Fairfield, CA
As a business, our core ethos revolves around strategic analysis, customized solutions, and ongoing refinement.
We are committed to enhancing efficiency, productivity, and overall performance for our clients through our tailored approach.
Our DNA is rooter in optimizing others, driving transformative change, and delivery tangible results.
| 2023-03-15T00:00:00 |
https://www.forward-learn.com/services
|
[
{
"date": "2023/03/15",
"position": 81,
"query": "machine learning workforce"
}
] |
|
Labor Strategy
|
Labor Strategy
|
https://ankura.com
|
[] |
Labor Union Negotiation. Ankura conducts operational studies that examine every word of a labor contract to attribute a financial impact to contract clauses.
|
In a world where the performance of people drives financial results, having the right labor strategy is critical. After all, an organization's most controllable cost and highest value creator is its people. By leveraging the right labor strategies and creating appropriate governing principles for leaders, organizations can achieve cost savings while driving increased engagement. Whether an owner-operator or potential investor, all parties can benefit from crucial insights into an organization’s current labor environment.
Ankura’s labor strategy professionals help clients design and implement labor strategies, including shift schedules, to produce rapid and sustainable improvements. We help our clients understand their cultural stability, identify opportunities to remove hidden costs, and drive profit by engaging with their employees in a meaningful way.
| 2023-03-15T00:00:00 |
https://ankura.com/services/labor-strategy/
|
[
{
"date": "2023/03/15",
"position": 74,
"query": "AI labor union"
}
] |
|
Volume 26
|
Yale Journal of Law & Technology
|
https://yjolt.org
|
[] |
This essay examines racial formation in the context of the digital public sphere with a focus on how artificial intelligence (AI) systems' understanding of ...
|
Gilad Abiri, Sebastián Guidi
26 YALE J.L. & TECH. 240
We are witnessing the birth of a Platform Federation. Global platforms wield growing power over our public sphere–and yet our politics and public debates remain stubbornly state-based. In the platform age, speech can transcend international boundaries, but the repercussions of speech are mainly felt within our own domiciles, municipalities, and national territories. This mismatch puts countries in a difficult place, in which they must negotiate the tension between steering the public sphere to protect local speech norms and values and the immense benefits of free transboundary communication. This Article explores the outcome of this balancing act—what we call platform federalism: where it comes from, how it is unfolding, and how to make it better. The rise of global digital platforms brought up a crisis that has not yet been fully diagnosed. Until their appearance, the public sphere was disciplined by gatekeepers such as traditional mass media and other civil society institutions. They acted to enforce a common set of norms over public discourse. These gatekeepers fulfilled crucial social functions. They enacted and enforced the fundamental social norms that made public communication possible, while at the same time avoiding direct state intervention in public discourse. Through social media, people are now able to bypass these institutions and reach mass audiences directly—what we call the “bypass effect.” Countries are reacting to the consequences of the bypass effect by enforcing local social norms directly. Autocracies might enjoy the dubious luxury of shutting down Internet borders completely. This option, however, is not available for democracies, nor is it desirable. Democracies have embraced softer forms of regulation, which we call “state federalism.” As civil-society gatekeepers are bypassed, states take the mission of curating the public sphere onto themselves: they forcefully impose their own civility norms on platforms’ users (like Germany) or directly forbid fake news on them (like France). State federalism might work in restoring the public sphere’s civility, but it risks unduly imposing the state’s (as opposed to the community’s) values upon the population. State federalism, in other words, can quickly become incompatible with liberalism. We propose a new set of policy tools to maintain domestic civility in the public sphere while keeping state power at bay: civil society federalism. In civil society federalism, the state does not police the public sphere by itself, but rather requires platforms to invite civil society back into their gatekeeping role. These policies ask civil-society organizations to shape the norms that constitute public discourse; as in the past, they are the ones to exclude hate speech, profanity, or misinformation from the public sphere. By bringing civil society back, states can ensure the civility of the public sphere without exerting undue power over it.
| 2023-03-15T00:00:00 |
https://yjolt.org/volume/26
|
[
{
"date": "2023/03/15",
"position": 79,
"query": "AI labor union"
}
] |
|
AI in health care: the risks and benefits
|
AI in health care: the risks and benefits
|
https://www.medicaleconomics.com
|
[
"Jon Moore"
] |
AI algorithms can monitor patients' health data over time and provide recommendations for lifestyle changes and treatment options that can help manage their ...
|
The hype around artificial intelligence (AI) spiked again recently with the public release of ChatGPT. The easy-to-use interface of this natural language chat model makes this AI particularly accessible to the public, allowing people to experience first-hand the potential of AI. This experience has spurred users’ imagination and generated feelings ranging from great excitement to fear and consternation.
But the reality is that for many years now, AI has been making remarkable strides in a wide range of industries and health care is no exception.
The potential benefits of incorporating AI into health care are numerous but like every technology, AI comes with risks that must be managed if the benefits of these tools are to outweigh the potential costs.
One of the most significant benefits of AI is improved diagnostic speed and accuracy. AI algorithms can process large amounts of data quickly and accurately, making it easier for health care providers to diagnose and treat diseases.
For example, AI algorithms can analyze medical images, such as X-rays and MRI scans, to identify patterns and anomalies that a human provider might miss. This can lead to earlier and more accurate diagnoses, resulting in better patient outcomes.
In addition, AI algorithms can help health care providers by providing real-time data and recommendations. For example, algorithms can monitor patients’ vital signs, such as heart rate and blood pressure, and alert doctors if there is a sudden change. This can help health care providers respond quickly to potential emergencies and prevent serious health problems from developing.
AI can also help health care providers better manage chronic conditions. AI algorithms can monitor patients’ health data over time and provide recommendations for lifestyle changes and treatment options that can help manage their condition. This can lead to better patient outcomes, improved quality of life, and reduced health care costs.
Finally, AI has the ability to improve access to care. Its algorithms can enable providers to reach more patients, especially in remote and underserved areas. For example, telemedicine services powered by AI can provide remote consultations and diagnoses, making it easier for patients to access care without having to travel.
However, along with the many benefits of AI there are security and privacy risks that must be considered. One of the biggest risks is the potential for data breaches. As health care providers create, receive, store and transmit large quantities of sensitive patient data, they become targets for cybercriminals. Bad actors can and will attack vulnerabilities anywhere along the AI data pipeline.
Another risk is the unique privacy attacks that AI algorithms may be subject to, including membership inference, reconstruction, and property inference attacks. In these types of attacks, information about individuals, up to and including the identity of those in the AI training set, may be leaked.
There are other types of unique AI attacks as well, including data input poisoning and model extraction. In the former, an adversary may insert bad data into a training set thereby affecting the model’s output. In the latter, the adversary may extract enough information about the AI algorithm itself to create a substitute or competitive model.
Finally, there is the risk of AI being used directly for malicious purposes. For example, AI algorithms could be used to spread propaganda, or to target vulnerable populations with scams or frauds. ChatGPT, referenced above, has already been used to write highly convincing phishing emails.
To mitigate these risks, health care providers should continue to take the traditional steps to ensure the security and privacy of patient data. These include conducting risk analyses to understand their unique risks and responding to those risks by implementing strong security measures, such as encryption and multi-factor authentication. Additionally, health care providers must have clear policies in place for the collection and use of patient data, to ensure that they are not violating patient privacy.
Health care providers should consider being transparent about the algorithms they are using and the data they are collecting. Doing so can reduce the risk of algorithmic bias while ensuring that patients understand how their data is being used.
Finally, health care providers must be vigilant about detecting and preventing attacks on the AI algorithms themselves.
Jon Moore is chief risk officer and head of consulting services and customer success of Clearwater, a cybersecurity firm.
| 2023-03-15T00:00:00 |
https://www.medicaleconomics.com/view/ai-in-health-care-the-risks-and-benefits
|
[
{
"date": "2023/03/15",
"position": 3,
"query": "AI healthcare"
}
] |
|
Generative AI for business leaders
|
Generative AI for business leaders - Generative AI for Business Leaders Video Tutorial
|
https://www.linkedin.com
|
[] |
Learn how to adapt and use this advanced technology, its limitations, and why it could be disruptive for your business and your career.
|
“
- Over the past two decades, artificial intelligence has quietly transformed our lives. And now the impact is about to get much, much bigger. Recent AI developments are advancing so quickly that they're becoming a powerful force with the potential to reshape entire industries, economies, even reshape society. This will redefine how we work from automating complex processes to writing new code, to creating applications and experiences that were impossible before. For some, this technological leap might feel like a threat. But I invite you to think about the possibilities. It will take our innovation to a whole new level. Imagine having your own ultimate all-powerful brainstorming partner. This course itself was co-created with AI over many brainstorming sessions and passionate debates. Here's a quick note from my AI partner. - [Chat Bot] Remember, even the most advanced AI systems can't replicate the creativity and ingenuity of a human mind. So don't worry about losing your job to a robot just yet, unless of course, you're a robot reading this, smiley. - It does have a nice sense of humor. Whether you're a senior executive or an aspiring leader, it's critical that you learn how to adapt and use this advanced technology, its limitations, and why it could be disruptive for your business and your career. My name is Tomer Cohen, I'm LinkedIn's Chief Product Officer, and for the majority of my career, I've been leading, designing, and building products powered by AI. I look forward to sharing key tools and insights to help you reinvent the future of your business.
| 2023-03-15T00:00:00 |
https://www.linkedin.com/learning/generative-ai-for-business-leaders/generative-ai-for-business-leaders
|
[
{
"date": "2023/03/15",
"position": 4,
"query": "artificial intelligence business leaders"
}
] |
|
Grenadian PM calls for Caribbean business leaders to ...
|
Grenadian PM calls for Caribbean business leaders to invest in AI
|
https://our.today
|
[
"Our Today"
] |
The impact of AI will continue to grow, and investing in AI will ensure a competitive edge in regional and global markets,” Mitchell added. Trinidadian ...
|
Reading Time: 3 minutes
In observance of ICT Week 2023, Prime Minister Dikon Mitchell delivers the keynote address at a business forum hosted by the Grenada Chamber of Commerce on February 28, 2023.(Photo: Facebook @PMOGrenada)
Prime Minister of Grenada Dickon Mitchell is urging Caribbean entrepreneurs to invest in artificial intelligence technologies (AI), as a means to optimise and transform business across the region.
Mitchell, who delivered the keynote address at a business forum held at the Grenada Trade Centre on Tuesday (February 28), stressed that AI could prove advantageous and that the region should not be slow to capitalise.
“The advent of artificial intelligence has the potential to transform businesses in the Caribbean and around the world. Through the use of AI, businesses can analyse data faster and more accurately, identify trends and make better decisions,” he said in St George’s.
“OpenAI, for example, has made significant strides in the field of natural language processing and machine learning, creating new opportunities for businesses to improve customer service and create new products. The impact of AI will continue to grow, and investing in AI will ensure a competitive edge in regional and global markets,” Mitchell added.
Trinidadian technologist and development strategist Bevil Wooding speaking at a business forum hosted by the Grenada Chamber of Commerce on February 28, 2023. (Photo: Facebook @PMOGrenada)
Bevil Wooding, a director at the American Registry for Internet Numbers (ARIN), and co-founder of CaribNOG opined that “the Caribbean is ready right now to take the next step in the digital revolution, and AI is part of our arriving future”.
“However, our adoption of AI or any emerging technology should align with what our businesses and societies need. It is critical that Caribbean governments, business leaders, technocrats and academics make technology the servant not the master of our development agenda, as we collectively pursue our business interests, national priorities and regional development programmes,” he continued.
For his part, Kevin Khelawan, co-founder of Teleios Systems Limited, said that: “AI is going to significantly impact Caribbean businesses, and it is critical that we understand that. Business leaders must connect business strategy with technology adoption.”
Kevin Khelawan, co-founder of Teleios Systems Limited.
“AI will likely move a lot faster than the Internet did, in terms of its transformative and disruptive power. So Caribbean business leaders will need to be proactive in considering how we transform our businesses to remain relevant in a world where AI proliferates,” he mused.
“Expertise in next-generation technologies like AI should be something that the Caribbean region is producing and exporting, not just importing and consuming,” said Stephen Lee, CEO of Arkitechs Inc. and programme director of CaribNOG.
“It is not enough to simply future-proof Caribbean networks at the infrastructure level against climate-related threats, such as hurricanes. As a region, we must go further, and prioritise the development of expertise in emergent technologies that are relevant to our Caribbean context, through deliberate capacity-building and knowledge-sharing, so that Caribbean thinkers and doers can work together to build regionally relevant solutions that are globally marketable.”
Hosted by the Grenada Chamber of Industry and Commerce, the business forum was part of Grenada ICT Week, spanning from February 27 to March 3 and brought together entrepreneurs, ICT professionals, civil society members and international experts to discuss the role of AI in society.
| 2023-03-15T00:00:00 |
2023/03/15
|
https://our.today/grenadas-pm-calls-for-caribbean-business-leaders-to-invest-in-ai/
|
[
{
"date": "2023/03/15",
"position": 84,
"query": "artificial intelligence business leaders"
}
] |
7 critical questions to ask when selecting your 'Ai for Hiring' ...
|
7 critical questions to ask when selecting your ‘Ai for Hiring’ technology
|
https://sapia.ai
|
[] |
Seven essential questions to evaluate AI hiring technology, focusing on ethics, and bias, to ensure a fair and effective recruitment process.
|
Interrupting bias in people decisions
We hope that the debate over the value of diverse teams is now over. There is plenty of evidence that diverse teams lead to better decisions and therefore, business outcomes for any organisation.
This means that CHROs today are being charged with interrupting the bias in their people decisions and expected to manage bias as closely as the CFO manages the financials.
But the use of Ai tools in hiring and promotion requires careful consideration to ensure the technology does not inadvertently introduce bias or amplify any existing biases.
To assist HR decision-makers to navigate these decisions confidently, we invite you to consider these 8 critical questions when selecting your Ai technology.
You will find not only the key questions to ask when testing the tools but why these are critical questions to ask and how to differentiate between the answers you are given.
Question 1
What training data do you use?
Another way to ask this is: what data do you use to assess someone’s fit for a role?
First up- why is this an important question to ask …
Machine-learning algorithms use statistics to find and apply patterns in data. Data can be anything that can be measured or recorded, e.g. numbers, words, images, clicks etc. If it can be digitally stored, it can be fed into a machine-
learning algorithm.
The process is quite basic: find the pattern, apply the pattern.
This is why the data you use to build a predictive model, called training data, is so critical to understand.
In HR, the kinds of data that could be used to build predictive models for hiring and promotion are:
CV data and cover letters
Games built to measure someone’s memory capacity and processing speed
Behavioural data, e.g. how you engage in an assessment,
Video Ai can capture how you act in an interview—your gestures, pose, lean, as well as your tone and cadence.
Your text or voice responses to structured interview questions
Public data sources such as your social media profile, your tweets, and other social media activity
If you consider the range of data that can be used in training data, not all data sources are equal, and on its surface, you can certainly see how some carry the risk of amplifying existing bases and the risk of alienating your candidates.
Consider the training data through these lenses:
> Is the data visible or opaque to the candidate?
Using data that is invisible to the candidate may impact your employer brand. And relying on behavioural data such as how quickly a candidate completes the assessment, social data or any data that is invisible to the candidate might expose you to not only brand risk but also a legal risk. Will your candidates trust an assessment that uses data that is invisible to them, scraped about them or which can’t be readily explained?
Increasingly companies are measuring the business cost from poor hiring processes that contribute to customer churn. 65% of candidates with a positive experience would be a customer again even if they were not hired and 81% will share their positive experience with family, friends and peers (Source: Talent Board).
Visibility of the data used to generate recommendations is also linked to explainability which is a common attribute now demanded by both governments and organisations in the responsible use of Ai.
Video Ai tools have been legally challenged on the basis that they fail to comply with baseline standards for AI decision-making, such as the OECD AI Principles and the Universal Guidelines for AI.
Or that they perpetuate societal biases and could end up penalising nonnative speakers, visibly nervous interviewees or anyone else who doesn’t fit the model for look and speech.
If you are keen to attract and retain applicants through your recruitment pipeline, you may also care about how explainable and trustworthy your assessment is. When the candidate can see the data that is used about them and knows that only the data they consent to give is being used, they may be more likely to apply and complete the process. Think about how your own trust in a recruitment process could be affected by different assessment types.
> Is the data 1st party data or 3rd party data?
1st party data is data such as the interview responses written by a candidate to answer an interview question. It is given openly, consensually and knowingly. There is full awareness about what this data is going to be used for and it’s typically data that is gathered for that reason only.
3rd party data is data that is drawn from or acquired through public sources about a candidate such as their Twitter profile. It could be your social media profile. It is data that is not created for the specific use case of interviewing for a job, but which is scraped and extracted and applied for a different purpose. It is self-evident that an Ai tool that combines visible data and 1st party data is likely to be both more accurate in the application for recruitment and have outcomes more likely to be trusted by the candidate and the recruiter.
Trust matters to your candidates and to your culture …
At PredictiveHire, we are committed to building ethical and engaging assessments. This is why we have taken the path of a text chat with no time pressure. We allow candidates to take their own time, reflect and submit answers in text format.
We strictly do not use any information other than the candidate responses to the interview questions (i.e. fairness through unawareness – algorithm knows nothing about sensitive attributes).
For example, no explicit use of race, age, name, location etc, candidate behavioural data such as how long they take to complete, how fast they type, how many corrections they make, information scraped from the internet etc. While these signals may carry information, we do not use any such data.
2. Can you explain why ‘person y’ was recommended by the Ai and not ‘person z’?
Another way to ask this is – Can you explain how your algorithm works? and does your solution use deep learning models?
This is an interesting question especially given that we humans typically obfuscate our reasons for rejecting a candidate behind the catch-all explanation of “Susie was not a cultural fit”.
For some reason, we humans have a higher-order need and expectation to unpack how an algorithm arrived at a recommendation. Perhaps because there is not much to say to a phone call that tells you were rejected for cultural fit.
This is probably the most important aspect to consider, especially if you are the change leader in this area. It is fair to expect that if an algorithm affects someone’s life, you need to see how that algorithm works.
Transparency and explainability are fundamental ingredients of trust, and there is plenty of research to show that high trust relationships create the most productive relationships and cultures.
This is also one substantial benefit of using AI at the top of the funnel to screen candidates. Subject to what kind of Ai you use, it enables you to explain why a candidate was screened in or out.
This means recruitment decisions become consistent and fairer with AI screening tools.
But if Ai solutions are not clear why some inputs (called “features” in machine learning jargon) are used and how they contribute to the outcome, explainability becomes impossible.
For example, when deep learning models are used, you are sacrificing explainability for accuracy. Because no one can explain how a particular data feature contributed to the recommendation. This can further erode candidate trust and impact your brand.
The most important thing is that you know what data is being used and then ultimately, it’s your choice as to whether you feel comfortable to explain the algorithm’s recommendations to both your people and the candidate.
3. What assumptions and scientific methods are behind the product? Are they validated?
Assessment should be underpinned by validated scientific methods and like all science, the proof is in the research that underpins that methodology.
This raises another question for anyone looking to rely on AI tools for human decision making – where is the published and peer-reviewed research that ensures you can have confidence that a) it works and b) it’s fair.
This is an important question given the novelty of AI methods and the pace at which they advance.
At PredictiveHire, we have published our research to ensure that anyone can investigate for themselves the science that underpins our AI solution.
INSERT RESEARCH
We continuously analyse the data used to train models for latent patterns that reveal insights for our customers as well as inform us of improving the outcomes.
4. What are the bias tests that you use and how often do you test for bias?
It’s probably self-evident why this is an important question to ask. You can’t have much confidence in the algorithm being fair for your candidates if no one is testing that regularly.
Many assessments report on studies they have conducted on testing for bias. While this is useful, it does not guarantee that the assessment may not demonstrate biases in new candidate cohorts it’s applied on.
The notion of “data drift” discussed in machine learning highlights how changing patterns in data can cause models to behave differently than expected, especially when the new data is significantly different from the training data.
Therefore on-going monitoring of models is critical in identifying and mitigating risks of bias.
Potential biases in data can be tested for and measured.
These include all assumed biases such as between gender and race groups that can be added to a suite of tests. These tests can be extended to include other groups of interest where those group attributes are available like English As Second Language (EASL) users.
On bias testing, look out for at least these 3 tests and ask to see the tech manual and an example bias testing report.
Proportional Parity Test. This is the standard EEOC measure for adverse impact on selection and recommendations.
This is the standard EEOC measure for adverse impact on selection and recommendations. Score Distribution Test. This measures whether the assessment score distributions are similar across groups of interest
This measures whether the assessment score distributions are similar across groups of interest Fairness Test. This measures whether the assessment is making the same rate of errors across groups of interest
INSERT IMAGE
At PredictiveHire, we conduct all the above tests. We conduct statistical tests to check for significant differences between groups of feature values, model outcomes and recommendations. Tests such as t-tests, effect sizes, ANOVA, 4/5th, Chi-Squared etc. are used for this. We consider this standard practice.
We go beyond the above standard proportional and distribution tests on fairness and adhere to stricter fairness considerations, especially at the model training stage on the error rates. These include following guidelines set by IBM’s AI Fairness 360 Open Source Toolkit. Reference: https://aif360.mybluemix.net/ ) and the Aequitas project at the Centre for Data Science and Public Policy at the University of Chicago
We continuously analyse the data used to train models for latent patterns that reveal insights for our customers as well as inform us of improving the outcomes.
5. How can you remove bias from an algorithm?
We all know that despite best intentions, we cannot be trained out of our biases. Especially the unconscious biases.
This is another reason why using data-driven methods to screen candidates is fairer than using humans.
Biases can occur in many different forms. Algorithms and Ai learn according to the profile of the data we feed it. If the data it learns from is taken from a CV, it’s only going to amplify our existing biases. Only clean data, like the answers to specific job-related questions, can give us a true bias-free outcome.
If any biases are discovered, the vendor should be able to investigate and highlight the cause of the bias (e.g. a feature or definition of fitness) and take corrective measure to mitigate it.
On which minority groups have you tested your products?
If you care about inclusivity, then you want every candidate to have an equal and fair opportunity at participating in the recruitment process.
This means taking account of minority groups such as those with autism, dyslexia and English as a second language (EASL), as well as the obvious need to ensure the approach is inclusive for different ethnic groups, ages and genders.
At PredictiveHire, we test the algorithms for bias on gender and race. Tests can be conducted for almost any group in which the customer is interested. For example, we run tests on “English As a Second Language” (EASL) vs. native speakers.
What kind of success have you had in terms of creating hiring equity?
If one motivation for you introducing Ai tools to your recruitment process is to deliver more diverse hiring outcomes, it’s natural you should expect the provider to have demonstrated this kind of impact in its customers.
If you don’t measure it, you probably won’t improve it. At PredictiveHire, we provide you with tools to measure equality. Multiple dimensions are measured through the pipeline from those who applied, were recommended and then who was ultimately hired.
8. What is the composition of the team building this technology?
Thankfully, HR decision-makers are much more aware of how human bias can creep into technology design. Think of how the dominance of one trait in the human designers and builders have created an inadvertent unfair outcome.
In 2012, YouTube noticed something odd.
About 10% of the videos being uploaded were upside down .
When designers investigated the problem, they found something unexpected: Left-handed people picked up their phones differently, rotating them 180 degrees, which lead to upside-down videos being uploaded,
The issue here was a lack of diversity in the design process. The engineers and designers who created the YouTube app were all right-handed, and none had considered that some people might pick up their phones differently.
In our team at PredictiveHire, from the top down, we look for diversity in its broadest definition.
Gender, race, age, education, immigrant vs native-born, personality traits, work experience. It all adds up to ensure that we minimise our collective blind spots and create a candidate and user experience that works for the greatest number of people and minimises bias.
What other questions have you used to validate the fairness and integrity of the Ai tools you have selected to augment your hiring and promotion processes?
We’d love to know!
| 2023-03-15T00:00:00 |
https://sapia.ai/resources/blog/7-critical-questions-to-ask-when-selecting-your-ai-for-hiring-technology/
|
[
{
"date": "2023/03/15",
"position": 68,
"query": "artificial intelligence hiring"
}
] |
|
Artificial Intelligence in the Classroom
|
Artificial Intelligence in the Classroom
|
https://www.chapman.edu
|
[] |
A collection of resources/ideas prepared by CETL. Use AI with students to develop higher-level thinking skills and encourage academic integrity.
|
Use AI writers as researchers. They can research a topic exhaustively in seconds and compile text for review, along with references for students to follow This material can then inform original and carefully referenced student writing. Use AI writers to produce text on a given topic for Design assessment tasks that involve this efficient use of AI writers, then critical annotation of the text that is produced. Use different AI writers to produce different versions of text on the same topic, to compare and evaluate. Use and attribute AI writers for routine text, for example, blog content. Use discrimination to work out where and why AI text, human text, or hybrid text are appropriate, and give accounts of this thinking. Use and attribute AI writers for creative text, for example Google’s Verse by Verse requires the user to input a first line, then writes the rest of the poem or provides suggestions based on the work of famous poet muses. This is just one of countless ways that AI can make interventions in creative processes. Students can research the multiple programs and algorithms on offer. Explore and evaluate the different kinds of AI-based content creators that are appropriate for your discipline. Research and establish the specific affordances of AI-based content generators for your discipline. For example, how might it be useful to be able to produce text in multiple languages, in seconds? Or create text optimized for search engines? Explore different ways AI writers and their input can be acknowledged and attributed ethically and appropriately in your discipline. Model effective note-making and record- keeping. Use formative assessment that explicitly involves discussion of the role of AI in given tasks. Discuss how AI could lead to various forms of plagiarism, and how to avoid this.
What was the body of material on which this AI was trained? In other words, what has this AI read and absorbed, to make its “assumptions” of what strings of words make “sense”?
Who, and what, has been excluded from this body of material, and therefore, potentially, the text generated?
What assumptions, biases and injustices are embedded in this material, and therefore, potentially, in the text generated?
Assess process rather than outcome (completed product); scaffold in skills and competencies associated with writing, producing, and creating.
A Sample Class Activity (from Times Higher Education)
Some of the key critical questions to ask about any AI text generators are:
Take a given week’s assigned reading. Ask students to discuss it in small groups for five minutes (this works with 10 students, or 600 students; online or face-to-face).
Then introduce them to OpenAI’s GPT-3.5.
Break students into groups of three and invite them to plug the reading’s research question into GPT-3.5 and let it generate an alternative essay. Ask the students to assess the writing in line with the course learning objectives.
They can compare the assigned reading and the AI-generated content. It is a great way to explore nuances. This can be done as an assessment, but it needs to be closely aligned to learning objectives such as: evaluation of evidence; identification of assumptions; review of methodology or lack thereof, etc.
Need to help students develop information literacy skills to counter the misinformation that a convincing AI generated text can produce.
Minimize opportunities to use AI in assessments by shifting assessment types and practices.
We must teach our students what this means in practice; how this changes the process of creating essays.
Ideas below from Times Higher Education (1)
AI-generated citations are often fully or partially fake (e.g., the author is real but there was no study published in the year indicated)
Text is original, not copied, so it won’t be flagged in plagiarism detection programs
AI gives the appearance of knowledge but has no ability to reflect on what they have written or check whether their output is decent, accurate, or honest
Need to rethink assessment and harness creative AI for learning
If we are setting students assignments that can be answered by AI tools that lack self-awareness, are we really helping students learn?
There are many better ways to assess for learning, such as constructive feedback, peer assessment, reflective practice and teach back. In a class on academic writing, transformers could show students different ways to express ideas and structure assignments. A teacher can run a classroom exercise to generate a few assignments on a topic, then get students to critique them and write their own better versions. Writing with creative machines may soon become as natural to students as processing words on a screen.
Transformer technology is a step towards a new kind of human-machine creativity where writers and artists collaborate with machines to produce interactive stories, images, films and games. How higher education manages this transition will show how it prospers in a hybrid world of real and artificial experience.
Ideas below from Times Higher Education (2)
How can ChatGPT be used to reimagine the way we teach content and deliver assessment? How can we redesign and craft cheat-proof assessments that embrace AI?
Learning designers can generate content in collaboration with AI; it enables them to be more efficient in practice and processes; can cut down on build times
AI can be used to generate content during the course design process, such as writing the course outline, first draft of course content, scripting and editing videos and podcasts (there are examples/explanations of each in the article)
AI can be used as a starting point, with content to be reviewed by SMEs or learning designers; leaves more time for other elements of course design such as interactives
Principles: AI should not be able to pass a course, when AI is used it should be attributed, AI should be open and documented (from the Sentient Syllabus Project)
Ideas below from Inside Higher Ed
Have students engage in a Socratic debate with AI as a way of thinking through a question and articulating an argument
In computer science: AI can deliver codes that work, but may not be easy to edit or understand by a real person; create assignments that distinguish between content and creative content
Assignments that require critical thinking; ChatGPT’s ability to craft logical arguments is currently weak; it generates, but does not reflect on accuracy or soundness of arguments
Compare to Wikipedia – both offer coherent prose that is prone to errors; adapt assignments to mix use of tech tools with fact-checking
Students can be expected to use ChatGPT to produce first drafts that warrant review for accuracy, voice, audience, and integration to the purpose of the writing project
Faculty may need to help students learn to mitigate and address the inherent, real-world harm new tech tools may pose
Resources for exploring ChatGPT and higher education (living document, includes readings, videos, podcasts, etc.)
Examples of ChatGPT uses
An exploration of using AI tools with students
ChatGPT and students with disabilities
Will AI tech like ChatGPT improve inclusion for people with communication disability?
AI technologies like ChatGPT may help people with communication disabilities to:
Expand on short sentences, saving time and effort
Draft or improve texts for emails, instructions, or assignments
Suggest scripts to practice or rehearse what to say in social situations
Model how to be “more polite” or “more direct” in written communication
Practice conversations, including asking and answering questions
Correct errors in texts produced for a range of purposes
Write a complaint letter, including nuance and outcomes of not taking action
Help with making that first approach to a person socially.
Suggestions below for alternate assessments from
Chat GPT Is Here! – 5 Alternative Ways To Assess Your Class!
Oral presentations: Have students give a presentation on a topic they’ve been studying. This allows you to assess their public speaking skills, as well as their understanding of the material. Group projects: Assign students to work in small groups to complete a project. This allows you to assess their teamwork and collaboration skills, as well as their understanding of the material. Self-reflection: Have students reflect on their own learning through written or oral reflections. This allows you to assess their metacognitive skills and self-awareness. Peer assessment: Have students assess the work of their classmates. This allows you to assess their understanding of the material, as well as their ability to give and receive constructive feedback. Performance-based assessment: Have students demonstrate their understanding through hands-on activities or projects such as science experiments, art projects, or mock-trials. This allows you to assess their understanding of the material, as well as their critical thinking and problem-solving skills.
Adapting your course for AI
ChatGPT: How to adapt your courses for AI?
Craft course guidelines with AI in mind Creating growth-oriented and skill-based activities Using alternative assessment practices
Specific examples of these three strategies are included in the article
Using AI to bridge attainment gaps; ideas below from "How to use ChatGPT to help close the awarding gap"
Rewording concepts and considering different perspectives Providing applied examples Comparison of essay structures
Ideas below from: “The nail in the coffin: How AI could be the impetus to reimagine education”
We can design education that is AI-proof, but we will have to do it by designing learning experiences that are so meaningful and beguiling that students wouldn’t want to turn to AI any more than they would want to have AI play a video game for them or eat a delicious meal for them.
What if we focused on designing significant learning experiences rather than asking students to witness and record our learning?
What would it look like to incorporate far more autonomy, mastery, and purpose into learning?
Are efficiency and standardization what are most needed now in a world where AI can do so much?
Did you know research suggests that non-experts (such as our students) may be more likely to stumble upon solutions to gnarly problems in our fields than experts (us), especially if we prepare them to be creative problem solvers?
ChatGPT: Understanding the new landscape and short-term solutions
Helpful resources related to ChatGPT
Reflections on the future of education in the face of AI
ChatGPT: A Must-See Before the Semester Begins
Faculty Focus article that includes strategies for designing assignments that AI cannot perform – in class writing, writing alternatives, assigning highly specific topics related to something that occurred in class, writing based on human experience/relies on student perspectives, experiences, and cultural capital
Ideas below from Inside Higher Ed:
focus on classroom and human interactions to build relationship and constructive dialogue skills
incorporate more in-class assignments that do not require computers to complete
develop courses and assignments that are specifically designed for working with GPTs and other AI text generators
need to have policies around using AI, including what counts as AI misuse (e.g., it’s not plagiarism, as original content is being created, so what kind of academic misconduct or violation of academic integrity is it?)
Notes below from The Ahead Journal
“Students turn to AI because they are increasingly desperate. They frequently have to commute long distances due to housing concerns; they often have to work north of 20 hours per week on top of their studies to make ends meet, and they face a huge assessment burden in college. They’re often over-assessed, with huge bunching of deadlines across modules, and often in rigid assessment formats, not flexible enough to cater for the needs of diverse students.”
We need to examine the role of educators in creating conditions for academic integrity to flourish; students and educators need to work together to create a trusting environment where dishonesty is minimized, and fairness and equity are demonstrated.
“Perhaps the most important way to promote academic integrity is to highlight the supports available to students who are struggling with their work, introduce more flexible assessment methods, and work across programs to reduce and space out the assessment load. We should also, in my view, be working to create an environment which places far less weight on competitive grading, and far more emphasis on rewarding growing and learning, which disincentivizes cheating.”
Higher education institutions should teach their students how to use AI tools to create better work and to build assessments that develop their skills at critically analyzing and applying AI outputs.
Additional ChatGPT Resources
Focus on the process, not the product - explore open-ended questions to encourage critical thinking, decision-making skills, and emotional intelligence; how can we see the students’ thought process and how they arrive at their decisions? How do we get them to be cognizant of their own thought process – have them document how they arrived at a decision; creating rather than producing (e.g., creating a new business model based on current trends vs just writing about/reporting on what a company is doing already).
The 10 Commandments of Instructional Design
Some earlier explorations of AI and ChatGPT (March 2023)
| 2023-03-15T00:00:00 |
https://www.chapman.edu/ai/artificial-intelligence-in-the-classroom.aspx
|
[
{
"date": "2023/03/15",
"position": 23,
"query": "artificial intelligence education"
}
] |
|
Artificial Intelligence in Education: Benefits and Applications
|
Artificial Intelligence in Education: Benefits and Applications
|
https://www.matellio.com
|
[] |
Artificial Intelligence in Education transforms learning with personalized experiences, task automation & smarter content for better outcome.
|
Those days of photocopying encyclopedia pages in the school library? Long gone. Today’s learners are growing up in a world of digital classrooms, personalized content, and on-demand tutoring—all powered by artificial intelligence in education.
Whether it’s interactive whiteboards, AI-driven eLearning platforms, or generative artificial intelligence in education, the transformation is happening fast.
And it’s big business, too. According to Global Market Estimates, artificial intelligence in the education market is projected to reach USD 20.65 billion by 2028—an eye-popping CAGR of 45.9%.
The takeaway? The role of artificial intelligence in education isn’t optional—it’s essential. And for institutions looking to stay relevant, competitive, and profitable, now’s the time to adopt.
The global market for AI in education will be worth USD 3.68 billion by 2023 , as per the reports by MarketsandMarkets.com .
Around 47% of learning management products will have AI capabilities in the next three years as predicted by eLearning Industry .
AI in education for schools has its various benefits like personalization, deep involvement, the ability to detect weaknesses and educate accordingly 24×7 availability, and many more.
The Real Impact of Artificial Intelligence in Education: What You’re Not Seeing (Yet)
Let’s get straight to the point—artificial intelligence in education is doing far more than just making learning “smarter.” It’s changing how your institution delivers value, scales its offerings, and improves outcomes—without overburdening your staff.
Still relying on traditional content models or manual admin workflows? Here’s what you’re missing out on:
Smart Content That Works Overtime
AI in education now powers digital content that adapts to the learner—age, ability, pace, preferences. That means no more one-size-fits-all textbooks.
Interactive lessons
Auto-generated practice tests
Voice-to-text and translation for accessibility
The benefits of artificial intelligence in education aren’t theoretical—they show up in better engagement, fewer drop-offs, and higher retention rates.
Virtual Learning Assistants That Never Clock Out
AI-driven chatbots and tutors provide 24/7 support for students—answering queries, guiding coursework, and giving real-time feedback. You reduce support tickets. Students get faster answers. Everyone wins.
Automated Admin Work = Real ROI
The use of artificial intelligence in education goes beyond learning. AI automates routine admin tasks—report generation, assignment submissions, progress tracking, and scheduling.
What does that mean for your ops?
Save hundreds of hours per semester
Reallocate staff to strategic roles
Lower operational costs significantly
Personalized Learning at Scale
Here’s the game-changer: With AI, you can offer personalized learning pathways to every student—without adding more teachers. That’s the power of generative artificial intelligence in education and AI in higher education combined.
AI analyzes performance data, recommends tailored content, and adapts based on engagement. It’s scalable, efficient, and proven to boost academic results.
All in all, the impact of artificial intelligence in education isn’t just about technology—it’s about positioning your institution to thrive in a competitive, digital-first world. Still waiting to implement? Your competitors aren’t.
Let’s Transform Your Business with Robust AI Applications! Get Started with a Free 30-minute Consultation Today.
Practical Applications of Artificial Intelligence in Education That Actually Drive Results
Let’s skip the sci-fi and talk strategy. Artificial intelligence in education isn’t about humanoid robots replacing teachers—it’s about helping real educators, institutions, and students do more with less effort. If you’re running a school, university, or EdTech platform, here’s exactly how you can leverage AI eLearning software development to modernize learning, reduce overhead, and improve outcomes.
Voice Assistants: Your On-Demand, AI-Powered Teaching Assistant
Students are already using voice tech to ask questions, set reminders, or learn on the go. With AI in education, you can implement voice-enabled tools across your ecosystem to:
Help students manage study schedules
Answer repetitive classroom queries (saving teacher time)
Deliver coaching instructions while commuting
Enable personalized education in real time
This is one of the fastest-growing applications of AI in education—and it works whether students have smart speakers or just a smartphone app.
Differentiated & Individualized Learning: Without Overloading Teachers
Yes, personalization works—but manually tailoring lessons for 30+ students? Not scalable. That’s where generative artificial intelligence in education takes over.
AI-powered platforms now:
Identify learning gaps
Adjust content difficulty in real-time
Recommend next topics or review materials
Work across levels—from Pre-K to PhD
With AI in higher education, colleges are using adaptive platforms to boost retention, performance, and student satisfaction—without hiring more faculty.
Smart Content: Interactive, Accessible, Always Up-to-Date
The role of artificial intelligence in education is evolving from simply supporting to actually creating content. With AI in education, you can deploy:
Auto-generated quizzes and simulations
Digital textbooks and AR/VR learning spaces
AI-driven feedback loops to detect where students struggle most
Tools that simplify content for diverse learning styles
This type of smart content isn’t just a better student experience—it’s your competitive edge.
Recommendation Systems: Think “Netflix,” But for Learning
AI can analyze student behavior and performance to suggest the next best lesson, topic, or resource. These AI-powered recommendation systems:
Boost engagement with tailored suggestions
Help students move at their own pace
Identify high-risk learners early and intervene
From K–12 to corporate training, this is one of the most profitable artificial intelligence applications in education—because when learners stay engaged, they stay enrolled.
Bottom Line?
Whether you’re looking to cut admin hours, increase personalization, or future-proof your platform, the benefits of using artificial intelligence in education are no longer optional—they’re expected.
How to Build an AI-Based Solution for the Education Industry: A Step-by-Step Guide for Decision Makers
If you’re considering building your own AI-powered education platform, you’re already ahead of the curve. But let’s be clear—artificial intelligence in education is more than just a buzzword. It requires deep planning, the right tech strategy, and a partner who understands the role of artificial intelligence in education from both a technical and academic lens.
Here’s how to build a scalable, future-ready solution that leverages the benefits of artificial intelligence in education and stands out in a competitive market.
Define the Vision (and the ROI)
Before you write a line of code, define why you’re building this. Is your goal to:
Personalize learning for higher retention?
Scale virtual classrooms in higher education?
Automate support and reduce admin overhead?
Whatever your goal, it must be tied to clear outcomes. Remember, the impact of artificial intelligence in education must be measurable—higher engagement, better outcomes, increased revenue, or all of the above.
Design a Future-Proof Architecture
A robust application of artificial intelligence in education requires a solid foundation. Your product architecture must support:
AI model integration
Real-time data processing
Scalable infrastructure
Third-party API connections (LMS, analytics, assessment tools)
Think beyond today—build a system that supports where education is going, not just where it is now.
Choose the Right Tech Stack (A Make-or-Break Step)
Your technology stack defines everything—from speed to scale to security. Here’s a sample stack for an AI-powered learning platform:
Layer Tech Stack Options Frontend (UI/UX) React.js, Angular, Vue.js Backend Node.js, Python (Django/Flask), Java (Spring Boot) AI/ML Frameworks TensorFlow, PyTorch, OpenAI API, scikit-learn Database PostgreSQL, MongoDB, Firebase Cloud Infrastructure AWS, Google Cloud, Microsoft Azure DevOps/CI-CD Docker, Kubernetes, Jenkins, GitHub Actions NLP Capabilities spaCy, GPT-based models, Dialogflow Analytics Google Analytics, Mixpanel, Power BI
Whether you’re building in-house or working with an eLearning software development company, the stack must support AI scalability, performance monitoring, and secure data flow.
Build an MVP (Minimal Viable Product)
Don’t launch a full-scale platform on day one. Instead, build a focused MVP that includes core features:
AI content recommendation
Smart testing or assessments
Personalized dashboards
Voice or chatbot-based interaction
This allows you to validate features, gather user feedback, and test the use of artificial intelligence in education in a low-risk, high-impact way.
Develop in Phases (With Full Technical Specification)
Break the project into structured sprints. Align your team (or partner agency) on:
User roles (admin, student, educator)
AI model behavior
Data flow and integration points
Accessibility, privacy, and compliance (especially in AI in higher education environments)
Development should be agile, iterative, and tightly aligned with business KPIs.
Test Rigorously—Then Test Again
Before you go live, your system must be bulletproof. That means:
Functional testing for every user flow
Security testing for student data privacy
Performance testing under scale
Regression testing after every update
Generative artificial intelligence in education can only deliver value when it’s integrated cleanly and performs predictably. This is your reputation—make sure the system reflects it.
Pro Tip: Partnering with an expert in AI eLearning software development or AI integration services can streamline the entire process—from architecture to post-launch support.
Why Matellio Is the Partner You Need for AI in Education
Bringing artificial intelligence in education to life takes more than just a dev team—it takes strategy, precision, and deep domain knowledge. That’s exactly what you get with Matellio.
We’re more than a vendor. We’re your technology ally in reshaping education for the digital-first era.
One-Stop Expertise
From building robust backends to deploying AI eLearning software development, we offer complete solutions under one roof. Whether it’s AI in higher education, K–12 platforms, or corporate L&D, we deliver.
Custom-Built for ROI
Every platform we build is engineered around your KPIs—whether that’s higher student retention, improved test scores, or admin cost reduction. Our solutions reflect the real benefits of artificial intelligence in education.
Masters in AI and Integration
We specialize in AI integration and scalable applications of AI in education, including NLP services, computer vision, predictive analytics, and recommendation engines.
Compliance-Ready and Secure
We understand the stakes in education. Our solutions are FERPA-ready, GDPR-compliant, and designed for long-term scale and privacy.
Fast MVP to Full Scale
Want to start small with a Minimum Viable Product? We’ve built dozens. Ready to launch enterprise-wide artificial intelligence applications in education? We’ve done that too.
If you’re looking to lead in the future of artificial intelligence in education, you need a partner who understands your mission, your users, and your market. That partner is Matellio. Schedule a free 30-minute consultation to get started today!
| 2023-03-15T00:00:00 |
2023/03/15
|
https://www.matellio.com/blog/artificial-intelligence-in-education/
|
[
{
"date": "2023/03/15",
"position": 55,
"query": "artificial intelligence education"
}
] |
AI in Education Market Size, Trends, Growth Analysis - 2032
|
AI in Education Market Size, Trends, Growth Analysis
|
https://www.marketresearchfuture.com
|
[
"Market Research Future",
"Https",
"Www.Marketresearchfuture.Com"
] |
Artificial Intelligence in Education Market is projected to grow from USD 4.7 Billion in 2024 to USD 26.43 billion by 2032, exhibiting a compound annual growth ...
|
Artificial Intelligence Education Market Summary As per MRFR Analysis, the Artificial Intelligence in Education Market is projected to grow from USD 4.7 Billion in 2024 to USD 26.43 Billion by 2032, with a CAGR of 37.68% during the forecast period. The market was valued at USD 3.45 Billion in 2023, driven by increased adoption of AI services in educational institutions and a focus on digital transformation. The Content Delivery Systems segment accounted for 35% of the market revenue, while Machine Learning emerged as the leading technology segment. North America is expected to dominate the market due to its advanced educational infrastructure and early adoption of AI technologies. Key Market Trends & Highlights Key trends driving the Artificial Intelligence in Education market include technological advancements and increased digital resource utilization. Market size in 2024: USD 4.7 Billion; projected to reach USD 26.43 Billion by 2032.
CAGR of 37.68% during the forecast period (2024 - 2032).
Content Delivery Systems segment held 35% of market revenue.
On-Premise AI sector valued at over USD 3 Billion in 2022. Market Size & Forecast 2023 Market Size: USD 3.45 Billion 2024 Market Size: USD 4.7 Billion 2032 Market Size: USD 26.43 Billion CAGR: 37.68%. Major Players Major players include IBM Corporation, Microsoft Corporation, Google, Pearson, and DreamBox Learning.
More educational institutions are utilizing AI services; increasing focus on the effective use of digital resources and industry-wide digital transformation are the key market drivers enhancing the AI in education market growth.
Figure 1: Artificial Intelligence in Education Market Size, 2024-2032 (USD Billion)
Source: Secondary Research, Primary Research, MRFR Database and Analyst Review
Recent Developments of Artificial Intelligence in Education Market
When ChatGPT became widely accessible last month, it swept the globe. It could respond to questions with remarkable fluency and coherence using artificial intelligence, and among other things, it might pass muster as a respectable written response to a class assignment.
In this series, educators will discuss their thoughts on how recent breakthroughs in AI technology may impact our classrooms.
On May 13, 2022, the inaugural Day of AI brought artificial intelligence literacy to classrooms around the globe. Day of AI is an initiative of MIT Responsible AI for Social Empowerment and Education (RAISE), and it gives instructors the chance to discuss artificial intelligence (AI) with K–12 students from all backgrounds about its application in their daily lives.
Artificial Intelligence in Education Market Trends
AI technology is strengthening and raising the experience and knowledge of teachers and pupils, will market growth
Vendors of AI technology are creating electrical gadgets with AI capabilities by creating sophisticated learning systems that enhance learning procedures. To survive in a cutthroat world, educational institutions must provide the finest learning environment. For instance, Century Intelligent Learning created a classroom employing AI technology that allows professors to create academic curricula online, allowing students to access their curricula whenever they choose. Moreover, based on the aptitude test results that the student administered, AI technology is utilized to spot knowledge gaps and suggest courses for them to study. AI technology examines these questions to determine the pupils' strengths and shortcomings. The instructors' and students' experience and knowledge are improved through AI-enabled educational products and services. Teachers and parents may better understand their students' performance using these products and services.
Increasing focus on effectively using digital resources to encourage managed service sector growth. The managed service market sector, which falls under services, held a sizeable Artificial Intelligence in Education market share in 2022 and is projected to expand considerably between 2023 and 2032. Using reliable technologies to automate processes and give assistance, managed IT services help schools and other educational institutions operate more efficiently when necessary. This frees up teachers' time to concentrate on teaching and to learn rather than technology administration. Also, the requirement for managed IT services to help educational institutions make efficient use of their digital resources has been greatly driven by the increasing focus that educational institutions are placing on cost reduction.
The education sector is being digitally transformed, increasing demand for on-premise AI. In 2022, the on-premise sector of Artificial Intelligence in the Education market reached a valuation of over USD 3 billion. The use of an on-premises AI model has increased due to the digital revolution in the education industry since it can successfully engage students and boost income through new conversational channels. Also, many academic institutions are investing in upskilling their students to stay up with the rapidly changing digital Technology. Thus, driving the Artificial Intelligence in Education market revenue.
Artificial Intelligence in Education Market Segment Insights
Artificial Intelligence in Education Application Insights
Based on Application, the Artificial Intelligence in Education Market segmentation includes Content Delivery Systems. The Content Delivery Systems segment dominated the market, accounting for 35% of the Artificial Intelligence in Education market revenue. A network of proxy servers and accompanying data centers that are geographically dispersed is known as a content delivery network or content distribution network. Spreading the Service spatially to end customers aims to deliver high availability and performance.
Artificial Intelligence in Education Technology Insights
The Artificial Intelligence in Education Market segmentation, based on Technology, includes Machine Learning, Natural Learning Processes. The Machine Learning category generated the most income. The expansion of virtual assistance in K–12 and high school classrooms, fueled by increasing educational institutions' expenditures in AI technology, is responsible for the rise of Artificial Intelligence in the Education market segment. The AI in education market is divided into Natural Language Processing (NLP) and Machine Learning based on Technology. Also, the use of ML technology in the education industry has grown due to the development of tools for reading knowledge that has been recorded digitally and tools for using data sets to interpret human language.
Artificial Intelligence in Education Deployment Type Insights
Based on deployment type, the global Artificial Intelligence in the Education industry has been segmented into On-Cloud and On-Premise. On-Cloud held the largest segment share in 2022. Reduced ownership costs and a growing demand for educational data sharing among international campuses are two reasons that have contributed to the expansion of the cloud segment. Moreover, it enables academic institutions to integrate cutting-edge AI technology into their current operating model without increasing their capital expenditures.
Figure 2: Artificial Intelligence in Education Market, by Deployment Type, 2022 & 2030 (USD billion)
Source: Secondary Research, Primary Research, MRFR Database and Analyst Review
Artificial Intelligence in Education Component Insights
Based on Components, the global Artificial Intelligence in the Education industry has been segmented into Services and Software. Service held the largest segment share in 2022. Due to expanding government backing and technical improvements, there is a growing need among educational organizations for AI-enabled products and services.
Artificial Intelligence in Education Regional Insights
By Region, the study provides AI in education market insights into North America, Europe, Asia-Pacific and the Rest of the World. North American Artificial Intelligence in the Education market area will dominate this market. This is attributable to the Region's early embrace of cutting-edge technologies. Also, a strong educational system, particularly in the United States and Canada, is projected to fuel market expansion. The market is expanding due to the increasing use of sophisticated tutoring systems, chatbots, and other tools for higher-quality education, as well as better learning tactics that use deep learning and machine learning approaches.
Further, the major countries studied in the AI in education market report are The U.S., Canada, German, France, the UK, Italy, Spain, China, Japan, India, Australia, South Korea, and Brazil.
Figure 3: ARTIFICIAL INTELLIGENCE IN EDUCATION MARKET SHARE BY REGION 2022 (%)
Source: Secondary Research, Primary Research, MRFR Database and Analyst Review
Due to the Region's highly established educational infrastructure, Europe Artificial Intelligence in the Education market is predicted to have considerable development in industry. Over the projected period, rising digitalization initiatives and AI investments are also anticipated to promote market expansion. Further, Germany's Artificial Intelligence in the Education market held the largest market share, and the UK Artificial Intelligence in the Education market was the fastest-growing market in the European Region.
Due to the increasing need for digitization and government efforts like e-governance in India towards Digital India, the Asia Pacific Artificial Intelligence in the Education market is predicted to have significant development. Furthermore, to help students become the leading innovators of the AI revolution, the Indian government is supporting the use of Technology in teaching in schools and colleges. Aside from that, the worldwide artificial Intelligence in education market is anticipated to grow throughout the forecast period, 2019–2026, due to the Indian government's Digital India project, which aims to increase the rate of digital literacy in various technologies, including AI. Moreover, China's Artificial Intelligence in the Education market held the largest market share, and India's Artificial Intelligence in the Education market was the fastest-growing market in the Asia-Pacific region.
Artificial Intelligence in Education Key Market Players & Competitive Insights
Leading industry companies are investing significantly in R&D to broaden their product offerings, which will spur further expansion of Artificial Intelligence in the Education market. Important market developments include new product releases, contractual agreements, mergers and acquisitions, greater investments, and collaboration with other organizations. Market participants also engage in several strategic actions to increase their worldwide presence. Artificial Intelligence in the Education industry must offer products at reasonable prices to grow and thrive in a more cutthroat and competitive environment.
One of the primary business strategies manufacturers employ in Artificial Intelligence in the Education industry to benefit customers and expand the market sector is local manufacturing to reduce operating costs. Some of the biggest medical benefits in recent years have come from Artificial Intelligence in the Education sector. Major players in the Artificial Intelligence in Education market, including IBM Corporation (US), Microsoft Corporation (US), Google (US), com Inc. (US), Cognizant (US), Pearson (UK), Bridge-U (UK), DreamBox Learning (US), Fishtree (US), Jellynote (France), Jenzabar Inc. (US)., and others, are attempting to increase market demand by investing in research and development operations.
With its headquarters in Armonk, New York, and operations in more than 175 nations, the International Business Machines Corporation, sometimes known as Big Blue, is an American global technology company. SXiQ, an Australian provider of digital transformation services focused on cloud platforms, cloud apps, and cloud cybersecurity, was acquired by IBM Corporation in November 2021. The acquisition is anticipated to improve IBM Consulting's capacity to update technological infrastructure and cloud-based applications in New Zealand and Australia.
K–12 education services for math, literacy and ELA, foreign languages, and applied sciences are offered by Carnegie Learning, Inc. Pittsburgh, Pennsylvania's Union Trust Building is home to Carnegie Learning, Inc. At a price of USD 15 million, Carnegie Learning Inc., a top provider of K–12 education solutions, bought Scientific Learning Company in September 2020. This purchase aims to improve learning outcomes and boost the company's portfolio of educational technologies.
Key Companies in the Artificial Intelligence in the Education market include
Google com Inc. (US)
IBM Corporation (US)
Pearson (UK)
Bridge-U (UK)
DreamBox Learning (US)
Cognizant (US)
Fishtree (US)
Jellynote (France)
Microsoft Corporation (US)
Jenzabar Inc. (US).
Artificial Intelligence in Education Industry Developments
July 2021 The Central Board of Secondary Education (CBSE) has partnered with the renowned technology business Intel Corporation. This partnership aims to introduce the AI Students Community learning platform (AISC). To learn from Intel's AI-certified specialists, it will also incorporate students from both non-CBSE and CBSE institutions.
January 2020 Pearson PLC, a supplier of educational publishing and assessment services, has purchased the ed-tech start-up The Smart Sparrow Pty Ltd. This purchase aims to strengthen the business's current capabilities for adaptive learning. It will also direct the implementation of Pearson's Global Learning Platform (GLP).
Artificial Intelligence in Education Market Segmentation
Artificial Intelligence in Education Application Outlook
Content Delivery Systems
Artificial Intelligence in Education Technology Outlook
Machine Learning
Natural Learning Process
Artificial Intelligence in Education Deployment Type Outlook
On-Cloud
On-Premise
Artificial Intelligence in Education Component Outlook
Service
Software
Artificial Intelligence in Education Regional Outlook
North America
US
Canada
Europe
Germany
France
UK
Italy
Spain
Rest of Europe
Asia-Pacific
China
Japan
India
Australia
South Korea
Australia
Rest of Asia-Pacific
Rest of the World
Middle East
Africa
Latin America
Report Attribute/Metric Details Market Size 2023 USD 3.45 billion Market Size 2024 USD 34.7 billion Market Size 2032 USD 26.43 billion Compound Annual Growth Rate (CAGR) 37.68% (2024-2032) Base Year 2023 Market Forecast Period 2024-2032 Historical Data 2019- 2021 Market Forecast Units Value (USD Billion) Report Coverage Revenue Forecast, Market Competitive Landscape, Growth Factors, and Trends Segments Covered Application, Technology, Deployment Type, Component, and Region Geographies Covered North America, Europe, Asia Pacific, and the Rest of the World Countries Covered The U.S., Canada, German, France, the UK, Italy, Spain, China, Japan, India, Australia, South Korea, and Brazil Key Companies Profiled IBM Corporation (US), Microsoft Corporation (US), Google (US), com Inc. (US), Cognizant (US), Pearson (UK), Bridge-U (UK), DreamBox Learning (US), Fishtree (US), Jellynote (France), Jenzabar Inc. (US). Key Market Opportunities Increasing desire for artificial Intelligence to improve educational institutions Key Market Dynamics Cost reduction Increased work efficiency Improved IT security in colleges
| 2023-03-15T00:00:00 |
https://www.marketresearchfuture.com/reports/artificial-intelligence-education-market-6365
|
[
{
"date": "2023/03/15",
"position": 59,
"query": "artificial intelligence education"
}
] |
|
What AI Will do to Job Availability
|
What AI Will do to Job Availability
|
https://www.channelchek.com
|
[
"Phoffman Channelchek.Com"
] |
According to a 2020 Forbes projection, AI and robotics will be a strong creator of jobs and work for people across the globe in the near future.
|
Image Credit: Mises
The Fear of Mass Unemployment Due to Artificial Intelligence and Robotics Is Unfounded
People are arguing over whether artificial intelligence (AI) and robotics will eliminate human employment. People seem to have an all-or-nothing belief that either the use of technology in the workplace will destroy human employment and purpose or it won’t affect it at all. The replacement of human jobs with robotics and AI is known as “technological unemployment.”
Although robotics can turn materials into economic goods in a fraction of the time it would take a human, in some cases using minimal human energy, some claim that AI and robotics will actually bring about increasing human employment. According to a 2020 Forbes projection, AI and robotics will be a strong creator of jobs and work for people across the globe in the near future. However, also in 2020, Daron Acemoglu and Pascual Restrepo published a study that projected negative job growth when AI and robotics replace human jobs, predicting significant job loss each time a robot replaces a human in the workplace. But two years later, an article in The Economist showed that many economists have backtracked on their projection of a high unemployment rate due to AI and robotics in the workplace. According to the 2022 Economist article, “Fears of a prolonged period of high unemployment did not come to pass. . . . The gloomy narrative, which says that an invasion of job-killing robots is just around the corner, has for decades had an extraordinary hold on the popular imagination.” So which scenario is correct?
Contrary to popular belief, no industrialized nation has ever completely replaced human energy with technology in the workplace. For instance, the steam shovel never put construction workers out of work; whether people want to work in construction is a different question. And bicycles did not become obsolete because of vehicle manufacturing: “Consumer spending on bicycles and accessories peaked at $8.3 billion in 2021,” according to an article from the World Economic Forum.
Do people generally think AI and robotics can run an economy without human involvement, energy, ingenuity, and cooperation? While AI and robotics have boosted economies, they cannot plan or run an economy or create technological unemployment worldwide. “Some countries are in better shape to join the AI competition than others,” according to the Carnegie Endowment for International Peace. Although an accurate statement, it misses the fact that productive economies adapt to technological changes better than nonproductive economies. Put another way, productive people are even more effective when they use technology. Firms using AI and robotics can lower production costs, lower prices, and stimulate demand; hence, employment grows if demand and therefore production increase. In the unlikely event that AI or robotic productive technology does not lower a firm’s prices and production costs, employment opportunities will decline in that industry, but employment will shift elsewhere, potentially expanding another industry’s capacity. This industry may then increase its use of AI and robotics, creating more employment opportunities there.
In the not-so-distant past, office administrators did not know how to use computers, but when the computer entered the workplace, it did not eliminate administrative employment as was initially predicted. Now here we are, walking around with minicomputers in our pants pockets. The introduction of the desktop computer did not eliminate human administrative workers—on the contrary, the computer has provided more employment since its introduction in the workplace. Employees and business owners, sometimes separated by time and space, use all sorts of technological devices, communicate with one another across vast networks, and can be increasingly productive.
I remember attending a retirement party held by a company where I worked decades ago. The retiring employee told us all a story about when the company brought in its first computer back in the late ’60s. The retiree recalled, “The boss said we were going to use computers instead of typewriters and paper to handle administrative tasks. The next day, her department went from a staff of thirty to a staff of five.” The day after the department installed computers, twenty-five people left the company to seek jobs elsewhere so they would not “have to learn and deal with them darn computers.”
People often become afraid of losing their jobs when firms introduce new technology, particularly technology that is able to replicate human tasks. However, mass unemployment due to technological innovation has never happened in any industrialized nation. The notion that AI will disemploy humans in the marketplace is unfounded. Mike Thomas noted in his article “Robots and AI Taking Over Jobs: What to Know about the Future of Jobs” that “artificial intelligence is poised to eliminate millions of current jobs—and create millions of new ones.” The social angst about the future of AI and robotics is reminiscent of the early nineteenth-century Luddites of England and their fear of replacement technology. Luddites, heavily employed in the textile industry, feared the weaving machine would take their jobs. They traveled throughout England breaking and vandalizing machines and new manufacturing technology because of their fear of technological unemployment. However, as the textile industry there became capitalized, employment in that industry actually grew. History tells us that technology drives the increase of work and jobs for humans, not the opposite.
We should look forward to unskilled and semiskilled workers’ upgrading from monotonous work because of AI and robotics. Of course, AI and robotics will have varying effects on different sectors; but as a whole, they are enablers and amplifiers of human work. As noted, the steam shovel did not disemploy construction workers. The taxi industry was not eliminated because of Uber’s technology; if anything, Uber’s new AI technology lowered the barriers of entry to the taxi industry. Musicians were not eliminated when music was digitized; instead, this innovation gave musicians larger platforms and audiences, allowing them to reach millions of people with the swipe of a screen. And dating apps running on AI have helped millions of people fall in love and live happily ever after.
About the Author
Raushan Gross is an Associate Professor of Business Management at Pfeiffer University. His works include Basic Entrepreneurship, Management and Strategy, and the e-book The Inspiring Life and Beneficial Impact of Entrepreneurs.
| 2023-03-16T00:00:00 |
2023/03/16
|
https://www.channelchek.com/news-channel/what-ai-will-do-to-job-availability
|
[
{
"date": "2023/03/16",
"position": 7,
"query": "artificial intelligence employment"
},
{
"date": "2023/03/16",
"position": 10,
"query": "AI job creation vs elimination"
},
{
"date": "2023/03/16",
"position": 59,
"query": "artificial intelligence workers"
}
] |
The Fear of Mass Unemployment Due to Artificial ...
|
The Fear of Mass Unemployment Due to Artificial Intelligence and Robotics Is Unfounded
|
https://mises.org
|
[] |
According to a 2020 Forbes projection, AI and robotics will be a strong creator of jobs and work for people across the globe in the near future. However, also ...
|
People are arguing over whether artificial intelligence (AI) and robotics will eliminate human employment. People seem to have an all-or-nothing belief that either the use of technology in the workplace will destroy human employment and purpose or it won’t affect it at all. The replacement of human jobs with robotics and AI is known as “technological unemployment.”
Although robotics can turn materials into economic goods in a fraction of the time it would take a human, in some cases using minimal human energy, some claim that AI and robotics will actually bring about increasing human employment. According to a 2020 Forbes projection, AI and robotics will be a strong creator of jobs and work for people across the globe in the near future. However, also in 2020, Daron Acemoglu and Pascual Restrepo published a study that projected negative job growth when AI and robotics replace human jobs, predicting significant job loss each time a robot replaces a human in the workplace. But two years later, an article in The Economist showed that many economists have backtracked on their projection of a high unemployment rate due to AI and robotics in the workplace. According to the 2022 Economist article, “Fears of a prolonged period of high unemployment did not come to pass. . . . The gloomy narrative, which says that an invasion of job-killing robots is just around the corner, has for decades had an extraordinary hold on the popular imagination.” So which scenario is correct?
Contrary to popular belief, no industrialized nation has ever completely replaced human energy with technology in the workplace. For instance, the steam shovel never put construction workers out of work; whether people want to work in construction is a different question. And bicycles did not become obsolete because of vehicle manufacturing: “Consumer spending on bicycles and accessories peaked at $8.3 billion in 2021,” according to an article from the World Economic Forum.
Do people generally think AI and robotics can run an economy without human involvement, energy, ingenuity, and cooperation? While AI and robotics have boosted economies, they cannot plan or run an economy or create technological unemployment worldwide. “Some countries are in better shape to join the AI competition than others,” according to the Carnegie Endowment for International Peace. Although an accurate statement, it misses the fact that productive economies adapt to technological changes better than nonproductive economies. Put another way, productive people are even more effective when they use technology. Firms using AI and robotics can lower production costs, lower prices, and stimulate demand; hence, employment grows if demand and therefore production increase. In the unlikely event that AI or robotic productive technology does not lower a firm’s prices and production costs, employment opportunities will decline in that industry, but employment will shift elsewhere, potentially expanding another industry’s capacity. This industry may then increase its use of AI and robotics, creating more employment opportunities there.
In the not-so-distant past, office administrators did not know how to use computers, but when the computer entered the workplace, it did not eliminate administrative employment as was initially predicted. Now here we are, walking around with minicomputers in our pants pockets. The introduction of the desktop computer did not eliminate human administrative workers—on the contrary, the computer has provided more employment since its introduction in the workplace. Employees and business owners, sometimes separated by time and space, use all sorts of technological devices, communicate with one another across vast networks, and can be increasingly productive.
I remember attending a retirement party held by a company where I worked decades ago. The retiring employee told us all a story about when the company brought in its first computer back in the late ’60s. The retiree recalled, “The boss said we were going to use computers instead of typewriters and paper to handle administrative tasks. The next day, her department went from a staff of thirty to a staff of five.” The day after the department installed computers, twenty-five people left the company to seek jobs elsewhere so they would not “have to learn and deal with them darn computers.”
People often become afraid of losing their jobs when firms introduce new technology, particularly technology that is able to replicate human tasks. However, mass unemployment due to technological innovation has never happened in any industrialized nation. The notion that AI will disemploy humans in the marketplace is unfounded. Mike Thomas noted in his article “Robots and AI Taking Over Jobs: What to Know about the Future of Jobs” that “artificial intelligence is poised to eliminate millions of current jobs—and create millions of new ones.” The social angst about the future of AI and robotics is reminiscent of the early nineteenth-century Luddites of England and their fear of replacement technology. Luddites, heavily employed in the textile industry, feared the weaving machine would take their jobs. They traveled throughout England breaking and vandalizing machines and new manufacturing technology because of their fear of technological unemployment. However, as the textile industry there became capitalized, employment in that industry actually grew. History tells us that technology drives the increase of work and jobs for humans, not the opposite.
We should look forward to unskilled and semiskilled workers’ upgrading from monotonous work because of AI and robotics. Of course, AI and robotics will have varying effects on different sectors; but as a whole, they are enablers and amplifiers of human work. As noted, the steam shovel did not disemploy construction workers. The taxi industry was not eliminated because of Uber’s technology; if anything, Uber’s new AI technology lowered the barriers of entry to the taxi industry. Musicians were not eliminated when music was digitized; instead, this innovation gave musicians larger platforms and audiences, allowing them to reach millions of people with the swipe of a screen. And dating apps running on AI have helped millions of people fall in love and live happily ever after.
| 2023-03-16T00:00:00 |
https://mises.org/mises-wire/fear-mass-unemployment-due-artificial-intelligence-and-robotics-unfounded
|
[
{
"date": "2023/03/16",
"position": 70,
"query": "artificial intelligence employment"
},
{
"date": "2023/03/16",
"position": 4,
"query": "AI unemployment rate"
},
{
"date": "2023/03/16",
"position": 11,
"query": "AI job creation vs elimination"
}
] |
|
Artificial Intelligence - is my job safe? - Blog
|
Artificial Intelligence
|
https://www.cundall.com
|
[] |
The World Economic Forum report writes by 2025, 85 million jobs may be displaced by a shift in the division of labour between humans and machines. Digital ...
|
So as an engineering consultant today, what are the risks?
“The percentage of work that AI can take over varies, but it's estimated that AI could automate up to 40% of an engineering consultant's work.”
I was told this recently by ChatGPT, an AI driven chatbot. It may be biased (in more ways than one) but after verifying the numbers from other sources it got me thinking – if this is reasonably accurate what makes up this potential 40%?
We need to think about what activities our jobs comprise of, and what proportion of those activities take up our time. Then look at the potential for automation of these tasks. For example;
| 2023-03-16T00:00:00 |
https://www.cundall.com/zh/ideas/blog/artificial-intelligence-is-my-job-safe
|
[
{
"date": "2023/03/16",
"position": 39,
"query": "automation job displacement"
}
] |
|
The Surge in Layoffs and What It Means for the Tech Industry
|
The Surge in Layoffs and What It Means for the Tech Industry
|
https://moqod.com
|
[] |
Artificial Intelligence - the impact of AI on the tech industry has ... CompTIA's analysis shows that the tech unemployment rate actually dropped to ...
|
The global tech sector is experiencing a surge in layoffs as the industry continues to grapple with the effects of the economic crisis. According to data compiled by Layoffs.fyi, a website that tracks job cuts in the tech industry, 482 companies have laid off nearly 128,202 workers since the start of the year. There are no indications that this trend will slow down, so the industry could shed more than 900,000 jobs in 2023 alone.
While the current wave of tech layoffs is undoubtedly driven by the pandemic and its economic fallout, it is worth noting that the tech sector has a long history of shedding jobs during times of economic uncertainty. The dot-com crash of the early 2000s, for example, saw many high-profile tech companies lay off large portions of their workforces as the market corrected. Similarly, the 2008 financial crisis led to widespread layoffs across the industry as companies struggled to weather the economic downturn.
The current wave of tech layoffs is a reminder that no industry is immune to worldwide crises. As companies continue to navigate an uncertain and rapidly changing landscape, it is likely that we will see further job cuts in the tech sector in the coming months. But not for every company.
# Understanding the Factors behind the Ongoing Wave of Tech Layoffs
One of the main reasons cited is the economic crisis, which has forced many companies to cut costs and reduce their workforce. Additionally, the emergence of new technologies such as no-code/low-code platforms and artificial intelligence has also led to changes in the industry and the need to restructure teams and operations.
No-code and low-code technologies are development platforms that allow individuals with little or no coding experience to create software applications. These technologies have a significant impact on the global tech industry, as they have accelerated the speed of software development, reduced costs, and allowed for greater collaboration between technical and non-technical teams. They have also led to an increase in innovation and creativity, as individuals and organizations can quickly prototype and test new ideas without having to rely on a dedicated development team. Overall, no-code and low-code technologies are changing the way software is developed and democratizing the tech industry.
Artificial Intelligence - the impact of AI on the tech industry has been significant in recent years, with a range of working bots and tools being developed to automate tasks and improve efficiency. AI is changing the way software is developed, increasing demand for specialists in areas such as machine learning and data science. With the help of AI, companies are also able to speed up development and reduce costs by automating repetitive tasks and improving accuracy.
However, it's important to note that the current wave of tech layoffs cannot be attributed to a single cause. There are two other significant factors to consider. Firstly, the trend started by Elon Musk, which involved layoffs and his direct involvement in company operations. Secondly, there is an issue of inflated budgets and an excessive number of tech workers, particularly during the period of 2020-2021. Let’s dive deeply into them.
# Inflated staff and salaries
In recent years, there has been an issue of overheating in the IT industry, as large companies have had access to a significant amount of money. With the Fed's low key rate - and hence cheap loans, investors have been more willing to invest in risky experimental projects, with the hope of making big profits in the long term.
However, this has led to a situation where the market has become saturated, and there is a risk of inflation. As a result, the industry is now experiencing a self-purifying process, where companies are cutting back on investments and reducing their workforce to avoid further overheating.
While this may lead to short-term challenges for the industry, it is ultimately a necessary process to ensure long-term stability and growth. Companies will need to focus on innovation and to develop sustainable business models to avoid future overheating and maintain their competitiveness in the market.
# Copycat Behavior Among CEOs After Twitter's Layoffs
Twitter was one of the first companies to initiate mass layoffs, and this action prompted other technology companies to follow suit. Many have speculated about Elon Musk's motives behind this decision, with some suggesting that it was an attempt to save the social network from bankruptcy, gain more power, or stand out among other business leaders.
Regardless of Musk's intentions, his decision set a negative example for other CEOs in the industry, who began to imitate this behavior. This is a phenomenon known as social contagion or copycat behavior, where companies mimic the actions of others in their industry.
According to the Stanford Graduate School of Business and its professor Jeffrey Pfeffer, downsizing in the tech industry is largely driven by this kind of imitation. While it may be a tempting strategy for companies looking to cut costs and streamline their operations, it is crucial to take into account the potential long-term effects on both the workforce and the tech industry in its entirety.
# Myths about Layoffs in the Tech Industry
Layoffs are a common occurrence in the tech industry, and they are often accompanied by various myths that may not be entirely true. Here are some of the most common myths about layoffs:
Myth #1: Layoffs are always a sign of financial trouble: While layoffs can certainly be an indication that a company is struggling, they may also be a strategic move to realign resources or focus on a different area of the business. In some cases, companies may even lay off employees as part of a plan to grow and scale in the long term.
Myth #2: Layoffs only affect low-performing employees: In reality, layoffs can impact employees at all levels of the company, including high performers and those in leadership positions. It's not always a reflection of an individual's performance or value to the company; sometimes layoffs are simply a result of larger market forces or strategic decisions.
Myth #3: Layoffs Boost Stock Prices
Another common myth is that layoffs can boost a company's stock prices. However, this is not always true, as investors may see layoffs as a sign that the company is struggling or facing financial difficulties. In fact, layoffs can sometimes have the opposite effect, leading to a decrease in stock prices.
Myth #4: Layoffs Increase Efficiency
It is often assumed that layoffs lead to increased efficiency, as the remaining employees are forced to work harder and become more productive. However, this is not always the case. When employees are laid off, the workload and stress levels of the remaining employees can increase significantly, leading to burnout and decreased productivity. Additionally, layoffs can result in the loss of valuable skills and knowledge, which can impact the company's ability to innovate and compete in the long run.
While layoffs may seem like a quick fix to cut costs or increase efficiency, they are not always the best solution. Companies should carefully consider the potential costs and impacts of layoffs before making such decisions, and explore alternative solutions that prioritize the well-being of employees and the long-term success of the company.
# Managing Layoffs: Tips for Conducting Them Effectively and Compassionately
Layoffs can be a difficult and emotional process for both the employees and the managers involved. However, if they are necessary for the company's survival, it's important that they are handled properly to minimize the negative impact.
Here are some tips for managers to conduct layoffs properly:
Be transparent and honest: Communicate openly and honestly with the affected employees. Be clear about why the layoffs are necessary and what the company is doing to address the situation. Plan and prepare: Have a clear plan in place before announcing the layoffs. This includes deciding who will be laid off, when it will happen, and how it will be communicated to the employees. Show empathy and respect: Layoffs can be a traumatic experience for employees, so it's important to show empathy and respect for their feelings. Provide resources and support to help them through the transition. Follow legal requirements: Make sure that the layoff process complies with all legal requirements, including severance pay and notice periods. Consider alternatives: Before resorting to layoffs, consider alternatives such as reducing hours or salaries, implementing a hiring freeze, or offering voluntary buyouts. Communicate with remaining employees: Be transparent with the remaining employees about the layoffs and how it will affect them. Address their concerns and provide reassurance about the future of the company.
# Tech unemployment rate drops, but small, and mid-sized companies pick up the slack
As tech layoffs continue to rise, it may seem like the tech industry is in a precarious position. However, recent data suggests that the situation may not be as dire as it appears.
According to recent research conducted by Moqod, the majority of small and medium-sized tech startups in the U.S. are looking to expand their tech teams, with a staggering 71% reporting plans to do so. In contrast, only 14% of respondents predicted that they might need to cut staff. A similar trend was observed in the Netherlands, where almost 73% of surveyed companies plan to expand, and only 3% expressed concerns about the possibility of layoffs.
These results suggest that small and medium-sized tech companies are faring better than larger corporations. One possible explanation for this trend is that smaller companies have not recently experienced an inflated workforce, leading to a more streamlined and agile approach to growth.
CompTIA's analysis shows that the tech unemployment rate actually dropped to 1.5% in January, indicating that a significant number of the workers who were laid off were rehired in the tech industry in a short amount of time. Small and mid-size companies that previously couldn't compete with the salaries and perks offered by traditional Silicon Valley tech companies are now doing their best to absorb the skilled workers.
However, the future is uncertain. While small and mid-size companies are currently picking up the slack, they may not be able to sustain this growth indefinitely. As the competition for tech talent continues to intensify, these companies may find it increasingly difficult to attract and retain the workers they need to stay competitive.
Moreover, the ongoing crisis has disrupted the job market and could have long-term effects on the tech industry. As companies continue to adapt to the new normal, they may need to re-evaluate their hiring strategies and consider new ways of attracting and retaining talent.
| 2023-03-16T00:00:00 |
https://moqod.com/blog/the-surge-in-layoffs
|
[
{
"date": "2023/03/16",
"position": 72,
"query": "AI unemployment rate"
}
] |
|
Social Media Recruiting Statistics for 2023
|
Social Media Recruiting Statistics for 2023
|
https://gohire.io
|
[
"Sophie Smith",
"Aaina Bajaj"
] |
Use social media recruiting software to automate mundane tasks and hire candidates faster. This way, you can save a lot of effort and fill company job openings ...
|
Hiring has never been easy. The job market is hypercompetitive, and every recruiter is trying their best to hire top talent for their organisation. Sometimes, finding the right candidate and filling a job position takes months.
However, with time, new technology, processes, and mediums have been created that have somewhat eased the hiring process. Social media recruitment is one such way of hiring qualified candidates faster. Platforms like LinkedIn and Twitter can help you increase your brand’s visibility, expand your reach, and connect with diverse candidates.
But how powerful is social media recruitment? You can find out by looking at the key statistics about social media recruitment we’ve gathered for you.
Let’s begin:
Essential social media recruiting statistics for recruiters
1) 79% of job seekers use social media platforms in their job search
79% of job seekers using social media for their job search is a big deal. If you play your cards right, more job seekers will apply for a role at your company than your competitors. Also, chances are that you can fill these job positions sooner than expected.
Here’s how you can create a positive first impression for job seekers using social media for job search:
Maintain an active and strong social media presence through frequent job posting and actively respond to candidates performing a job search.
Write an engaging job description that captures job seekers’ attention.
💡 Related Read: Social Media Recruiting Tools - 8 Platforms You Should Consider
2) Job seekers rank social media and professional networks as more useful compared to job ads and recruiting agencies
A careerarc survey of over 1000 professionals found that most job seekers find social media sites and professional networking platforms like LinkedIn and Twitter as more efficient job search resources than career sites, job listings, job posts, and other recruitment methods. It shows how effective social recruiting is and what impact it will soon have on the hiring industry.
3) 84% of businesses are using social media for recruitment, and 9% are planning to use it
According to an SHRM survey, social recruiting is rising because of its efficiency. Many organisations use it to find talent, expand their talent pool, and target job seekers they couldn’t target earlier through traditional recruitment methods.
💡 Related Read: 6 Advantages of Social Media Recruitment
4) 73% of millennial candidates found their last job through a social media platform
Lately, you might’ve seen many candidates flaunting “LinkedIn helped me get this job” on their LinkedIn profiles. It shows the power of social media recruitment.
According to a survey by Aberdeen Group, around 3 in 4 candidates in the millennial age group found their last job from social media sites. It has become an effective social recruiting method, and you can no longer ignore it.
5) 80% of employers feel social recruiting helps them find passive candidates
Many organisations focus on recruiting passive candidates. Such people are difficult to recruit but a goldmine if you hire them. Such job candidates have a higher employability rate and a great chance of becoming successful hires. They may also become great referrals for potential hires in the future.
According to a survey by Betterteam, social recruiting helps businesses find and nurture passive job seekers. They can easily reach out to them on social media, strike up a conversation, forge a relationship, identify if they’re satisfied or dissatisfied at their present job, and make a move if there’s an open position in which they can be a good fit. Employers can also reduce hiring time this way.
6) 70% of hiring managers say they’ve successfully found and hired candidates with social media
There’s enough talk about social media recruitment being an effective hiring strategy. But does it work? According to a Betterteam survey, social media recruitment works efficiently, as 70% of recruiting managers say they’ve successfully hired candidates with social media. It’s great proof to start using social media for hiring, right?
7) 67% of recruiters use social media for candidate research
According to HR Magazine, the average cost of a bad hire is three times the salary paid. But you can save your organisation from it by using social media to research potential job candidates. The kind of posts they put on social media and the comments they leave on others’ profiles can give you a lot of insight into whether or not they would be a great fit.
67% of hiring decision-makers use social media to research candidates, and 71% feel it can effectively screen unqualified candidates.
8) 40 million people look for jobs on LinkedIn weekly
Among all social media recruitment platforms, LinkedIn is the most popular. 40 million people use it for a job search in one week. It’s a big number. If you can reach even 10% of these candidates, you can easily find high-quality candidates you’re looking to fill. Hence, LinkedIn can be a great place to start if you want to use social recruiting to fill positions quickly.
9) 79% of job seekers use social media for company research
Not only do recruiters use social media for researching and screening candidates, but job seekers also use social media to screen job applicants. Only after learning that the company is trustworthy do they apply. Hence, ensure you have an active social presence and a positive brand reputation on social media.
Besides, 82% of employees also consider the employer's brand reputation before they apply for the role, and a negative reputation may prevent many employees from applying for the role.
10) 75% of job seekers inform their career decision through LinkedIn
Recruiters love LinkedIn because candidates love sharing everything related to their careers. Whether open to a new role or looking to climb the corporate ladder, they’re vocal about everything. Hence, LinkedIn can be a great way to monitor potential candidates and approach them at the right time.
11) 56% of recruiters find the best candidates through social media
Another reason social media recruiting is the best bet for you is that more than half the recruiters worldwide feel they can hire the best talent compared to other methods like job boards (only 37% of recruiters feel they can find the right talent through social media) or job advertising.
12) LinkedIn is the most popular choice for social media recruiting
90% of recruiters use LinkedIn regularly to search and recruit talent, while only 55% use Facebook, 47% use Twitter, and 11% use Instagram. Hence, LinkedIn is a great place to start if you’re yet to dive into social media recruiting.
13) Around 46 Million students are active on LinkedIn
Looking to hire entry-level college graduates? LinkedIn is your best bet because most students and recent graduates are active on the platform instead of on job websites.
14) Prices of job boards have gone up by 300%, but the performance is declining
This is an alarming stat for anyone believing they can use job boards to attract and hire candidates instead of building a presence on a social media platform. Job websites cost you a fortune, yet it takes 46 applications to hire one person. As a result, most recruiters favour social media recruitment rather than posting a new job on job boards as they find them more efficient and affordable.
15) 70% of candidates worldwide are passive candidates, while only 30% are actively seeking new jobs
Passive candidates are difficult to target but make up a large percentage of job seekers. By not targeting them, you may miss out on many promising candidates. So, don’t just target active job seekers but also take measures to target candidates passively looking for jobs by forging long-term relationships.
What role can your employees play in social media recruitment?
Word-of-mouth marketing and employee advocacy are promising ways to utilise your employees to attract new talent. Here are some stats indicating how getting your employees involved can play a critical role in making your social media recruiting efforts successful:
1) 65% of job seekers would consider a job opportunity if they heard about it from a personal connection.
According to these findings from a Monster survey, if you can get your team members to spread the positive word about your company that it’s a great place to work, they will show more interest in applying. Hence, empower employees to tweet and blog about the company’s work and office culture, share photos on social media, or engage on the company page.
2) Job seekers rank current employees as the most trusted source for information about a company.
According to the stats from CareerArc, candidates trust no one but the employees of a company to enquire about its culture and work environment. Now, if your current employees are not satisfied with your company culture, chances are they will not give a positive review, and the candidate may not consider the job opening. Hence, you must foster a positive work culture in your organisation.
3) 98% of employees use at least one social media site for personal use, and 50% post about their company.
According to this stat from Weber Shandwick, your employees are avid social media users, and there are chances that they are already talking about your brand on social media. Now, it’s time to track those conversations and take necessary measures if the employee posts and reviews are not in your favour.
4) Employee referrals have the highest applicant-to-hire conversion rate. While only 7% of applicants come via employee referrals, they account for 40% of all new hires.
This stat from Jobvite shows that employee referrals are far superior and more effective than other hiring methods. Employees who talk about you attract high-quality talent, which takes your organisation to a new level.
Also, 47% of candidates hired through employee referrals have greater job satisfaction and stay longer in your company. Hence, you must encourage employees to spread great things about your brand on social media so that they bring great talent.
In short
Social media recruiting is an affordable and highly efficient way of talent acquisition. While most companies still use traditional methods to hire candidates, social recruiting can generate more reach while keeping hiring costs low. Hence, you must adapt social media recruiting to amp your hiring efforts.
Hopefully, the above social recruiting stats gave you enough idea why social media recruiting is still a big deal. Despite this, most organisations lack an effective social recruiting strategy. Only 39% of businesses target specific audiences, and only 39% involve employees sharing branded content.
However, you can stand out and take the necessary measures to hone your social media recruiting strategy. Here are a few tips to make the most out of social media recruiting:
| 2023-03-16T00:00:00 |
https://gohire.io/blog/social-media-recruiting-statistics
|
[
{
"date": "2023/03/16",
"position": 64,
"query": "job automation statistics"
}
] |
|
Will AI Replace Digital Marketing?
|
Will AI Replace Digital Marketing?
|
https://www.wbscodingschool.com
|
[] |
Which marketing jobs will be stolen by AI, will marketing be automated, and ultimately will AI replace digital marketing?
|
Even veteran professionals in the field are nowadays asking themselves, will AI replace digital marketing and steal their jobs? As algorithms learn to interpret data and chatbots become able to write their own content, the question is growing increasingly urgent.
Digital marketing is but one of the many industries in the throes of a revolution sparked by the rise of Artificial Intelligence. Looking at the available career guides, it’s surprising just how few of them choose to factor the question of AI into their recommendations, instead discussing static prospects for jobs as though we were still in the early 2000s.
Is the marketing industry going to shed off thousands of workers in the next few years? Is the outlook good for those who wish to enter – or stay in – the field of marketing?
This article will address all of these questions not by platitudes like ‘AI can do much but it will never replace human creativity’, but by analysing the tangible data and the verifiable reality of the marketing industry in 2023, and putting them in the context of the most recent developments in AI technology.
By the time you have finished this article, you will know exactly what to expect for the marketing industry in the wake of the AI revolution, and most importantly – what prospects it offers to you.
CONTENTS
***
Is Digital Marketing A Good Career?
For those who wish to enter the world of marketing, the year 2023 may be something of a golden age. This statement is not an exaggeration – it’s the natural conclusion that emerges by looking at the hard evidence, in particular the latest CMO Survey.
In spite of the gloomy, inflationary state of the global economy, the marketing industry seems to be experiencing a ‘mini-boom’. The size of its budgets as a percent of overall company budgets reached 13.8%, the highest figure in the history of the survey. The number of marketing jobs has grown by 15.1% in the last year, in an upward trend that is expected to continue throughout 2023. As well, nearly 60% of surveyed marketers report feeling that their job has ‘increased in importance’ since the pandemic.
Adobe Stock / deagreez
A LinkedIn report corroborates this optimism, indicating an almost surreal 374% increase in job postings in the field of marketing last year. That figure should be taken with a grain of salt – it probably says at least as much about trends in the use of LinkedIn as it does about trends in marketing – but as an indicator of growth, it’s pretty unambiguous.
The industry itself is therefore expanding, and looks set to keep expanding – but what opportunities does it offer?
What Sort Of Jobs Can You Do In Digital Marketing?
Marketing is a very wide umbrella, and its teams may sometimes include creative types like designers, writers, and even actors, as well as purely technical roles like web developers and data scientists. For simplicity, we will leave ‘borrowed’ roles outside of this discussion, and focus on those jobs that are related to marketing directly.
While jobs in the marketing industry are growing rapidly, they are changing just as quickly, and the reason is that companies are racing at breakneck speed to keep pace with the digital revolution. Said digital revolution has given rise to an entire new subfield of marketing – what is known as MarTech.
Adobe Stock / Moor Studio
The rise of MarTech is the reason so many new jobs are being created in the field, and also why budget expenditures are so high – they are being pushed up by investment in digital marketing, which according to the CMO survey now takes up 57.9% of all marketing expenditure.
The Top 10 Fastest Growing Marketing Jobs
In a recent article on the most valuable skills in marketing, LinkedIn identified the top 10 fastest growing marketing jobs as the following (notice how many of these are related to digital marketing and new technologies):
Media Coordinator Search Manager Social Media Coordinator Search Engine Marketing Manager Media Manager Marketing Analyst Search Specialist Email Marketing Specialist Search Engine Optimization Analyst Digital Media Manager
The Top 10 Most In-Demand Jobs in Marketing
The report above also provided a list of the top 10 jobs that are most in-demand:
Digital Marketing Specialist Digital Account Executive Social Media Manager Digital Marketing Manager Copywriter Marketing Associate Account Supervisor Marketing Assistant Digital Strategist Marketing Manager
These jobs may look interesting to you… but how many of them will be taken over by AI?
How AI Will Change The Future Of Marketing
Adobe Stock / khunkorn
The impact that Artificial Intelligence will have on our modern industrial and economic systems as a whole is impossible to overstate.
With regards to the marketing industry specifically, AI’s most significant impact on the short to medium term looks set to be in the fields of data processing, content creation, and automation.
Will AI Take Over Data Processing In Marketing?
Data processing means collecting, sorting and to a certain extent interpreting information about your customers and your market. It has become increasingly relevant in the age of the internet as enormous amounts of data have become available, even if the regulations as well as the industry standards regarding said data are changing all the time.
There are technologies that are able to process data efficiently and semi-independently. These include PaveAI, which utilizes Google Analytics to read data, find relevant insights within it, and report them to the user, Wordsmith, which can be fed raw data and will return a written, easy-to-understand narrative, and Adext, which is capable of designing, managing and optimizing online advertisement campaigns.
Data processing is an expansive, important field – enough so that it is possible to build an entire career entirely around Data Science – and the fact that AI can do it so well should not be downplayed. Data processing still needs guidance – someone who knows what they are looking for – meaning jobs in this sector will not be replaced. But they will most certainly be transformed, with Marketing Analysts in particular being in a great position to take advantage of their technical skills to pick up new responsibilities.
Will AI Replace Content Creation In Marketing?
Content creation refers to the creation of (for now) mostly written material, such as emails, newsletters, and blogs. AI applications capable of reliably creating video content for marketing purposes are not very prominent at the moment, although this is probably only a matter of time, as text-to-video generation seems set to be among the big AI trends of 2023.
Adobe Stock / Shafay
The king of this particular jungle is of course ChatGPT, an AI chatbot that has astonished the general public with its ability to instantaneously produce lengthy, functional and relatively complex essays and stories. Launched in November 2022 and just updated to the latest model, it already has content creators everywhere trembling.
In truth though, ChatGPT is only one of a variety of such tools. Other natural language generation (NLG) platforms like Articoolo and the aforementioned Wordsmith offer similar services. More specialized applications like Phrasee can come up with pitches, titles and email subject lines.
No doubt more of these technologies will follow. How exactly the industry will adapt to them is an open question – for example, will search engines be programmed to recognize artificially-generated content, and will they treat it any differently than human content? On that, we will simply have to wait and see.
What Is Automation In Marketing And What Can AI Do?
Automation, the art of getting a machine to perform a task without guidance or oversight, is directly related to both of the above processes, and to a greater or lesser extent is involved in all of the technologies we have already mentioned. There is therefore no need to repeat ourselves.
We should briefly mention chatbots capable of handling interactions with clients, because they are very often associated to the field of marketing. This, however, is a misconception. While this branch of AI will doubtlessly have implications for marketers as well, its primary application is in customer service, which is a different field, and which we will not cover in this article.
Which Marketing Jobs Will Be Stolen By AI?
Adobe Stock / patpitchaya
The question of which marketing positions will go extinct does not have a simple answer. The main problem is that it’s hard to sift jobs that may actually disappear, such as, say, telemarketing, from others that will simply be rebranded.
At a certain point we may well see ads for ‘Digital Marketing Manager’ disappear, but only because digital marketing will become so common and widespread that any ‘Marketing Manager’ will be expected to already incorporate that specialism, and the word ‘digital’ could therefore be dropped from the job title. That is the opposite of extinction!
Articles that come up with a list of ‘endangered job titles’ for this particular industry are therefore speculative at best, and irresponsible at worst. A job title like ‘Marketing Technology Specialist’ may stop existing, but only to be replaced by something like, say, ‘Customer Experience Data Scientist’, which will require many of the same skills.
Adobe Stock / vegefox.com
Indeed, the majority of the innovations brought along by AI appear to overlap with the work of marketers, almost never to directly replace it. Tools for automated advertising or content generation can perform many of the tasks done by marketers, but they always require greater or lesser extents of supervision and direction, particularly due to their notorious inability to think laterally or creatively.
While marketing professionals won’t go extinct, it is inevitable that their work will be transformed. Marketers who are involved with, say, designing an advertising campaign, will have to make use of those AI tools that allow them to optimise their approach, in the same way that all modern designers must now be equipped with the skills to use something like Photoshop.
Does Marketing Pay Well In The Age Of AI?
Marketers in the early phase of their careers are typically not among the highest earners, but their salaries do grow rather rapidly, and jump upwards dramatically once they reach the echelons of management.
Listing every possible job in marketing and its expected salary would be a near-impossible job – the field, as we mentioned, is very broad indeed – and the results would only be confusing. Instead, here is a selection of marketing salary expectations (measured in $1000p/y) for a variety of positions that go from entry-level to management. The source of the data is Salarylist.com, which focuses on the American market, but similar ranges (adapted for local taxation) can be found on European websites such as Glassdoor.de.
copyright WBS CODING SCHOOL
It‘s worth pointing out that the LinkedIn report we quoted in our section on in-demand jobs also looked at which skills recruiters were having most trouble finding. Digital Marketing was first on that list by a distance, and the top five also included Social Media and Data Science.
One reason these skills are so valuable is precisely that they let companies harness the power of AI. Although everyone in marketing may be talking about artificial intelligence and machine learning, very few people are actually using it. Only 8.6% of companies do, according to the CMO figures, although that number is expected to increase to 22.9% by the end of 2025.
This means that any marketer who can use AI effectively as part of their job – and who can instruct the rest of the company on how to use AI tools – will be in a position to negotiate a substantially higher salary than those reported above. (If you’re not sure what this sort of work entails, check out our guide on what it means to work as a marketing analyst).
The future of marketing is digital – it is also bright. As we have seen, the industry is growing and more and more jobs are being created. This is very much the right time to get in – just don’t fumble the entrance.
Adobe Stock / jirsak
Will AI Replace Digital Marketing?
AI is much more likely to increase the number of jobs in marketing than to replace them, as it’s already greatly expanding the technical dimension of digital marketing. As jobs in this field call for more IT skills, qualified digital marketers will be in even greater demand, although more traditional marketers will need to learn new skills.
Take for example Marketing Analytics, which is the subject of our own specialised bootcamp. As a discipline, Marketing Analytics is not itself about AI – but it is interwoven with it, because it involves several technologies which themselves affect and are affected by AI.
As per CMO data, Marketing Analytics went from being used in about a third of all marketing decisions 3 years ago to almost half of them today. Spending on Marketing Analytics as a percentage of total marketing budgets is now at 8.9%, an all-time high which is likely to keep rising.
And yet – do you know how many companies reported full agreement with the statement “I have the right talent in my organization to fully leverage marketing analytics”? A tiny 3.6%.
The talent gap in the field of marketing is huge! Not only is the industry itself growing – it is also opening so many doors to those who have the courage and the initiative to walk past them.
Your skills are the hardest currency you have. Pick up the right ones now, and the world of marketing is yours for the taking.
| 2023-03-16T00:00:00 |
2023/03/16
|
https://www.wbscodingschool.com/blog/is-marketing-a-good-career-in-the-age-of-ai-guide-for-2023/
|
[
{
"date": "2023/03/16",
"position": 84,
"query": "job automation statistics"
}
] |
A Whole New Way of Working
|
A Whole New Way of Working
|
https://www.microsoft.com
|
[
"Nate Boaz",
"Vp Of People Strategy At Microsoft",
"Jared Spataro",
"Cvp",
"Modern Work",
"Business Applications"
] |
This next-generation AI will transform work and augment human capabilities in three ways: It will unleash creativity. It will unlock productivity. And it will ...
|
The big problem at the heart of information overload is relevance. Everyone is inundated with data and information, but only a small sliver of that information contains something that a specific individual needs to know or points to a specific task that person needs to complete. Buried amid a mountain of data is information we can’t afford to miss. With AI, we can unearth what matters in minutes.
To truly focus on the work that matters most, we must first confront information overload—a challenge that only feels more acute in the hybrid era. AI has a powerful part to play here too.
“At its best, work is our expression of how we’re going to shape the world and be shaped by it. But the basic patterns of work today have left our inboxes in charge, not us,” says Jared Spataro, corporate vice president of Modern Work and Business Applications at Microsoft. “AI is going to help us cut through a lot of that and allow us to focus again on the things that matter most.”
This is going to take away the shallow work so that humans can do the deep work that we really crave. ” Nate Boaz VP of People Strategy at Microsoft
Take meetings. Few things in life feel as wasteful as time spent in a meeting you didn’t need to attend. Now, we’ll be able to use AI-powered tools to not just summarize a day’s worth of meetings but highlight and share what’s relevant to a given individual or team. This will help eliminate FOMO and empower people to attend just the meetings that matter, catch up when they’re late, or revisit important points to better address action items. This equals time savings that everyone in an organization can devote to the most impactful work. At scale, this has the potential to drive meaningful productivity gains for every organization.
We’ll also be able to use natural language to distill a week’s worth of emails down to just the salient points—and AI can summarize, remix, and personalize the information in ways that are more useful than ever.
Sumit Chauhan, CVP, Office Product Group, Microsoft: “I think, ultimately, AI will make work more human.”
GitHub research points to the promise of next-generation AI to unlock productivity. In a survey of developers, 88 percent said they were able to get tasks done more quickly using AI-powered GitHub Copilot than without it, and 74 percent said it enabled them to focus on more satisfying work.
“This is going to take away the shallow work so that humans can do the deep work that we really crave,” says Nate Boaz, vice president of people strategy at Microsoft. “This is going to make not only your job better, but you better at your job.”
As these technologies become part of the everyday workflow of individuals and organizations, the world could see a productivity boom on par with the most significant technological disruptions in history.
AI is like... AI is like scissors : one blade is cognition and the other is context. Focusing solely on the technology’s cognitive power, says Microsoft design and AI executive John Maeda, belies the importance of its context—what it knows about the world from the data that goes into it. Only when paired together, he says, are the blades really powerful.
“Bringing the right data to the right place at the right time is something AI excels at,” says Charles Lamanna, corporate vice president of Business Apps and Platform at Microsoft.
“It helps you run your operations more efficiently. It helps you improve your employee experience. It helps you improve the customer experience,” he says. “Those three things define just about every business on earth. And this AI improves all three.”
| 2023-03-16T00:00:00 |
https://www.microsoft.com/en-us/worklab/ai-a-whole-new-way-of-working
|
[
{
"date": "2023/03/16",
"position": 52,
"query": "AI job creation vs elimination"
}
] |
|
The Future of Hiring: How AI is Changing Recruitment
|
The Future of Hiring: How AI is Changing Recruitment
|
https://www.pulserecruitment.com.au
|
[] |
AI-powered tools can eliminate human bias, such as age, gender, and ethnicity, when screening candidates. This ensures that candidates are evaluated solely on ...
|
The recruitment landscape is undergoing a significant transformation, and it’s all thanks to Artificial Intelligence (AI). As companies struggle to find the right talent, AI is making it easier for recruiters to streamline the hiring process and find top-notch candidates. With advancements in technology, AI is now capable of analyzing data, identifying patterns, and making predictions, which makes it an indispensable tool for recruiters. As a result, AI-powered recruitment tools are becoming increasingly popular, and companies that are not leveraging these technologies risk losing out on top talent. In this article, we’ll explore the ways in which AI is changing the recruitment process and how it’s shaping the future of hiring. So, buckle up and get ready to discover the exciting world of AI-powered recruitment!
Benefits of using AI in recruitment
Artificial Intelligence is revolutionizing the recruitment process by providing recruiters with powerful tools to find the right candidate for the right position. Here are some benefits of using AI in recruitment:
Time-saving and cost-effective
One of the most significant advantages of using AI in recruitment is that it saves time and money. Recruiters can automate repetitive tasks, such as resume screening and scheduling interviews, which frees up their time to focus on more important tasks. Additionally, AI-powered recruitment tools can reach a larger pool of candidates, which reduces the time and cost associated with traditional recruitment methods.
Objective and unbiased
Another benefit of using AI in recruitment is that it’s objective and unbiased. AI-powered tools can eliminate human bias, such as age, gender, and ethnicity, when screening candidates. This ensures that candidates are evaluated solely on their skills and qualifications, which improves the diversity and inclusivity of the hiring process.
Accurate and predictive
AI-powered recruitment tools use machine learning algorithms to analyze data and identify patterns. This allows recruiters to make more accurate predictions about a candidate’s job performance and fit within the company culture. Additionally, AI can analyze a candidate’s social media activity to determine their personality traits, interests, and values, which helps recruiters to identify candidates who are the best fit for the job.
Improved candidate experience
AI-powered recruitment tools can enhance the candidate experience by providing real-time feedback and personalized communication. Candidates can receive instant feedback on their application status and schedule interviews at their convenience. This improves the candidate experience and increases the likelihood of attracting top talent.
Streamlined recruitment process
AI-powered recruitment tools can automate the entire recruitment process, from sourcing candidates to onboarding new hires. This streamlines the recruitment process, reduces the time and cost associated with traditional recruitment methods, and ensures that recruiters can focus on more critical tasks.
AI-powered recruitment statistics
AI-powered recruitment is becoming increasingly popular, and it’s not hard to see why. Here are some statistics that highlight the benefits of using AI in recruitment:
AI-powered recruitment tools can reduce the time to hire by up to 90%
AI can screen resumes up to 70% faster than humans
82% of companies using AI in recruitment report a significant improvement in the quality of hires
92% of recruiters believe that AI-powered recruitment tools will become an essential part of the hiring process in the next five years
The global AI recruitment market is expected to reach $1.4 billion by 2027
These statistics show that AI-powered recruitment tools are not just a passing trend but a critical component of the recruitment process.
How AI is transforming the recruitment process
Artificial Intelligence is transforming the recruitment process by providing recruiters with powerful tools to find the right candidate for the right position. Here are some ways in which AI is changing the recruitment process:
Resume screening
AI-powered recruitment tools can screen thousands of resumes in a matter of seconds, which saves recruiters a significant amount of time and effort. These tools can analyze resumes for relevant keywords, experience, and qualifications, and rank candidates based on their fit for the job. This ensures that recruiters can focus on the most qualified candidates, which improves the quality of hire.
Candidate sourcing
AI-powered recruitment tools can use machine learning algorithms to analyze social media profiles, job boards, and other online sources to identify potential candidates. This expands the talent pool and ensures that recruiters can find the best candidates for the job.
Interview scheduling
AI-powered recruitment tools can automate interview scheduling, which saves recruiters time and ensures that candidates are scheduled for the most convenient times. These tools can also send reminders to candidates, which reduces the likelihood of no-shows and ensures that the hiring process runs smoothly.
Candidate assessment
AI-powered recruitment tools can assess candidates’ skills and qualifications using machine learning algorithms. These tools can analyze a candidate’s responses to interview questions, work samples, and other data points to determine their fit for the job. This ensures that recruiters can make more informed hiring decisions and improve the quality of hire.
Onboarding
AI-powered recruitment tools can automate the onboarding process, which saves time and ensures that new hires can get up to speed quickly. These tools can provide new hires with personalised training and resources, which improves their job performance and reduces turnover.
AI-powered recruitment tools and their functions
Artificial Intelligence is powering a wide range of recruitment tools that are transforming the hiring process. Here are some of the most popular AI-powered recruitment tools and their functions:
Applicant Tracking Systems (ATS)
An Applicant Tracking System (ATS) is a software application that allows recruiters to manage the entire recruitment process, from sourcing candidates to onboarding new hires. ATS uses machine learning algorithms to automate the recruitment process, which saves time and ensures that recruiters can focus on more important tasks.
Chatbots
Chatbots are AI-powered tools that can provide candidates with instant feedback and personalised communication. These tools can answer questions, schedule interviews, and provide candidates with updates on their application status. This improves the candidate experience and increases the likelihood of attracting top talent.
Predictive analytics
Predictive analytics is an AI-powered tool that uses machine learning algorithms to analyze data and identify patterns. This allows recruiters to make more accurate predictions about a candidate’s job performance and fit within the company culture. Additionally, predictive analytics can identify candidates who are more likely to accept a job offer, which improves the recruitment process.
Video interviewing
Video interviewing is an AI-powered tool that allows recruiters to interview candidates remotely. These tools use machine learning algorithms to analyze a candidate’s facial expressions, tone of voice, and body language to determine their fit for the job. This ensures that recruiters can make more informed hiring decisions and improve the quality of hire.
Gamification
Gamification is an AI-powered tool that uses game-like elements to assess a candidate’s skills and qualifications. These tools can simulate job tasks and provide candidates with instant feedback on their performance. This ensures that recruiters can make more informed hiring decisions and improve the quality of hire.
Best practices for using AI in recruitment
Artificial Intelligence is a powerful tool that can transform the recruitment process, but it’s important to use it correctly. Here are some best practices for using AI in recruitment:
Start small
It’s important to start small when implementing AI-powered recruitment tools. Begin with a single tool, such as an Applicant Tracking System, and gradually add more tools as you become more familiar with the technology.
Use human oversight
AI-powered recruitment tools are not perfect, and it’s important to use human oversight to ensure that the technology is making accurate decisions. Recruiters should review the results generated by AI-powered tools to ensure that they align with the company’s hiring goals.
Be transparent
It’s important to be transparent with candidates about the use of AI-powered recruitment tools. Candidates should be informed about the data that’s being collected and how it’s being used to make hiring decisions.
Use data responsibly
AI-powered recruitment tools generate a significant amount of data, and it’s important to use this data responsibly. Recruiters should ensure that the data is secure and that it’s being used in compliance with applicable laws and regulations.
Train recruiters
AI-powered recruitment tools require a certain level of expertise to use effectively. Recruiters should be trained on how to use these tools and how to interpret the data generated by them.
Common misconceptions about AI in recruitment
Artificial Intelligence is a relatively new technology, and there are many misconceptions about its use in recruitment. Here are some common misconceptions about AI in recruitment:
AI will replace recruiters
One of the most significant misconceptions about AI in recruitment is that it will replace recruiters. While AI-powered recruitment tools can automate certain tasks, such as resume screening and interview scheduling, recruiters are still needed to make informed hiring decisions.
AI is biased
Another misconception about AI in recruitment is that it’s biased. While AI can be biased, it’s important to note that bias is a result of the data that’s being analyzed, not the technology itself. Recruiters can ensure that AI-powered recruitment tools are unbiased by ensuring that the data being analyzed is diverse and representative.
AI is expensive
Another misconception about AI in recruitment is that it’s expensive. While AI-powered recruitment tools can be costly, they can also save recruiters time and money in the long run. Additionally, there are many affordable AI-powered recruitment tools available on the market.
Case studies of companies successfully using AI in recruitment
Many companies are successfully using AI-powered recruitment tools to streamline the hiring process and find top-notch candidates. Here are some case studies of companies that are using AI in recruitment:
Hilton Worldwide
Hilton Worldwide uses an AI-powered recruitment tool called HireVue to conduct video interviews. This tool uses machine learning algorithms to analyze candidate responses and provide recruiters with real-time feedback. HireVue has reduced Hilton’s time-to-hire by 90% and has improved the quality of hire.
Unilever
Unilever uses an AI-powered recruitment tool called Pymetrics to assess candidates’ skills and qualities. Pymetrics uses game-like tasks to assess a candidate’s cognitive and emotional traits, which has improved the quality of hire and reduced bias in the recruitment process.
L’Oreal
L’Oreal uses an AI-powered recruitment tool called Seedlink to analyze candidate resumes and provide recruiters with a list of the most qualified candidates. Seedlink uses machine learning algorithms to analyze resumes for relevant keywords, which has improved the efficiency of the recruitment process and reduced time to hire.
The future of recruitment with AI
The future of recruitment is exciting, and AI is at the forefront of this transformation. As AI-powered recruitment tools become more advanced, recruiters can expect to see significant improvements in the quality of hire, diversity and inclusivity, and efficiency of the recruitment process. Additionally, AI-powered recruitment tools will become more affordable and accessible, which will enable companies of all sizes to leverage these technologies.
SEEKING INDUSTRY-LEADING TALENT?
| 2023-03-16T00:00:00 |
https://www.pulserecruitment.com.au/the-future-of-hiring-how-ai-is-changing-recruitment/
|
[
{
"date": "2023/03/16",
"position": 62,
"query": "AI job creation vs elimination"
},
{
"date": "2023/03/16",
"position": 5,
"query": "artificial intelligence hiring"
}
] |
|
What are skills gap and why it is important in 2025
|
What are skills gap and why it is important in 2025
|
https://gloat.com
|
[] |
Skill gaps refer to the disparity between the skills an employer expects their employees to have and the actual skills employees possess. Learn more here.
|
While most leaders are aware of today’s competitive talent landscape, there’s less discussion about the underlying demand for skills that’s fueling it. Rather than viewing the challenge as a labor shortage issue, HR teams must see beneath the surface and recognize that it’s skills that are actually in short supply.
Skill shortages are reaching crisis levels, with the World Economic Forum estimating that 50% of the global population needs new skills to meet shifts in demand driven by new technologies. By 2030, this figure may grow to as high as 90%.
As digital innovation continues to accelerate, there’s growing fear that skill building isn’t keeping up and that employees won’t have the competencies needed to take advantage of game-changing technologies. Fortunately, it’s possible to catch up to the speed of digital innovation—but only if leaders start identifying skill gaps and developing strategies to bridge them.
What are skill gaps?
A skill gap is the difference between the skills an employee has and the skills required to perform a job effectively. Organizations identify skill gaps to guide training, hiring, and development efforts. Closing these gaps improves performance, productivity, and competitiveness in the workforce.
Another way to think about skills gaps is that they’re the difference between the skills required for a job and the skills employees actually possess. Without the right skills, employees may not be able to complete crucial tasks. Leaders are becoming more familiar with skill gaps because they’re increasingly common; in 2024, nearly 70% of U.S. business leaders said their company had a critical skills gap.
What causes skill gaps?
There are 3 main factors that fuel most skill gaps:
A lack of technological training
According to Gartner, only 9% of employees have a high digital dexterity. As the pace of technological innovation continues to accelerate, employees must build new skills to keep up.
A new wave of retirement
75 million Baby Boomers are expected to retire by 2030, creating a shortage of labor as well as skills.
Shrinking skills half-lives
Due to constant innovation and acceleration, employees constantly need to update their skills and training. The half-life of skills has been shortened from 5 years to 4. And according to IBM, more technical skills have an even shorter span at just 2.5 years half-life
Why do skill gaps pose such a threat to businesses and employees?
Skill gaps can negatively impact both employees and the companies they work for. From an employer’s perspective, if an organization doesn’t have the skills needed to complete high-priority projects efficiently, its bottom line is going to suffer. Leaders will need to either recruit new talent with the skills they’re looking for from outside of their organization or develop talent internally to ensure their organization can acquire the skills needed to achieve success.
At the same time, employees who don’t learn new skills are at risk of falling behind and eventually losing their value to their employer. Given how fast technology is accelerating—especially with recent AI innovations—employees must constantly learn new skills to stay employable. Dynamically learning new competencies is a skill unto itself and it’s something all workers will need to develop in order to be successful in the new world of work.
How can leaders address widening skill gaps?
While skill gaps pose a huge threat to workplaces and have the potential to wreak havoc on bottom lines, there are a few steps leaders can take to prevent this from happening. The foundation for bridging skill gaps is gaining a full picture of the skills within a workforce so leaders can begin identifying what competencies their people should build to remain competitive.
Businesses need skills taxonomies, which are hierarchical systems of classification that break down and organize capabilities into groups and clusters. They can help employers and employees understand what skills they have and what kinds of knowledge they should learn next.
An internal skill gap analysis is another priority for skill-building initiatives, as it helps leaders identify skill gaps within their workforce. The analysis compares the skills an employee needs to the skills they currently have, capturing both soft and hard skills.
3 best practices to bridge skill gaps
There are a few steps leaders can take to help their organization close existing skill gaps and get ahead of new ones.
#1. Prioritize experiential learning opportunities
While educational courses and training are crucial components of upskilling and reskilling, employees can’t just read or watch something to become knowledgeable about it. Instead, L&D content should be paired with opportunities to put the lessons they’re learning into practice, which is where projects and gigs come into play.
Many companies that are committed to bridging skill gaps are harnessing talent marketplaces to match employees to open part-time and full-time projects that align with their career goals and the skills they’re looking to build.
#2. Take advantage of mentoring
Mentoring is a powerful way for employees to expand their networks while building new skills. It’s a learning experience for both parties involved: the mentor learns how to communicate and train peers more effectively while the mentee learns the skills their mentor helps them build.
The most impactful mentorship initiatives don’t just pair people based on what department they’re in or their seniority level; instead, matches are made based on the knowledge employees have and the skills their colleagues are looking to learn. A talent marketplace can generate suggestions for meaningful mentorship pairings at speed and scale.
#3. Know when to build, buy, and borrow capabilities
Another challenge that HR leaders need to conquer to bridge skill gaps is understanding when it makes sense to hire for new capabilities and when they might have talent who can be borrowed to pitch in on a project or upskilled to meet the demands of a new role. Making strategic decisions about when to build, buy, or borrow talent is challenging, which is why many enterprises are harnessing workforce intelligence, which allows talent leaders to compare internal and external candidates side by side.
To learn more about what it takes to bridge knowledge gaps, check out The ultimate guide to the skills-based organization.
| 2023-03-16T00:00:00 |
2023/03/16
|
https://gloat.com/blog/what-are-skills-gaps/
|
[
{
"date": "2023/03/16",
"position": 15,
"query": "AI skills gap"
}
] |
The Role of Workshops and Webinars in AI Training
|
The Role of Workshops and Webinars in AI Training
|
https://profiletree.com
|
[] |
One of the significant limitations in AI training is the existing skills gap. Numerous organisations feel the pressing need to upskill their workforce to ...
|
The Role of Workshops and Webinars in AI Training Updated on: Updated by:
As artificial intelligence (AI) continues to revolutionise diverse industries, it has become increasingly crucial for professionals to stay abreast of the latest developments and acquire the necessary skills to employ AI effectively. Workshops and webinars have emerged as powerful tools in AI training, catering to the ever-growing need for education in this field. They provide practical, hands-on learning opportunities that are often more interactive and specific than traditional educational environments.
The key to maximising the benefits of AI in any organisation lies in understanding how to implement and leverage this technology to its full potential. Engaging in AI workshops offers immersive experiences where participants can learn about cutting-edge applications and dive into machine learning fundamentals with expert guidance. On the flip side, AI webinars make this learning accessible on a global scale, allowing for a broader discussion on topics like AI’s impact on cybersecurity and the broader technological landscape, AI curriculum design, and its role in professional development.
Evolution of AI Training Methods
In the ever-evolving realm of artificial intelligence, the methodologies behind AI training are crucial to unlocking its potential. We’ll explore the transition from traditional methods to cutting-edge approaches and the significant role technology plays in enhancing learning experiences.
Traditional Versus Modern Approaches
Artificial intelligence training has undergone a significant transformation from its inception. Traditional approaches were often rule-based, relying on pre-determined instructions for machines to execute specific tasks. These methods, while foundational, were limited in scalability and complexity. Modern approaches harness generative AI, a breakthrough that allows AI to produce content and solutions on its own, learning from large sets of data. Workshops and webinars have adapted to this shift, focusing on imparting skills necessary to train and manage generative AI systems.
Interactive workshops, for instance, have become essential in addressing common skill gaps. These hands-on sessions are an effective means for individuals to develop their understanding of AI tools and applications. By engaging directly with technology, participants receive practical experience that is beneficial for both personal and professional development.
Impact of Technology on Learning
Technological advancements have dramatically influenced how we learn and train within the AI landscape. The integration of AI and technology in learning environments has not only personalised training experiences but also made them more accessible.
Online platforms facilitate a diverse range of training formats, from webinars to on-demand courses. ProfileTree’s Digital Strategist, Stephen McClelland, highlights, “Technology’s role extends beyond merely a medium for delivering content; it provides a pathway to cater training to individual learning styles, fostering a more efficient acquisition of skills.”
We see a shift towards a blended learning model, where AI complements human instruction to create a more robust, flexible training environment. Innovations like adaptive learning platforms, which adjust content based on learner performance, exemplify the intersection of AI and pedagogy.
Through the use of technology, workshops and webinars can now offer personalised feedback loops and analytics, giving participants a clearer understanding of their progress and areas for improvement. This practical application of technology ensures that individuals are not only consumers of knowledge but active participants in their learning journey.
Understanding Workshops and Webinars in AI
When discussing AI training, workshops stand out as vital because they cultivate essential skills and knowledge through personalised and interactive learning experiences.
Characteristics of Effective Workshops
Effective AI workshops possess certain characteristics that ensure participants leave with a deeper understanding and practical know-how. Firstly, they are interactive, allowing attendees to actively engage with the material. Hands-on exercises and real-world problem-solving scenarios enhance retention and foster practical skills. Secondly, they provide personalised learning opportunities. Tailored content that caters to individual skill levels and learning paces makes the experience more productive for each participant.
Designing AI Workshops for Engagement
Designing AI workshops that captivate and educate requires a strategic approach. Utilising diverse formats such as group activities, peer discussions, and one-to-one mentoring can create a dynamic and inclusive learning environment. Engagement is key; hence, facilitating an atmosphere where questions are encouraged, and experimentation is the norm can significantly increase the effectiveness of a workshop. Employing varied presentation methods, such as case studies, visual aids, and interactive tools, ensures that the workshops remain stimulating and cognitively engaging.
Insights Into AI Webinars
Webinars have emerged as a powerful medium for AI training, providing scalable and convenient learning opportunities that are revolutionising professional education.
Advantages of Webinars for AI Training
Webinars offer flexibility and convenience to participants and organisers alike. They eliminate the need for travel, making it easier for individuals from around the globe to join and interact in real-time or catch up later with recordings. From the perspective of scalability, webinars allow us to train a large number of participants simultaneously, which is especially beneficial given the high demand for AI expertise across various industries.
The interactive nature of webinars also greatly enhances learning. Participants can ask questions and receive immediate clarification, fostering a collaborative learning environment. When exploring the applications of AI, webinars can provide immediate demonstrations, making the theoretical knowledge more tangible and easier to grasp.
Organising Successful AI Webinars
To organise a successful AI webinar, the focus must be on delivering actionable insights that attendees can apply to their own contexts. The content should be tailored to address specific applications of AI within various industries, making the knowledge provided both relevant and immediately applicable.
Here’s a brief checklist for organising an AI webinar:
Identify the target audience and tailor the content to their needs. Ensure the webinar platform is user-friendly and accessible. Highlight practical applications and real-world scenarios of AI. Incorporate interactive elements like Q&A sessions to engage attendees. Provide clear, step-by-step guides on implementing learned AI strategies.
It’s also important to note that the success of a webinar can be enhanced by promoting it effectively—leveraging SEO and content marketing strategies to reach a wider audience. Utilising these techniques ensures that your webinar is visible to those who will benefit the most from the knowledge being shared.
When organising such events, remember to be clear on the outcomes. As ProfileTree’s Founder, Ciaran Connolly, puts it, “Every AI webinar should empower its attendees with new skills or insights, directly contributing to their professional development.”
Curriculum Design for Workshops and Webinars
Crafting an AI curriculum entails meticulously structuring knowledge and practical skills to bridge the gap between theory and real-world application.
Incorporating Practical AI Use Cases
It’s imperative for us to intertwine theoretical learning with practical AI use cases. When we design a curriculum, we frame each module to include hands-on experience that resonates with scenarios employees might face in the workplace. By doing so, we ensure knowledge is not only received but is also retained and applicable. Knowledge of AI fundamentals is crucial, but the magic happens when we translate that into practical skills.
Here are key components to consider:
In-depth exploration of AI algorithms and their applications
Real-world project simulations to solidify conceptual understanding
Analysis of case studies exemplifying AI’s impact across industries
Moreover, we encourage critical thinking and innovative problem-solving among learners by evaluating AI-driven strategies.
The Role of Educators in AI Training
Educators are the linchpins in AI training. Their role extends beyond mere dissemination of knowledge; they create environments conducive to learning, curiosity, and experimentation. We consider the following elements when we prepare educators to lead AI initiatives:
Educator preparation : Our trainers are experts who have a profound understanding of AI and can transfer knowledge effectively within the classroom setting.
: Our trainers are experts who have a profound understanding of AI and can transfer effectively within the classroom setting. Continuous training: The AI landscape is ever-evolving. Therefore, we ensure our educators are up-to-date with the latest advancements and teaching methodologies.
Let’s consider the words of Ciaran Connolly, ProfileTree Founder: “An educator skilled in AI doesn’t just teach; they inspire learners to think deeply about how AI can be ethically and effectively integrated into our daily lives and professional spheres.”
In summary, a meticulously crafted AI curriculum paired with skilled educators can equip individuals with the necessary knowledge and skills to thrive in an AI-driven future. Our focus on this synergy ensures we deliver an educational experience that’s both enriching and practical.
AI Training for Professional Development
Artificial Intelligence (AI) has transformed the landscape of professional development, equipping individuals with the skills needed to thrive in today’s digital world. These advancements particularly resonate in the realm of upskilling employees and enhancing teacher professional development, where AI tools not only foster skill acquisition but also bolster confidence and encourage continuous enhancements.
Upskilling Employees with AI Training
By integrating AI technology in upskilling initiatives, we are witnessing a paradigm shift in how employees develop new competencies. AI-powered platforms offer personalised learning experiences tailored to specific skill gaps, ensuring that training is both efficient and effective. Tools powered by AI, such as virtual labs and simulated environments, allow employees to practise new skills in a risk-free setting, solidifying their knowledge through real-world application.
Teacher Professional Development and AI
In the sphere of teacher professional development, AI serves as a dynamic tool for educators to deepen their understanding of AI applications within the educational context. Teachers are leveraging AI to enhance the classroom experience with tools designed to provide insights into student performance and engagement. Further, educators are engaging in AI-centric webinars and workshops, such as Understanding AI for Educators, to develop strategies for navigating technological innovation and disruption in the educational sector.
To reflect on the profound impact of AI within professional development, we can consider the insights of Ciaran Connolly, ProfileTree Founder, who states, “The use of AI in professional development is not just about technological proficiency. It’s about cultivating a mindset that embraces lifelong learning and adapts to emerging trends with agility and confidence.”
Our focus on AI training in professional development underscores the value we place on being at the forefront of educational innovation, ensuring that both employees and educators are prepared for future advancements.
Machine Learning Fundamentals
In this section, we’ll be exploring the foundational elements of machine learning, focusing on the distinction between supervised and unsupervised learning and how machine learning is practically applied in technology and business settings.
Supervised vs Unsupervised Learning
Supervised learning involves training a model on a labelled dataset, which means that each training example is paired with an output label. This type of learning allows the model to make predictions or decisions, having been trained on a dataset that includes the correct answers. A classic supervised learning task is email spam filtering, where the model is trained to identify ‘spam’ or ‘not spam’ based on a variety of features found in the email.
In contrast, unsupervised learning is used with data that does not have any labels. The system tries to learn the patterns and the structure from the data without any external guidance. An example of unsupervised learning is customer segmentation in the marketing analytics realm, where the goal is to discover hidden patterns or groupings without pre-existing labels.
Machine Learning in Practice
When implementing machine learning in practice, the process begins with an experiment. This involves selecting algorithms, preparing data, and defining metrics for success. It’s similar to a scientific experiment where hypotheses are tested against data.
In technology and analytics, machine learning models are tuned and validated through a series of trials to optimise performance for practical applications, such as predictive maintenance in manufacturing or personalisation algorithms in online retail.
Our approach at ProfileTree is data-driven and iterative; we conduct experiments and analyse the outcomes, ensuring that the technology we employ is not only state-of-the-art but also the most suitable for the task at hand. As said by Ciaran Connolly, ProfileTree Founder, “In the field of AI, it’s not about using the most complex algorithm, but rather the right tool for the right problem.”
By implementing these machine learning fundamentals, organisations can harness the power of data to improve decision-making and gain valuable insights. We encourage the use of analytics to monitor the success of machine learning integration, ensuring a technological advantage in a data-driven world.
The Impact of AI Training on Cybersecurity
In the digital era, AI solutions are transforming how we tackle cybersecurity challenges, offering sophisticated tools for defence while raising critical ethical questions.
Artificial Intelligence (AI) has become an integral part of the cybersecurity toolkit. By leveraging machine learning algorithms, AI solutions can analyse vast datasets to detect patterns indicative of cyber threats, sometimes even before they occur. For instance, intrusion detection systems have evolved to predict and counteract breaches with greater accuracy, facilitating a dynamic and proactive cyber defence strategy.
Key benefits of these AI tools include their ability to:
Automatically identify vulnerabilities in a network, reducing the time it takes to address potential points of entry for cyber attackers. Enhance threat intelligence by sifting through global security reports and logs, extracting actionable insights that empower cybersecurity professionals to respond swiftly and effectively.
ProfileTree’s Digital Strategist, Stephen McClelland, asserts that “AI-driven solutions are revolutionising cybersecurity, with advanced pattern recognition allowing us to anticipate threats and bolster our defences with unprecedented precision.”
Ethical Considerations in AI for Security
While AI presents substantial benefits for cybersecurity, it is imperative to address the ethical implications that arise from its use. Deploying AI for security purposes must be guided by principles of responsible AI, which include transparency, accountability, and fairness. These principles help ensure that AI tools do not inadvertently infringe on individual privacy or become subject to biases, which could undermine the effectiveness and credibility of cybersecurity practices.
Crucial considerations for ethical AI in cybersecurity involve:
Maintaining strict data governance protocols to protect personal information from misuse.
Ensuring AI algorithms are free from bias by rigorously testing and refining these tools.
Our collective responsibility is to foster the development and utilisation of AI in a manner that not only strengthens cybersecurity but also upholds ethical standards. This balance is paramount in maintaining trust and reliability in our digital defences.
Implementing AI in Organisations
Effective AI implementation in organisations hinges upon building trust in AI systems and establishing robust governance with clear guidelines. These steps ensure the responsible deployment of AI while maximising its benefits.
Building Trust in AI Systems
For organisations to fully embrace AI, it’s crucial that both employees and stakeholders have confidence in the technology. We focus on transparency and explainability as cornerstones to foster trust. For example, we might implement ‘black box’ insights, where our AI systems articulate their decision-making process, making it easier for users to understand and trust the outputs.
Governance and AI Implementation Guidelines
Proper governance is key to the successful roll-out of AI technologies. We develop tailored AI implementation guidelines that reflect our commitment to ethical and responsible AI use. This includes regular reviews of AI impact on our operations and adherence to data protection laws, ensuring that our adoption of AI aligns with our organisational values and strategies.
Exploring the AI Technology Landscape
In today’s fast-paced digital world, understanding the ever-evolving AI technology landscape is vital for Small and Medium-sized Enterprises (SMEs). Let’s unpack the core components and opportunities within this dynamic field.
Natural Language Processing (NLP) is changing the way businesses interact with customers and analyse data. By harnessing NLP, machines can understand and respond to human language with increasing sophistication, opening up vast new possibilities for service automation and customer insights.
The AI landscape also includes machine learning, deep learning, and intelligent system integration. Navigating this landscape effectively means identifying tools that not only solve current problems but are also scalable for future challenges.
Exploring these technologies requires a look at the practical applications that can transform operations, such as optimising business processes or enhancing decision-making with predictive analytics.
Considering the possibilities AI offers, we see opportunities in various sectors, from healthcare to finance. For example, AI can tailor educational experiences to individual learning styles or streamline supply chains to improve efficiency and reduce costs.
Let us share insights from “ProfileTree’s Digital Strategist – Stephen McClelland”:
“The palette of AI technologies at our disposal is more vibrant than ever, with each innovation offering a unique shade to colour our digital strategies. SMEs investing in AI now are laying a canvas for future business masterpieces.”
Incorporating AI into our workflow isn’t just about keeping up; it’s about staying ahead. We recommend beginning with pilot projects to measure AI’s impact and then gradually expanding its role across the business. Here are key steps to start your AI journey:
Identify areas where AI can streamline operations. Select AI tools that align with your business goals. Implement AI solutions in phases to measure effectiveness. Train your team on AI applications and tools. Analyse results and adjust your strategy as necessary.
This exploration is not a one-off task but a continuous learning curve. By staying well-informed and agile, we can leverage AI to its fullest potential, ensuring that our businesses are not just surviving but thriving in the modern digital economy.
Challenges and Limitations of AI Training
While AI training workshops and webinars effectively impart knowledge and practical skills, they are not without challenges. Our efforts in closing the skills gap and overcoming technical limitations continue to progress.
Addressing the Skills Gap
One of the significant limitations in AI training is the existing skills gap. Numerous organisations feel the pressing need to upskill their workforce to harness the benefits of AI fully. Analytics often reveal a discrepancy between the skills available within the team and those required to implement and maintain AI systems effectively. To combat this, we must design training programs that not only convey theoretical knowledge but also foster practical, hands-on experience.
Overcoming Technical Limitations
Technical barriers also pose challenges to AI training. From inadequate infrastructure to evolving algorithms, the technical landscape of AI is in constant flux, making it harder for businesses to keep up. It’s crucial to have the latest analytics tools and platforms for training, but accessibility and cost can be prohibitive for smaller enterprises. To mitigate these issues, we recommend leveraging webinars and workshops with scalable and flexible technology solutions that allow for a greater reach and ongoing support post-training.
Specific Insight from ProfileTree
According to ProfileTree’s Digital Strategist – Stephen McClelland, “Faced with the twofold challenge of addressing the skills gap and overcoming technical limitations, SMEs must adopt a strategic approach to AI training. This approach should incorporate in-depth analysis, customised content, and practical exercises that translate into actionable insights for the business.”
Frequently Asked Questions
In the realm of AI training, workshops and webinars play a pivotal role in not only imparting knowledge but also in practical application. They serve as platforms for interactive learning and offer opportunities for professionals to stay abreast of rapid technological advancements.
| 2024-05-13T00:00:00 |
2024/05/13
|
https://profiletree.com/the-role-of-workshops-and-webinars-in-ai-training/
|
[
{
"date": "2023/03/16",
"position": 87,
"query": "AI skills gap"
}
] |
EU's new approach to AI
|
EU's new approach to AI
|
https://www.deloitte.com
|
[] |
To ensure a trustworthy development of AI, EU has proposed a legal framework regulating the use of AI ... Employment, workers management and access to self- ...
|
Three years after the General Data Protection Regulation (GDPR) entered into force the European Commission published the world’s first proposal for a legal framework regulating specific uses of AI. The AI Act introduces a risk-based approach to AI systems with life cycle requirements to ensure the development of trustworthy AI. As with other known regulations, the EU relies on hefty fines to ensure compliance. Fines for violation of the AI Act can as such amount up to 6% of global turnover, or up to 30 million euros, which is an even higher level of fines than those introduced with the GDPR.
But what is it exactly that the EU is attempting to do with this new legal framework and what can companies and organisations do to prepare?
Background
On April 21st, 2021, the EU AI Act proposal was published as a part of the strategy “Shaping Europe’s digital future” The Commissions goal with the AI Act is for the EU to be leading regarding international regulation and in driving innovation while protecting fundamental rights of EU citizens in the age of AI.
The AI Act builds on the Ethics Guideline for Trustworthy AI published by the independent expert High-Level Expert Group on Artificial Intelligence that was set up by the European Commission in June 2018. The Guidelines put forward the following 7 key requirements that AI systems should meet in order to be deemed trustworthy:
Human agency and oversight
Technical Robustness and safety
Privacy and data governance
Transparency
Diversity, non-discrimination and fairness
Societal and environmental well-being
Accountability
These are fundamental principles of lawfulness, ethics, and robustness put forth with the purpose of creating AI systems free of bias that can exist and work within the boundaries of ethical standards.
Risk based approach
The proposal uses a risk-based approach to differentiate between four types of AI systems, based on their potential risks for fundamental rights and safety for EU citizens. The four levels of risks are:
Unacceptable risk
High risk
Limited risk
Minimal or no risk.
Unacceptable risk
All AI systems that can be considered a clear threat to the safety, livelihoods and rights of people will be prohibited within the EU. This includes:
Real-time remote biometric identification in publicly accessible spaces for law enforcement purposes (with some exceptions). An example would be the use of facial recognition software with surveillance cameras monitoring public spaces.
Manipulation of human behavior opinions and decisions (e.g., Toys using voice assistance that encourages dangerous behavior)
Classification of people based on their social behavior (social scoring)
The narrow exemptions for real-time remote biometric identifications are strictly defined and regulated. They are permitted only when necessary to search for a missing child, to prevent a specific and imminent terrorist threat or to detect, locate, identify, or prosecute a perpetrator or suspect of a serious criminal offence. And only permitted by authorisation of a judicial or other independent body, to appropriate limits in time, geographic reach and the data bases searched.
High-risk
AI systems in the high-risk category are the main focus of the AI Act. The high-risk systems are permitted but subject to strict obligations before being put to market. Included in the high-risk category are AI systems that are used as a safety component of a product, or is itself a product, covered by the Union harmonisation legislation listed in Annex II and require a third-party conformity assessment.
When high-risk systems are used in the specific areas below, they are deemed high-risk:
Biometric identification and categorization of natural persons
Management and operation of critical infrastructure
Education and vocational training
Employment, workers management and access to self-employment
Access to and enjoyment of essential private services and public services and benefits
Law enforcement
Migration, asylum, and border control management
Administration of justice and democratic processes
Limited risk
AI systems that fall under the limited risk category are permitted but with the requirements of transparency. Transparency requirements apply to AI systems that interact with humans (e.g., Chatbots), are used to detect emotions or determine categories based on biometric data or are used for creating manipulated content (e.g. deepfakes where AI software can be used to manipulate a video by adding a celebrity’s or politicians face to someone else’s body).
Minimal or no risk
AI systems that do not fall in one of the other categories mentioned above are free to be used without requirements. However, the AI Act mentions the possibility to adopt a code of conduct to follow suitable requirements and to ensure that these AI systems are indeed trustworthy.
Who does the AI Act apply to?
The AI Act applies to providers who develop an AI system or have an AI system developed with the intent of placing the AI system on the market or putting AI systems into service in the EU under the providers own name or trademark. This applies irrespective of whether those providers are established within the EU or in a third country. In addition, it applies to users of AI systems located within the EU; and providers and users of AI systems that are located in a third country, where the output produced by the system is used in the EU.
Importers of AI systems from outside the EU who places the AI system on the market or puts the AI system into service within the EU or distributers who makes an AI system available on the European market are also subject to requirements in the AI Act.
Cradle to grave requirements for high-risk AI systems
The AI Act sets forth requirements to ensure that AI systems are trustworthy throughout their lifecycle. Providers of high-risk AI systems are subject to the biggest part the requirements in the AI Act, including the requirements listed below. Providers are also responsible for ensuring the conformity assessment and CE marking of their AI systems.
User of high-risk systems are to a lesser degree subject to requirements, however the user must among other things still ensure that the high-risk AI systems are being used in accordance with instructions and must continuously monitor the AI systems activity.
Risk Management System
A provider is required to establish, implement, document, and maintain a risk management system to identify risks associated with the high-risk AI system and to adopt suitable risk management measure. The risk management system must be a continuous iterative process running throughout the entire lifecycle of a high-risk AI system and must be systematically updated.
Data and Data Governance
When training models for high-risk AI systems, providers must ensure that the dataset used is of a sufficiently high quality. Hence, training, validation and testing datasets must be subject to appropriate data governance and management practices and the datasets must be relevant, representative, free of errors and complete.
Technical Documentation
Technical documentation to demonstrate that the high-risk AI system complies with the requirements in the AI Act must be drawn up before placing the system on the market or put into service and must be continuously updated.
Record Keeping
High-risk AI systems must be designed with automatic record keeping of events (‘logs’). The logging must be in accordance with recognised standards or common specifications and monitor occurrence of situations that might result in the AI system constituting a risk.
Transparency & Information
High-risk AI systems must be designed and developed in such a way to ensure that their operation is sufficiently transparent to enable users to interpret the system’s output and use it appropriately. Instructions and information for users about the AI system must be concise, complete, and correct and clear.
Human oversight
Design and development of high-risk AI systems must be with integration interface tools to ensure that human oversight is possible while the AI system is in use. The aim is to prevent or minimise the potential risks to health, safety or fundamental rights that may emerge when an AI system is in use. In a worst-case scenario, a human must be able to intervene on the operation or stop the AI system.
Accuracy, Robustness and Cybersecurity
Maintaining an appropriate level of accuracy, robustness and cybersecurity throughout the lifecycle is a requirement for high-risk AI systems.
When will the new rules apply and how to begin preparing for the Act?
The proposal for the AI act published by the European Commission is currently being discussed in the Council and the European Parliament before the final text is to be set and adopted. The expectation is that this will happen during 2023. When the final text is adopted, there will be a two-year implementation period before the regulation enters into force, like we know it from when the GDPR entered into force. Even though two years might seem like a long time to implement the new requirements two years goes by quickly when preparing an organization for new legislative requirements, so Deloitte recommends starting sooner rather than later.
In order to ensure future compliance, eliminate risk and send a clear and trustworthy message into the market, AI providers should start preparing for the AI Act with the following tasks:
Re-examine risk management framework to identify gaps against the regulatory requirements in the AI act and update accordingly. The risk management framework should cover the areas below.
1. Governance
2. Data quality
3. Development and testing
4. Evaluation and deployment
5. Continuous monitoring Identify the risk level and categorise the AI systems accordingly Perform risk assessments (including DPIA’s) Implement necessary control measures Monitor and report
AI Liability Directive
On September 28th 2022 the European Commission published a proposal for the AI Liability Directive. The AI Act and the AI Liability Directive supplement each other, they apply at different moments and reinforce each other. While the AI Act aims at preventing damage, the AI Liability Directive lays down a safety-net for compensation in the event of damage. It is noteworthy that the Directive will apply to damage caused by AI systems, irrespective if they are high-risk or not according to AI Act.
More information?
For more information about EU's AI Act or Privacy, please contact Malene Fagerberg or Daniel Tvangsø via the contact details below.
| 2023-03-16T00:00:00 |
https://www.deloitte.com/dk/en/services/risk-advisory/perspectives/eu-s-new-approach-to-ai.html
|
[
{
"date": "2023/03/16",
"position": 99,
"query": "AI regulation employment"
}
] |
|
Artificial Intelligence in China: Implications and Opportunities
|
Artificial Intelligence in China: Implications and Opportunities
|
https://www.globalxetfs.com
|
[] |
... workforce, supportive government policies, and advanced manufacturing base. What Is Generative Artificial Intelligence? Generative AI refers to artificial ...
|
Since late 2022, when OpenAI’s ChatGPT exploded onto the scene, the generative artificial intelligence (AI) platform has experienced exponential adoption in a way that is extremely rare for new technologies. ChatGPT was estimated to have reached 100 million users in just two months.1 It took Netflix 10 years to reach 100 million users; six and half years for Google Translate; roughly two and a half years for Instagram; and about nine months for TikTok.2,3 Given this backdrop, it is not surprising that expectations regarding generative AI’s potential to create lucrative new business models and nurture new technology companies have surged.
Key Takeaways
The rapid adoption of generative AI platforms like ChatGPT has raised investors’ expectations about the technology’s potential to create lucrative new business models.
US-based tech companies have largely dominated the conversation around generative AI thus far, but that could change as Chinese heavyweights like Baidu enter the fray.
China appears well positioned to benefit from the AI trend, due to the country’s massive amounts of data, skilled workforce, supportive government policies, and advanced manufacturing base.
What Is Generative Artificial Intelligence?
Generative AI refers to artificial intelligence systems that are designed to create new and original content based on the data they are trained on. This can include generating text, images, music, computer code, and even 3D models. Unlike discriminative AI, which is used to classify and categorize data, generative AI creates new data by using probabilistic models to produce outputs based on patterns it has learned from the input data.
China Has a Strong Foundation Based on Research
In the past decade, China has built a solid foundation to support its AI economy and made significant contributions to AI globally. In 2021, China produced the largest share of the world’s AI conference publications at 27.6%, versus 16.9% produced by the US.4 AI research often leads to real-world applications, which have grown rapidly in China. Today, AI adoption has ramped up in many industries beyond tech to include finance, retail, government, and telecommunications. The Chinese government has already released supportive policy guidelines to foster investments in AI-related fields.5
Most of the AI applications that have been widely adopted in China to date have been in consumer-facing industries like internet, propelled by the world’s largest internet user base. Internet platforms have seen phenomenal growth over the past decade, underpinning a strong investment cycle for cloud computing-related infrastructure. This has set a good foundation in terms of the computing power, data, and training models required for AI development.
China has a competitive advantage in its huge engineering talent pool. Over the next five years, 50 million graduates are set to enter the workforce in China, more than the US, Germany, Japan, Korea, and Southeast Asian countries combined.6 The share of the population with a university degree exceeded 15% in 2020, up more than 65% from 10 years ago, with many of those graduates in science, technology, engineering, and math (STEM) fields.7 A large pool of engineers, especially software engineers, could continue to support strong growth in AI development.
Numerous Companies and Industries Appear Poised to Benefit
Baidu, which runs China’s most popular internet search engine, is positioned to be one of the leaders in generative AI in China. Baidu has had an early start on AI technology, particularly on natural language processing (NLP), given the close synergy to its core search business. Baidu’s own generative AI product, ERNIE Bot, is expected to complete beta testing and be open to the public in March, 2023.8
Besides Baidu, other technology companies like ByteDance, Alibaba, and Tencent also have the potential, and have shown ambition, to develop generative AI models, given the sheer amount of data these companies possess. ByteDance has always utilized algorithms to analyze user preferences in its popular TikTok app. Therefore, content generation using AI technology, whether for text, images, or videos, could blend in naturally with its existing business. Additionally, the development of generative AI could help improve the overall monetization/pricing opportunities for Baidu, Alibaba, and Tencent’s cloud services.
China’s semiconductor industries could benefit from the infrastructure investment required for generative AI. The development of advanced AI models and the ensuing blossom of new products will likely require significant investments in computing power to support operations. This would probably benefit China’s (and Asia’s) high-performance computing-related semiconductor supply chains. Specifically, an increase in demand for AI accelerators, such as graphics processing units (GPUs) and application-specific integrated circuits (ASICs), could benefit the foundry industry, as most AI accelerators are produced via fabless business models, which rely on foundries to make their chips. Asian foundries have over 90% market share globally.9 Design service and integrated circuit companies also stand to benefit. There are a number of companies in Asia that provide design services for ASIC chips in AI accelerators. Taiwan-based MediaTek, for example, has an ASIC division focusing on high-performance computing. Additionally, AI and machine learning (ML) applications will likely demand better memory performance and more bits, while equipment makers could benefit from strength in semiconductor demand driven by increasing AI-related workloads.
Conclusion
Recently, US-based tech companies, such as OpenAI, Microsoft, and Alphabet, have dominated the AI conversation. However, that may well change, given developments from the likes of Baidu, as well as China’s strong foundation for continued AI development. Massive amounts of data, a skilled workforce, government support, and an advanced manufacturing base all augur well for the long-term development of China’s AI capabilities.
Related ETFs
KEJI – Global X China Innovation ETF
AIQ - Global X Artificial Intelligence & Technology ETF
CHIK - Global X MSCI China Information Technology ETF
Click the fund name above to view current holdings. Holdings are subject to change. Current and future holdings are subject to risk.
| 2023-03-16T00:00:00 |
https://www.globalxetfs.com/articles/artificial-intelligence-in-china-implications-and-opportunities/
|
[
{
"date": "2023/03/16",
"position": 45,
"query": "government AI workforce policy"
}
] |
|
Regulation
|
Credo AI -
|
https://www.credo.ai
|
[] |
Regulations can take many forms, such as industry standards, codes of conduct, licensing requirements, and AI policies. Regulation can also involve oversight ...
|
Regulation refers to a set of rules, guidelines, or laws that are established by a governing body to govern behavior in a specific industry or sector. Regulations aim to shape an actor toward a particular aim, such as protecting the public interest, ensuring fair competition, and promoting safety, security, and ethical standards.
Regulations can take many forms, such as industry standards, codes of conduct, licensing requirements, and AI policies. Regulation can also involve oversight and enforcement mechanisms, such as inspections, audits, and penalties for non-compliance.
Typically, regulations are developed and enforced by government agencies or regulatory bodies, although they can also be self-regulatory or industry-led. The process of developing regulation usually involves consultation and engagement with key stakeholders to ensure that the regulation is effective, efficient, proportionate to, and appropriate for the intended purpose.
| 2023-03-16T00:00:00 |
https://www.credo.ai/glossary/regulation
|
[
{
"date": "2023/03/16",
"position": 51,
"query": "government AI workforce policy"
}
] |
|
Why reskilling is crucial for government employees
|
Why reskilling is crucial for government employees
|
https://www.totara.com
|
[] |
... policy initiatives, how do learning professionals strike this delicate ... With both an ageing workforce and pressures from technology, it is vital ...
|
The rapid rate of change in politics around the world means that it has never been more important for civil servants to ensure that the skills they need to perform their roles effectively are up-to-date and up-to-scratch.
But is the way government employees learn as good as it could be? And if not, is it time for governments to rethink their approach to learning and development to ensure that learning opportunities keep up with those of the private sector?
With local and central governments under the microscope to manage public spending efficiently while also under pressure to optimise outcomes from their policy initiatives, how do learning professionals strike this delicate balance and ensure that government employees get all the support they need to perform their roles to the best of their ability?
The situation governments today face
The issues governments worldwide face aren’t only political – they are also rooted in the composition of their own organisations. For instance, one in four workers in the US will be aged 55 or older by 2024 – up from just one in ten in 1994. This is coupled with the fact that some jobs are disappearing due to improving technology and automation – in fact, 38% of organisations expect to eliminate certain jobs due to automation in the next few years – but many more are being transformed.
With both an ageing workforce and pressures from technology, it is vital that employees in the notoriously slow-moving, bureaucratic public sector have the power to keep up in a rapidly changing world.
Overcoming bureaucracy to show the value of workplace training
Bureaucracy and red tape will always be an issue for government employees – there are strict protocols for the way things happen, whether that’s the structure of training programmes, securing budgets for skill-enhancing courses or simply getting approval from managers way up in the complex hierarchical structure to take the time to train.
Overcoming the bureaucracy of working in government is no mean feat, but it is important to make the powers that be understand the true value of workplace training and the potential consequences for failing to keep up.
For instance, in the US, the Federal Cybersecurity Reskilling Academy offers federal employees the opportunity to undertake hands-on training in cybersecurity. This forms part of a commitment to developing a 21st-century workforce, demonstrating that the US Administration has identified a need for cybersecurity skills in the future.
It’s not an easy task to convince a government to change the way they do things, but helping them see the benefit of updating their training provisions is the first step to enacting change.
How to reskill government employees quickly and on a budget
No government has a bottomless pit of money for training its employees, and with governments needing to remain accountable for their spending, it’s important that they can deliver effective training programmes on a relatively limited budget.
So what’s the answer?
Fortunately, many of the skills that could help government employees are not specific to government organisations and roles. For instance, bias and discrimination training, leadership development and cybersecurity training, much like the US programme mentioned above, is already widely available.
Totara content partner GO1 provides a catalogue of learning content across sectors, with content tailored to the government sector to provide new training opportunities at great value for money.
The fact that the content already exists means that it can be imported into a government organisation’s learning platform almost instantly, giving employees the ability to upskill and reskill at a much lower cost than creating the training from scratch.
Choosing the right platform for government training
Training for rapid upskilling and reskilling should be quick and easy to implement. This means finding a flexible, cost-effective and scalable learning platform that will grow and mould around any government organisation’s evolving needs.
For instance, it shouldn’t take months to expand the scope of the platform to include another department, or to upload new courses if a new priority arises.
Totara Learn is used by governments worldwide to manage, deliver and track learning. Features such as learning plans help keep government employees moving towards their learning goals, as the US Department of Agriculture discovered when they implemented Totara Learn with GP Strategies.
Functionality such as audiences and hierarchies also allows organisations to reflect complex structures and management lines to ensure that all learning is managed efficiently.
GO1’s learning content library also plugs directly into Totara Learn for a seamless learner experience, meaning that government employees can load content from the course catalogue from within the learning platform.
| 2023-03-16T00:00:00 |
https://www.totara.com/articles/why-reskilling-is-crucial-for-government-employees/
|
[
{
"date": "2023/03/16",
"position": 59,
"query": "government AI workforce policy"
}
] |
|
Executive actions to reduce inequality and improve job ...
|
Executive actions to reduce inequality and improve job quality for U.S. workers
|
https://equitablegrowth.org
|
[
"Maria Monroe",
"Authors",
"Equitable Growth",
"Kathryn Zickuhr",
"Cesar Perez",
"Alix Gould-Werth",
"Carmen Sanchez Cumming",
"Michael Linden",
"David S. Mitchell",
"Shayna Strom"
] |
Require government contractors to provide employees a fair workweek ... Findings from the trial could help inform not just federal workforce policies ...
|
Relevant federal agencies: Federal Trade Commission
Equal Employment Opportunity Commission
Office of Personnel Management Relevant laws: Federal Trade Commission’s Trade Regulation Rule on Commercial Surveillance and Data Security, Commercial Surveillance ANPR, R111004
Seattle, Washington, Municipal Code § 14.22.055 – .150
5 USC § 4703; 5 CFR Part 470
Overview
Even during periods of sustained economic and employment growth, millions of workers across the United States still face economic uncertainty and precarious working conditions. Working conditions such as low pay, unstable schedules, little access to benefits, workplace surveillance, and discrimination and sexual harassment at work impact a considerable portion of the U.S. labor force and decrease job quality for workers.
There are many ways to tackle these problems. Some examples include raising the federal minimum wage, providing workers’ benefits that employers fail to provide, and instituting robust protections for workers. Addressing these problems would not only improve job quality and individual worker well-being, but it would also benefit the broader workforce, employers, and the U.S. economy as a whole.
One municipal-level example is the stable scheduling law passed in Seattle and enacted in 2017, which led to a 10 percentage point decrease in workers’ material hardship. Increasing the minimum wage at the local, state, and federal level increases worker tenure and, in turn, decreases employer costs due to more frequent worker turnover. The flip side of this is also true: Hostile workplaces where sexual harassment and racial discrimination are present increase employee turnover rates.
While topline statistics, such as employment growth and pay rates, are important, federal policymakers must also take steps to enact policies that improve job quality and worker well-being. There is interest in proposals to implement some of these improvements, both in the U.S. Congress, with bills such as the Schedules That Work Act and the Stop Spying Bosses Act, and through efforts from various federal administrative agencies.
Yet administrative agencies have only taken some concrete steps to address these problems, while Congress struggles to pass legislation. Though some progress is being made on these issues, there’s still more the Biden administration can do. This factsheet details several executive actions the administration could enact.
Reinstate the expanded EEO-1 form to include detailed pay information by gender, race, and ethnicity
Data collection is a key function of any administrative agency, especially when it comes to the Equal Employment Opportunity Commission. One of the most important tools the agency uses for data-gathering is the so-called EEO-1 form, which requires all private-sector employers with 100 or more employees, as well as federal contractors with 50 or more employees who meet certain criteria, to submit demographic workforce data along the lines of race and ethnicity, sex, and type of job. Thanks to these data, the EEO-1 form provides an understanding of the mechanisms behind economic inequalities and where those inequalities exist at certain firms.
The Obama administration expanded the EEO-1 form to include more detailed data, but the Trump administration stopped the collection of pay data altogether. Subsequent legal challenges during the Trump administration prevented the EEO-1 form from being properly implemented as initially designed.
Even though the agency is facing internal issues with implementing the EEO-1 form, some data continue to be collected. Collecting additional data, such as pay information broken down by employee demographics, would provide a more detailed picture of where and how economic inequality along demographic lines is perpetuated.
As such, the Equal Employment Opportunity Commission should fully reinstate the EEO-1 form to help provide these important baseline data points. Doing so would not only help stakeholders better understand the inequitable divides in the U.S. labor market, but also help guide federal, state, and local governments to more effectively target programs and policies to meet the needs of historically marginalized workers.
Require government contractors to provide employees a fair workweek
Despite existing workplace protections, many employers—especially those in the retail and service industry—utilize unstable and unpredictable scheduling practices. Unstable scheduling practices, also known as “just-in-time” scheduling, make it harder for families to find child care, increase the likelihood of workers going hungry, and fail to offer more scheduling flexibility for workers.
Policymakers have experimented at different levels of government with fair workweek laws as a way to address just-in-time scheduling. Some common provisions of these laws include:
Ensuring workers have advance notice of their schedules (often 2 weeks’ notice)
Providing compensation to workers for last-minute schedule changes
Guaranteeing 10 hours of rest between working a closing and opening shift
Receiving an offer for additional hours before new employees are hired
Implementing such stable scheduling practices would provide important benefits to both employers and employees. Research by Columbia University’s Elizabeth Ananat, Anna Gassman-Pines at Duke University, Daniel Schneider at Harvard Kennedy School, and the University of California, San Francisco’s Kristen Harknett demonstrates that stable scheduling practices reduce scheduling unpredictability without negatively impacting hours worked, lower turnover rates in low-wage, service-sector industries, and increase workers’ productivity and employer profits in the U.S. retail industry.
The Biden administration should require government contractors to provide their employees with a fair workweek. Such a requirement could be implemented alongside commensurate efforts to ensure that contractors comply with the requirement, meaning that adequate reporting and enforcement measures should be considered. This would improve work quality and worker well-being and provide economic benefits to the employers as well, reducing the high rates of employee turnover that often lead to significant costs for the employer. Furthermore, such a mandate from the Biden administration could serve as a guide for state and local policymakers.
Prevent employers from deploying harmful electronic surveillance on their workers and ensure that monitoring does not result in discrimination
Workers across the country are increasingly vulnerable to invasive monitoring and surveillance practices by their employers. Everything from electronic surveillance tracking workers’ movements and computer use, to facial recognition software, to algorithmic management systems that are used to discipline and/or terminate workers are becoming increasingly prevalent in the United States. Low-wage and hourly workers are especially vulnerable to these practices.
Research by Lisa Kresge at the University of California, Berkeley Labor Center and Aiha Nguyen at Data & Society shows that continuous monitoring—and accompanying punitive actions from employers based on that monitoring—exposes workers to a range of harms, such as increased injuries, reduced wages, and a suppression of the right to organize.
Several federal agencies and regulatory bodies are examining issues related to workplace surveillance, including the Federal Trade Commission. A number of researchers, worker advocates, and civil society groups, as well as Equitable Growth, recently responded to the Federal Trade Commission’s Advanced Notice of Proposing Rulemaking on Commercial Surveillance and Data Security, outlining these harms and priority areas for action.
Other researchers and advocates studying this topic highlight key next steps the agency should take in its rulemaking. These include:
Ensuring that whatever monitoring data employers are allowed to collect is minimal and for narrow purposes that don’t harm workers, and with a goal of maximizing workers’ privacy, including by restricting businesses from deploying specific, harmful forms of electronic monitoring and sensitive data collection, such as facial recognition software, algorithmic surveillance, and biometric surveillance.
Facilitating researchers’ and regulators’ access to the surveillance data that firms collect to understand firms’ actions and potential harms to workers, for oversight, and for accountability. There also should be full notice and transparency of monitoring practices, coupled with other privacy and labor protections, so that workers and unions know what information is being collected and therefore can potentially bargain over these issues.
Making sure that any use of electronic monitoring, surveillance, algorithmic decision-making and/or data-driven worker management software tools are not used to discriminate against historically marginalized groups, including disparate impacts on protected classes.
Going forward, continuing and increased coordination between relevant agencies and regulatory bodies will also be necessary to address the many intertwined issues around workplace surveillance and worker power.
Launch a demonstration project to test the efficacy of paying workers more frequently
Like most workers in the United States, federal employees are paid every 2 weeks. But this practice—an artifact of New Deal era legislation and status quo bias—is not necessarily in workers’ best interest. Despite major advances in financial technology in recent decades, workers are still forced to wait many weeks between completing work and being paid, effectively providing their employer with an interest-free loan in the interim.
There is mixed evidence on the benefits of more frequent pay. Some workers who live paycheck-to-paycheck would surely benefit from having enough cash on hand to meet daily expenses and avoiding the high fees and interest payments associated with short-term loans, credit card debt, and other, sometimes predatory, financial products. Other workers might prefer the forced savings associated with being paid larger amounts less frequently, allowing wage earners to build up large enough sums to purchase durable goods and perhaps helping them combat self-control problems. Indeed, recent research shows that more frequent pay can lead to higher consumption due to the feeling among earners that they are richer than they actually are, while other studies find the opposite.
It is also not clear how altering traditional pay periods might affect employers. Running payroll more often will likely increase their administrative costs, but reducing workers’ financial stress could redound to firms in the form of higher employee productivity and reduced turnover. Indeed, one paper from 2022 found that more frequent pay led workers to be more productive while also increasing their homeownership rates. But much of the research touting more frequent pay has been done by private companies with a vested interest in the use of their advance pay products, some of which come with high fees.
Given these promising but inconclusive findings, there is a major opportunity for the federal government to lead the way in experimenting with long-overdue payday innovations. Using its statutory authority to conduct demonstration projects, the Office of Personnel Management could collaborate with federal agencies to test the efficacy of more frequent or varied paydays for a subset of federal workers. The agency could also investigate the option of allowing employees to pick their own payday to better time their incomes to expected monthly expenses.
The Office of Personnel Management is well-positioned to analyze the effects of these changes on both employee well-being and employer performance, using existing tools such as the Federal Employee Viewpoint Survey to gauge worker satisfaction. Findings from the trial could help inform not just federal workforce policies, but also state pay frequency laws, requirements for federal contractors, potential federal legislation, including minimum wage and overtime regulation, wider labor market practices, and even ways to relieve macroeconomic congestion.
| 2023-03-16T00:00:00 |
2023/03/16
|
https://equitablegrowth.org/executive-actions-to-reduce-inequality-and-improve-job-quality-for-u-s-workers/
|
[
{
"date": "2023/03/16",
"position": 73,
"query": "government AI workforce policy"
}
] |
Workforce Development / CTE
|
Workforce Development / CTE
|
https://www.shankerinstitute.org
|
[] |
... government legislative and policy initiatives on behalf of labor organizing. ... A.I.'s Impact on Jobs, Skills and the Future of Work: the UNESCO ...
|
Stanley Litow by
Our guest author today is Stanley Litow, adjunct professor at Duke and Columbia Universities. At Duke, he also serves as Innovator in Residence. He previously served as Deputy Schools Chancellor for New York City and is President Emeritus of the IBM Foundation and a member of the Albert Shanker Institute Board of Directors.
Over the last 35 years, since the release of A Nation At Risk, the nation has focused on the need for school reform and used high school graduation rates as the single most important benchmark for measuring educational success. This is somewhat ironic, given that high school attendance in the U.S. was not made mandatory until the end of the Second World War. Before that, virtually every state had a requirement for school attendance from grade one through grade eight, but high school attendance, just like college attendance now, was strictly voluntary. Of course, in the first half of the 20th century, significant numbers of well paying jobs in manufacturing and other areas of work only required an eighth grade education. Beginning in the 1970s and into 1984 and over the following three decades, the number of good jobs with competitive wages that were available to those who had only completed eighth grade began a precipitous decline. For many years, it has been clear that a high school diploma or higher is absolutely essential to achieving a pathway to a middle class life. America's response to the challenge of raising the percentage of high school graduates was far from perfect, but with exceptions, we have seen a steady increase in high school graduation rates in most though not all states. Beginning in the early years of the 21st Century, however, changes in the U.S. economy have made it crystal clear that high school diplomas, while still extremely important, are not enough to enable most Americans to achieve the “middle-class dream.”
In this light, the recent report, "Building a Grad Nation," is an important read. It documents the progress that the nation has made in higher high school graduation rates—the overall high school graduation rate showed an increase from 79 percent in 2011 to close to 85 percent by 2017. This statistic represents an increase of 3.5 million U.S. students who graduated from high school instead of dropping out over the last 15 years.
| 2023-03-16T00:00:00 |
https://www.shankerinstitute.org/issue-areas/workforce-development-cte
|
[
{
"date": "2023/03/16",
"position": 85,
"query": "government AI workforce policy"
}
] |
|
Biometric technologies at work: a proposed use-based ...
|
Biometric technologies at work: a proposed use-based taxonomy
|
https://www.bruegel.org
|
[
"Laura Nurski",
"Mia Hoffmann",
"Giuseppe Porcaro",
"Janine Berg",
"Francis Green",
"David Spencer"
] |
Biometric technologies have in principle the potential to significantly improve worker productivity, security and safety. However, they are also a source of ...
|
Biometric technologies have in principle the potential to significantly improve worker productivity, security and safety. However, they are also a source of new risks, including exposure to potential personal data abuse or the psychological distress caused by permanent monitoring. The European Union lacks a coherent regulatory framework on the mitigation of risks arising from the use of biometric technologies in the workplace.
We propose a taxonomy to underpin the use of artificial intelligence-powered biometric technologies in the workplace. Technologies can be classified into four broad categories based on their main function: (1) security, (2) recruitment, (3) monitoring, (4) safety and well-being. We identify the benefits and risks linked to each category.
To be more effective, EU regulation of artificial intelligence (AI) in the workplace should integrate more detail on technology use. It should also address the current scarcity of granular data by sourcing information from users of AI technologies, not only providers.
There is an untapped potential for technology to address workplace health hazards. Policymakers should design incentive mechanisms to encourage adoption of the technologies with the greatest potential to benefit workers.
Artificial intelligence users, in particular bigger companies, should be required to assess the effect of AI adoption on work processes, with the active participation of their workforces.
| 2023-03-16T00:00:00 |
https://www.bruegel.org/policy-brief/biometric-technologies-work-proposed-use-based-taxonomy
|
[
{
"date": "2023/03/16",
"position": 34,
"query": "AI labor union"
}
] |
|
Public Sector Unions
|
Public Sector Unions
|
https://www.shankerinstitute.org
|
[] |
The Institute has attempted to present a balanced and factual picture of the positive role of public services, public employees, and public sector unions.
|
Rachel Wessler by
Society’s youngest members have received some pretty big mentions recently—and for good reason. The United States isn’t heading into a childcare crisis any longer; it is fully in it. The already struggling industry was hit especially hard by the pandemic and has impacted families across the nation. The childcare crisis is so pervasive that President Biden prioritized childcare and prekindergarten stating, “if you want America to have the best-educated workforce, let’s finish the job by providing access to preschool” in his State of the Union address.
In the audience, several U.S. Representatives brought individuals directly impacted by the childcare crisis as their guests of honor. Senator Elizabeth Warren of Massachusetts brought Eugénie Ouedraogo, a mom and nursing student who depends on access to affordable early care and education. Senator Patty Murray of Washington brought Angélica María González, a mother who experienced firsthand the lack or quality care for her children and a Moms Rising advocate. Senator Murray took her statement of support beyond who was sitting with her to what she was wearing. Senator Murray organized Democrats in the House and Senate to wear pins in the shape of tiny crayons to signal support for childcare funding, as President Biden proposed at the beginning of his administration. In an analysis of the State of the State addresses given by governors, First Five Years Fund found that the childcare crisis was an important issue on both sides of the aisle, with 40 percent of Republicans and 60 percent of Democrats talking about it. However, of the governors who specifically mentioned early childhood education as a priority for their states, only one in four governors referenced the childcare workforce and the struggle to find, recruit and retain workers. While these are exciting developments (especially in contrast to Donald Trump’s one 16-word sentence in his State of the Union in 2019) why is so little of the conversation centered around the early care workforce? The priority seems to be getting parents with young children back to work with affordable childcare.
| 2023-03-16T00:00:00 |
https://www.shankerinstitute.org/issue-areas/public-sector-unions
|
[
{
"date": "2023/03/16",
"position": 39,
"query": "AI labor union"
}
] |
|
Human Artistry Campaign
|
Human Artistry Campaign
|
https://www.humanartistrycampaign.com
|
[] |
Algorithmic transparency and clear identification of a work's provenance are foundational to AI trustworthiness. ... UNION DES PRODUCTEURS PHONOGRAPHIQUES ...
|
Core Principles for Artificial Intelligence Applications
in support of human creativity & accomplishment
1. Technology has long empowered human expression, and AI will be no different
For generations, various technologies have been used successfully to support human creativity. Take music, for example... From piano rolls to amplification to guitar pedals to synthesizers to drum machines to digital audio workstations, beat libraries and stems and beyond, musical creators have long used technology to express their visions through different voices, instruments, and devices. AI already is and will increasingly play that role as a tool to assist the creative process, allowing for a wider range of people to express themselves creatively. Moreover, AI has many valuable uses outside of the creative process itself, including those that amplify fan connections, hone personalized recommendations, identify content quickly and accurately, assist with scheduling, automate and enhance efficient payment systems – and more. We embrace these technological advances.
2. Human created works will continue to play an essential role in our lives
Creative works shape our identity, values, and worldview. People relate most deeply to works that embody the lived experience, perceptions, and attitudes of others. Only humans can create and fully realize works written, recorded, created, or performed with such specific meaning. Art cannot exist independent of human culture.
3. Use of copyrighted works, and the use of voices and likenesses of professional performers, requires authorization and free-market licensing from all rightsholders
We fully recognize the immense potential of AI to push the boundaries for knowledge and scientific progress. However, as with predecessor technologies, the use of copyrighted works requires permission from the copyright owner. AI must be subject to free-market licensing for the use of works in the development and training of AI models. Creators and copyright owners must retain exclusive control over determining how their content is used. AI developers must ensure any content used for training purposes is approved and licensed from the copyright owner, including content previously used by any pre-trained AIs they may adopt. Additionally, performers’ and athletes' voices and likenesses must only be used with their consent and fair market compensation for specific uses.
4. Governments should not create new copyright or other IP exemptions that allow AI developers to exploit creators without permission or compensation
AI must not receive exemptions from copyright, right of publicity, or other intellectual property rights and must comply with core principles of fair market competition and compensation. Creating special shortcuts or legal loopholes for AI would harm creative livelihoods, damage creators’ brands, and limit incentives to create and invest in new works.
5. Copyright should only protect the unique value of human intellectual creativity
Copyright protection exists to help incentivize and reward human creativity, skill, labor, and judgment - not output solely created and generated by machines. Human creators, whether they use traditional tools or express their creativity using computers, are the foundation of the creative industries and we must ensure that human creators are paid for their work.
6. Trustworthiness and transparency are essential to the success of AI and protection of creators
Complete recordkeeping of copyrighted works, performances, and likenesses, including the way in which they were used to develop and train any AI system, is essential. Algorithmic transparency and clear identification of a work’s provenance are foundational to AI trustworthiness. Stakeholders should work collaboratively to develop standards for technologies that identify the input used to create AI-generated output. In addition to obtaining appropriate licenses, content generated solely by AI should be labeled describing all inputs and meth odology used to create it – informing consumer choices, and protecting creators and rightsholders.
7. Creators' interests must be represented in policymaking
| 2023-03-16T00:00:00 |
https://www.humanartistrycampaign.com/
|
[
{
"date": "2023/03/16",
"position": 59,
"query": "AI labor union"
}
] |
|
AI Graphic Design- What Is It? What Are the Pros ...
|
AI Graphic Design- What Is It? What Are the Pros? What Are The Cons?
|
https://flocksy.com
|
[
"Rachel Johnson"
] |
AI graphic design or art refers to the images you can generate by using an AI program that can take your prompts and use them to create something.
|
Across a variety of industries, technology continues to grow in every way, particularly in artificial intelligence. Machines and humans are coming together in so many ways, now that AI is coming into the world of graphic design and art.
This “AI art” is a rising trend that’s affecting graphic design and illustration. And it’s becoming more and more a part of the design world, and it’s essential to understand these design trends when you’re looking for the best design services for your brand and business.
With AI technology, algorithms can play an active part in art and design creation, with artists using AI tools to manipulate their work. However, AI art generators can also be used by any layperson to create an image.
While the results can be very interesting and potentially even work to create something like a logo or character, AI graphic design still has its growing pains. There are pros and cons to consider as well as ethical and legal considerations.
But before we get to those, let’s dive into what AI art and design are.
What is AI Art and Design?
AI graphic design or art refers to the images you can generate by using an AI program that can take your prompts and use them to create something. It does this via an algorithm that scours existing sources, “learning” from them and manipulating its software to generate an image.
AI as a field is all about crafting software and machines that imitate human intelligence. It does this “thinking” through a set of pre-programmed algorithms. The algorithm analyzes thousands of images that exist online to learn about creating art and then use those references to replicate the process and conjure up an image.
Using code, an AI designer sets up rules that direct how the AI software will make the artwork, including guiding the AI to generate the work in a particular style and aesthetic and which content it should take inspiration from.
How is AI Used in Graphic Design?
For AI to be able to create an image, a programmer must teach it how to “understand” and replicate what it sees as examples of human art and design. Those in the “biz” refer to this as style transfer. The developers program the AI to identify certain artistic elements of a picture and then use those same elements under the parameters they set to generate an image.
While AI-created images are most commonly used to create illustrations and design, they could potentially be used in other artistic fields like music and video as well.
So, that’s the rough essentials of what makes up AI graphic design. The algorithm takes existing sources on the internet, mashes them up, and makes its own tweaks, to come up with an image that fits the entered prompt.
What are the pros and cons of this type of AI program? There are many to consider, so let’s get started with the pros.
Pros- How We Could Use AI Art Generators
AI could bring a lot to the design world, but for the community to use the tool in the best way, there is a lot we need to break down and understand. To start, it could be better to think of the programs as augmentations and tools, as opposed to replacements, for the hard work graphic designers provide.
Using AI generators in the best way should be about optimization and speed. Designers working with AI can use AI generators to come up with rough drafts and sketches that they’ll then use as a reference to create original work. Analyzing the huge amounts of data out there and suggesting design adjustments can be done a lot quicker with AI, allowing a designer to choose the best adjustments based on that data and create multiple prototypes that both the artists and clients can A/B test.
Speedy design prototyping that’s in alignment with a company’s design system could save an artist a lot of time, and time is a big consideration for both designers and clients. By using the technology to create drafts and options, an artist can save time on actually drawing up those designs that a client may not like. That way, the designer focuses on creating an image based on only the client’s favorite things about the various AI references.
Designers could also potentially use AI for product localization, like creating a graphic in multiple languages. Netflix uses this type of “augmented intelligence system” to translate show banners into several languages. The system “learns” the master version and almost instantly updates the text and localized pieces. All a designer has to do is approve or reject the changes and, if necessary, manually adjust them, saving a lot of time.
In these types of situations, prototyping and mass duplication, AI generators could be very helpful. Though it’s important to note that the artist is still doing the actual creation of the final design asset, which we’ll get into a bit more below.
Cons- The AI Design Controversy
Not everything about AI generators is great. There are issues with the ethics and use of these designs that have become a large talking point as the technology becomes more popular. Let’s take the Colorado State Fair of 2022 as an example.
Quite a bit of controversy and outrage accompanied the fair’s art competition when an AI-generated piece took first prize. The winner, Jason M. Allen, created his piece “Théâtre D’opéra Spatial” using MidJourney, a popular AI generator.
He won the first prize in the “digital arts/digitally-manipulated photography” category. Allen stated that his piece took over 80 hours of manipulating the AI and sorting through thousands of resource images.
While some artists sided with Allen, saying that using the generator was similar to using Photoshop and digital tools, others did not. One artist stated, “This is like letting a robot participate in the Olympics.” Many took a similar stance, stating that the piece was an unfair winner because of the creativity and effort that went into other images.
And thus we find our problem. Is AI imagery art, is it yours, and can you copyright it or sell it?
The Ethics of AI in Art
Allen’s award showcases one of the most significant issues with AI generators from both an ethical and legal standpoint. Many people have pointed out that AI tools are using existing images online and using them without permission from the original artist.
As the world has already seen play out, artists can and do sue when another piece is significantly influenced by their own work to the point where people can say they’ve copied the original design. This is theft of intellectual property.
We don’t currently have laws in place to regulate how original images can be used in combination with AI generators, but many artists do not want a computer to be able to essentially copy their designs. In fact, recently, DeviantArt created a way for artists who upload to the site to mark their work as “NOAI,” which would, in theory, stop a generator from referencing their work.
The idea is that to remain compliant with the website’s terms of service, the AI generators combing their site must disregard pieces with the no-AI tag in their URL.
Other companies, such as Getty Images and the stock photo companies they own, have banned AI entirely. A Getty Images spokesperson told CyberNews, “Getty Images recently announced our decision to not accept AI-generated content across Getty Images, iStock, and Unsplash. There are open questions [regarding] the copyright of outputs from these models, and there are unaddressed rights issues [regarding] the underlying imagery and metadata used to train these models. It is important that we make content available to our customers that is free of these concerns and potential liabilities.”
It’s fairly easy to see why these companies and artists might be concerned about the copyright, intellectual property, and plagiarism issues associated with AI generators when you look at some of the pieces these programs have made. Many can be eerily similar to the original works of the artists, and if little refinement is made, look like copies.
Another concern is the fine print and terms associated with the generators themselves. They often state that while you retain the rights to and in use of your creations, your creations may also be used by anyone else in the app. Prisma Lab’s terms state it has the “perpetual, revocable, nonexclusive, royalty-free, worldwide, fully paid, transferable, sub-licensable license to use, reproduce, modify, adapt, translate, create derivative works” from your images.
Lastly, copyrighting and standing copyrights are a concern. AI-generated images can’t be copyrighted as of now, and how you would do so isn’t clear. Who would the copyright belong to? The person who put in the prompt, the person who designed the AI, or the artist who created the original work that the AI is using?
This brings us to the current copyrighted or protected art that AI generators are using in their creations. Again, artists aren’t typically a fan of these creators not because they are gatekeeping and don’t believe that AI art isn’t art, but because they know that the AI isn’t producing its creations from scratch. It’s pulling information about art from existing works.
Say for instance if the humans behind AI art generators ensured they worked with willing participants who gave their consent to use their art to teach the machine. Then, it would be less concerning. The artists said it was okay to use their art. You need this type of permission to use art that doesn’t belong to you in any situation. That’s why stock photo libraries exist. These are for the express purpose of use by someone or something else.
But that’s not what happens. AI generators pull art from everywhere online it can freely view. But “freely view’ is not the same thing as free to use.
What Impact Will AI Have on Graphic Design?
Graphic design’s future could be affected by advancements in artificial intelligence (AI) and machine learning (ML). It already has been, and there are still several unanswered questions concerning how AI will impact this industry. It’s difficult to know precisely how AI will affect the graphic design profession as a whole because the debate is still going strong.
In the graphic design industry, AI has been disruptive. However, it will be hard for AI generators to replace the work that human designers can do because they’ll need human input and expertise to get the best results.
But the concern is understandable. If AI generators became mainstream, artists may lose the income they depend on to survive. Design itself will change if we switch to all machine-generated options instead of a person, and if the laws follow suit and become more strict, it might be difficult to create works with the originality humans are capable of creating. This would cause us to see overly generic designs.
How we communicate and design in the future will surely be affected by the increasing number of AI generators out there. Prototype generation and mock-up creation have already started within the industry and will likely impact visual design habits.
However, these programs aren’t a substitute for humans yet. To get realistic drawings and unique designs, you still need the skills of a professional designer or artist. The work put out by AI generators still has its issues, and from a commercial standpoint, it’s not safe enough to use in branding and promotion as of now. While the tools are impressive for what they can do, getting exactly what you want and ensuring proper placement and proportions will still take a human eye. (You’ve likely seen AI art that’s almost perfect except for the slanted mouth and crossed eyes.)
Conclusion
AI-powered art generators can streamline the editing and revision process when used as a prototype or mockup tool. You can use them to craft drafts to show clients or artists to cut back on the amount of back and forth necessary to create the desired design. It can be a fun “game” to entertain yourself or use for personal purposes, but the concern about where the AI is getting the work and how to copyright or safely use the designs is still problematic.
AI will likely change the entire graphic design industry but should only be thought of as a supporting tool. A human presence will likely always be necessary because of the unique skills and understanding that they possess, which computers just can’t replicate. So, when it comes to the future of AI graphic design, remember that these tools and services should be supplemental to the overall process and not be thought of as a replacement.
Companies looking to create eye-catching visuals and personalized designs or logos can use this technology to hasten the desired results as long as there is a human doing the actual creation. If you’re looking for a designer to create your illustrations, ads, and more who uses the latest technology while still providing the required human skills, Flocksy’s graphic designers are the way to go.
We can provide incredible results fast, and we never compromise on quality. What’s more, you direct any and all revisions needed. You’re sure to love the final outcome because you can chat directly with your artist about what you do and do not like and the style you’re looking for.
For more information about our unlimited graphic design service, click here.
| 2023-03-16T00:00:00 |
https://flocksy.com/resources/ai-graphic-design/
|
[
{
"date": "2023/03/16",
"position": 6,
"query": "AI graphic design"
}
] |
|
Will AI Actually Mean We'll Be Able to Work Less?
|
Will AI Actually Mean We’ll Be Able to Work Less?
|
https://thewalrus.ca
|
[
"Elizabeth M. Renieris",
"Sexy-Author-Bio",
"Background",
"Ffffff",
"Border-Style",
"Dotted",
"Border-Color",
"Dddddd",
"Color",
"Border-Top-Width"
] |
The AI productivity narrative is a lie. It holds that by automating tasks, AI will make them more efficient and make us, in turn, more productive.
|
This story was originally published as This story was originally published as “Claims That AI Productivity Will Save Us Are Neither New, nor True” by our friends at CIGI. It has been reprinted here with permission.
As artificial intelligence captures the public imagination, while also exhibiting missteps and failures, enthusiasts continue to tout future productivity gains as justification for a lenient approach to its governance. For example, venture fund ARK Invest predicts that “during the next eight years AI software could boost the productivity of the average knowledge worker by nearly 140%, adding approximately $50,000 in value per worker, or $56 trillion globally.” Accenture claims that “AI has the potential to boost labor productivity by up to 40 percent in 2035 . . . enabling people to make more efficient use of their time.” And OpenAI CEO Sam Altman similarly talks about time savings from menial tasks like emailing.
But what if promises around AI productivity do not necessarily translate into benefits to society?
Today, many fears around AI focus on its potential to replace human workers—whether teachers, lawyers, doctors, artists, or writers. In a 1930 essay, the economist John Maynard Keynes made similar predictions, coining the term “technological unemployment” to refer to “unemployment due to our discovery of means of economising the use of labour outrunning the pace at which we can find new uses for labour.” For Keynes, this was proof positive that “mankind is solving its economic problem.” He predicted that his grandchildren would work fifteen-hour weeks, liberated from economic necessity.
But the recent Global Innovation Index suggests otherwise, raising concerns that “considerable investments in technology, innovation, and entrepreneurship [are] failing to deliver the kind of productivity improvements that improve the lot of people across society.” Indeed, the history of “technological revolutions” paints a different story than the one Keynes anticipated about the benefits of technology-related productivity gains.
Take the example of household appliances in the twentieth century. Sociologist Juliet Schor has examined how so-called labour-saving technologies such as the dishwasher, electric stove, and vacuum cleaner failed to reduce women’s household labour. Instead, “rising standards and expectations of domestic life . . . expanded the hours devoted to cleaning, food preparation, and child rearing.” For example, washing machines and dryers allowed laundry to be done more frequently, “adjusting normative standards of cleanliness to meet efficiencies introduced by these appliances,” Schor notes.
Historian Laine Nooney has chronicled how, despite the personal computer revolution’s promises of efficiency and productivity, people have become chained to their computers to the detriment of the human body. Similar claims were made around how laptops and smartphones would untether us—they haven’t. Indeed, these devices have made it possible to work from anywhere, anytime. Rather than this having a liberating effect, we experience “work metastasizing throughout the rest of life,” as Jenny Odell, author of How to Do Nothing: Resisting the Attention Economy, puts it—a phenomenon that was on heightened display for women and working mothers during the pandemic. In fact, these technologies have so drastically eroded boundaries that some jurisdictions are entertaining right-to-disconnect laws.
And now, argues tech writer Paris Marx, “new technologies like AI are framed as offering us various forms of empowerment and liberation: We’ll be able to work more productively, spend less time doing our chores, and anything we want will be a click or tap away. But those promises never paint an accurate picture of how that tech is transforming the world around us or the true cost of those supposed benefits.”
History has shown us that gains in efficiency or productivity as a result of new technologies rarely liberate those already overburdened in society. Instead, new tech often creates new expectations and norms, heightening standards and the amount of work required to attain them. Known as Parkinson’s law, it’s the idea that “work expands so as to fill the time available for its completion.” We have all experienced how meetings scheduled to last an hour will stretch to fill the time allotted.
Increasingly, our standards exceed human capabilities, both physical and cognitive. Nooney notes that “if computers could change how much data a worker could process, then the human body no longer intervened on profitability with its pesky physiological limits.” Similarly, experts now remark on the benefits of using AI—a worker that doesn’t eat, sleep, or require wages. Just as the computer and smartphone have physically distorted the human nervous system and body, taking a considerable toll on our health and well-being, we are told that we have to adapt to the machines—for example, that we need to develop “machine intelligence”—rather than the other way around.
Not only does new tech often result in more work for people but it also introduces additional kinds of work. Ian Bogost anticipates that AI-powered chatbots such as ChatGPT “will impose new regimes of labor and management atop the labor required to carry out the supposedly labor-saving effort.” Just as computers and software advances have “allowed, and even required, workers to take on tasks that might otherwise have been carried out by specialists as their full-time job,” citing procurement and accounting software as examples, Bogst predicts the “inevitable bureaucratization” of AI.
Who can escape the quantitative and qualitative increase in demands that are likely to result as AI advances? As with earlier technologies, the answer is: likely only those with sufficient economic, social, or political capital. For example, only people who have the privilege and power to refuse or “switch off”—who can afford the “cost of opting out”—may avoid social media altogether. And the benefits of flexibility gained through gig-economy services often accrue at the expense of the growing precarity of workers. Similarly, AI advances that increase productivity are likely to result in increasing the already disproportionate burden on everyone but a privileged elite—a new gig economy is burgeoning around AI labelling and other tasks—unless policies approach productivity claims with a critical eye.
Simply put, the AI productivity narrative is a lie. It holds that by automating tasks, AI will make them more efficient and make us, in turn, more productive. This will free us for more meaningful tasks or for leisurely pursuits such as yoga, painting, or volunteerism, promoting human flourishing and well-being. But if history is any guide, this outcome is highly unlikely, save for a privileged elite. More likely, the rich will only get richer.
Because it’s not technology that can liberate us. To preserve and promote meaningful autonomy in the face of these AI advancements, we must look to our social, political, and economic systems and policies. As Derek Thompson observes in The Atlantic, “Technology only frees people from work if the boss—or the government, or the economic system—allows it.” To allege otherwise is technosolutionism, plain and simple.
Reprinted with permission from the Centre for International Governance Innovation.
| 2023-03-16T00:00:00 |
2023/03/16
|
https://thewalrus.ca/will-ai-actually-mean-well-be-able-to-work-less/
|
[
{
"date": "2023/03/16",
"position": 66,
"query": "artificial intelligence workers"
}
] |
50 Thought Leading Companies on Artificial Intelligence ...
|
50 Thought Leading Companies on Artificial Intelligence 2023
|
https://www.thinkers360.com
|
[
"Estefania V. Sembergman",
"Http",
"Author",
"Senior Associate",
"Community Recognition",
"Development At"
] |
AI Leadership Institute, AI Leadership Institute empowers and inspires organizations globally to begin thinking more deeply about responsible AI.
|
Logo Company Name (Alpha Order) Description
AI Leadership Institute AI Leadership Institute empowers and inspires organizations globally to begin thinking more deeply about responsible AI. They offer executive advisory services and workshops for defining AI Strategy, creating an AI-Ready Culture, and establishing Responsible AI practices within the company.
Aera Technology Aera Technology is the Decision Intelligence company that transforms how enterprises make and execute decisions. Our innovative platform, Aera Decision Cloud™, integrates with existing systems and data sources to enable business decision making in real time, at scale. Trusted by many of the world’s best-known companies and brands, Aera is helping enterprises operate sustainably, intelligently, and efficiently.
Alter Domus A leading provider of integrated solutions for the alternative investment industry. Many leading international asset managers, lenders and asset owners choose Alter Domus as their partner for growth. Whether a stand-alone fund with limited investments, or a large multi-billion-dollar fund with complex investment streams across multiple jurisdictions, they understand your world.
Alteryx Alteryx powers analytics for all by providing the leading Analytics Automation Platform. Alteryx delivers easy end-to-end automation of data engineering, analytics, reporting, machine learning, and data science processes, enabling enterprises every-where to democratize data analytics across their organizations for a broad range of use cases. More than 8,000 customers globally rely on Alteryx to deliver high-impact business outcomes.
Asia MarTech Society Established in 2018, Asia MarTech Society is a community/association that connects MarTech players and industry stakeholders in Asia. It aspires to promote the adoption of MarTech in Asia and facilitate intra-regional trade in MarTech through events, knowledge sharing and management, and strategic partnership with MarTech peer organizations in Europe and the United States.
Babin Business Consulting Unique expertise in the digital workplace. Advice and guidance for your marketing, business development, innovation and digital transformation projects. Babin Business Consulting will help you set your company up and raise funds in Europe, as well as create and deliver your marketing and international development plans.
Banking Reports Banking Reports is a FinTech Consulting, FinTech Training, FinTech Research and FinTech Report-writing company in London.
Botsvadze Marketing Solutions Vladimer Botsvadze is a world-renowned digital transformation and social media influencer, keynote speaker, start-up advisor and internet personality who initiates change, drives growth, and positions brands as market leaders in their industries.
Data Safeguard Inc An artificially intelligent, humanly impossible, previously unsolvable, hyper-accurate approach to comply with data privacy regulations and prevent synthetic fraud losses.
Digital Business Innovation Srl DBI is a digital business transformation consulting firm. DBI’s success comes from the strength of its team and advisory board. They’ve brought together experienced leaders in the realm of digital transformation who can apply their skills to help your business embrace everything transformative technology has to offer.
Digital Salutem Digital Salutem helps healthcare organizations implement digital transformation end-to-end in less than 4 weeks using best in class technology and services. They are digital health experts on the mission to make health uncomplicated by transcending the barriers to human health.
Digital Transformation Leaders Digital Transformation Leaders help business executives achieve digital maturity, outstanding innovation, improved performances, and digital growth for their businesses.
Digitalmehmet Digitalmehmet is a platform created by Dr. Mehmeh Yildiz to connect with his readers, collaborators, mentors, and proteges.His professional background covers technology, leadership, and cognitive science. He works as an Enterprise Architect solving complex digital transformation problems for several large business organizations.
Earley Information Science Founded in 1994, Earley Information Science is a professional services firm focusing on structuring and organizing data – making it more findable, usable, and valuable. They build the information architecture that powers unrivaled customer experience, smart eCommerce, and accelerated business decision-making for Fortune 1000 firms in Manufacturing, Distribution, Retail, and Financial Services.
Ericsson Ericsson is one of the leading providers of Information and Communication Technology (ICT) to service providers. They enable the full value of connectivity by creating game-changing technology and services that are easy to use, adopt, and scale, making their customers successful in a fully connected world.
Hired Brains Research LLC Hired Brains is a knowledge-based consultancy. Improving business performance, turning risk and compliance into opportunities, and enhancing value with data architecture and “AI LAST MILE” consulting.
Humaxa Humaxa offers the first-ever AI Assistant to help you scale workforce culture. It chats with the workforce, predicts what will improve employee engagement & performance, and offers to initiate those actions for you. Humaxa’s technology is based on thousands of conversations with clients and employees.
HUMOLOGY Humology aims to encourage and inspire conversations about the future of humanity and technology, to cultivate a community of human-friendly technologists, products and services.
IBM IBM integrates technology and expertise, providing infrastructure, software (including market-leading Red Hat) and consulting services for clients as they pursue the digital transformation of the world’s mission-critical businesses.
ICOL Group ICOL Group is an international group of companies headquartered in Barcelona, Spain. The global group started business activity in 2017 and competed for its consolidation in 2019. Presently, the group includes six R&D centers in EU. ICOL Group mission is to create an innovative high-tech approach for factory floor automation and develop an AI and Digital Twins based industrial automation platform that will enable customers to integrate robotics, logistics, and processes easily.
Info-Tech Research Group Info-Tech Research Group produces unbiased and highly relevant IT research to help CIOs and IT leaders make strategic, timely, and well-informed decisions. They partner closely with IT teams to provide everything they need, from actionable tools to analyst guidance, ensuring they deliver measurable results for their organizations.
Innovation Titan Innovation Titan helps you transform your organization with data-driven decisions. They help executives align their business and data strategy, translate it into sound execution, and realize ROI through rapid business growth.
Intelligence Briefing Intelligence Briefing was created by Andreas Welsch, VP at SAP, Thought Leader, and Speaker to prepare current and future AI leaders to successfully run AI in business.
Intelligent World The Intelligent World is an on-demand and live video content portal where executives and technology experts can come together to share and educate latest technology trends, developments, and processes shaping a digital-first business world.
International Institute of Scientific Research IISR’s mission is to carry out research and studies dealing with issues related to the national and international environment. The organization of national and international conferences, seminars, conferences or congresses and major meetings open to a wide public.
KPMG India KPMG entities in India are established under the laws of India and are owned and managed (as the case may be) by established Indian professionals. Established in September 1993, the KPMG entities have rapidly built a significant competitive presence in the country.
Kozminski University Kozminski University (Akademia Leona Koźmińskiego), founded in 1993, is a private institution of higher education with full academic rights. The university has obtained Polish and international accreditations, as well as excellent results in global and national educational rankings which shows the high quality of its programs and services. The university’s offer includes various Bachelor, Master’s, MBA’s as well as Ph.D. programs. All of these are also offered in English.
Merrick Ventures Merrick Ventures LLC is a PE investment company based in Florida focused on technology companies. Merrick Ventures was founded by Michael Ferro in 2007 after selling one of the companies he founded, Click Commerce, for $292M. One of the notable investments was Merge Healthcare, acquired by IBM for $1B in 2015.
Microsoft Every company has a mission. What’s Microsoft? To empower every person and every organization to achieve more. They believe technology can and should be a force for good and that meaningful innovation contributes to a brighter world in the future and today.
Mind Senses Global Mind Senses Global is a management consultancy, which specialises in Artificial Intelligence. They use advanced data science and research to help businesses improve their decision-making. They are passionate about Artificial Intelligence and their mission is to make AI available to everyone by educating and supporting businesses and organisations in their AI journey.
MyFinB Group A global technology company with specialisation in natural language expert systems with headquarters in Singapore and Malaysia, born through the vision of democratising AI by removing barriers associated with AI learning, adoption and deployment.
Netsync NETSYNC is an NMSDC-certified minority business enterprise (MBE), federally certified woman-owned small business (WOSB), and HUB-certified value-added reseller (VAR), specializing in the implementation of comprehensive IT life cycle solutions for a wide array of organizations.
Penn State University There’s a reason Penn State consistently ranks among the top one percent of the world’s universities. Across 24 campuses, our 100,000 students and 17,000 faculty and staff know the real measure of success goes beyond the classroom—it’s the positive impact made on communities across the world.
Perx Technologies Singapore-based Perx Technologies is a category-creating Lifestyle Marketing SaaS Platform helping large enterprises and digital native businesses transform from being transient and transactional to delivering continuous and meaningful customer engagements in the digital economy.
PwC At PwC, their purpose is to build trust in society and solve important problems. They’re a network of firms in 152 countries with over 327,000 people who are committed to delivering quality in assurance, advisory and tax services. PwC refers to the PwC network and/or one or more of its member firms, each of which is a separate legal entity.
RPA2AI Research RPA2AI Research is an independent global industry analyst and advisory firm focused on enterprise automation and artificial intelligence. RPA2AI (pronounced as RPA to AI), has extensive experience in advising global organizations on profitable digital transformation. RPA2AI combine hands-on experience with AI and Automation technologies with an in-depth understanding of their business applications and implications.
Saïd Business School, University of Oxford They believe in creating business leaders who lead with purpose. As part of Oxford University, they have a unique perspective on some of the most significant challenges facing business and the world today. Their focus on the role of purpose in business ensures those who learn here are well placed to tackle world-scale problems. To achieve this, they deliver world-class research, teaching and coaching.
Solarix-Ventures Ltd The firm was founded by Nick Ayton and Martin Hammerschmid specifically to invest in #DeepTech and #FrontierTechnologies and assist with #FamilyOffices, #UHNW, #Investors and #Angels with mandates, deal flow and deliver co-investment opportunities.
StarCIO StarCIO guides businesses on driving smarter, faster, and more innovative transformation programs leveraging data, analytics, software, automation, and emerging technology. We have clients across a spectrum of industries and sizes, and our assessment, consulting, advisory programs, workshops, and online guides help businesses become Driving Digital organizations.
Tata Consultancy Services A purpose-led organization that is building a meaningful future through innovation, technology, and collective knowledge. We’re #BuildingOnBelief. A part of the Tata group, India’s largest multinational business group, TCS has over 500,000 of the world’s best-trained consultants in 46 countries.
Techutzpah Techutzpah was founded by Rajashree Rao, a globally acclaimed Industry Thought leader, visionary, advisor, principal consultant, and mentor in next-gen technologies.
The Digital Speaker The Digital Speaker is a strategic futurist and one of the leading voices in technology and known for his candid, educated and balanced views on how technology can benefit but also threaten society. talks about the future of work, digital transformation and how to turn your business into a data organisation. He offers various, magical and futuristic experiences to truly inspire your audience.
Tigon Advisory Corp. Tigon Advisory equips start-ups to tackle go-to-market challenges that must be met before they can scale; among them: capturing the nuances of customer demand, matching products to market segments, engineering “marketability” into every product, and creating processes to systematise success.
Value Inspiration Value Inspiration was founded by Ton Dobbe, a strategic product marketing expert, thought leader, and global influencer, to help mission-driven SaaS CEOs create the SaaS business the world talks about.
Volvo Group The Volvo Group is one of the world’s leading manufacturers of trucks, buses, construction equipment and marine and industrial engines. The Group also provides complete solutions for financing and service. The Volvo Group, with its headquarters in Gothenburg, employs about 100,000 people, has production facilities in 18 countries and sells its products in more than 190 markets.
Wipro Wipro is a leading technology services and consulting company focused on building innovative solutions that address clients’ most complex digital transformation needs. Leveraging their holistic portfolio of capabilities in consulting, design, engineering, and operations, they help clients realize their boldest ambitions and build future-ready, sustainable businesses.
Wisdom Works Group Emerging Technology Consultancy firm helping clients with their digital transformation and adoption of disruptive and innovative solutions. AI & Deep Tech is a core specialty of theirs, delivery of business differentiation and industry disruption as their focus.
WoWExp WoWExp Technologies was founded in 2019 by Navin Manaswi, an alumnus of IIT Kanpur with the aim of bringing affordable disruptive AR, VR & AI solutions to the market for solving real world challenges. WoWExp is designed to cover the marketing and visualization needs of different industries through their comprehensive product portfolio.
zblocks Accelerating blockchain adoption for enterprises since 2022; bringing the power of Web3 to Web2 using a SaaS platform. Experience the power of trust, transparency, immutability, security, and communities working with the existing developer base and operating models.
| 2023-03-16T00:00:00 |
2023/03/16
|
https://www.thinkers360.com/50-thought-leading-companies-on-artificial-intelligence-2023/
|
[
{
"date": "2023/03/16",
"position": 3,
"query": "artificial intelligence business leaders"
}
] |
The Rise of the AI CEO
|
The Rise of the AI CEO
|
https://chiefexecutive.net
|
[
"Seán Earley",
"Sparky Zivin",
"Seán Earley Is Managing Director At Teneo",
"Specializing In Digital Communications",
"Social Strategy. Sparky Zivin Is Senior Managing Director",
"Global Head Of Teneo Research.",
"View More This Author"
] |
Half (48%) of CEOs from the world's leading public companies have already adopted AI, with another 58% actively investing to strengthen their AI capabilities.
|
At what point does the input of AI go beyond workplace efficiencies and help us better understand the psychology of the workforce? Could a well-read AI represent the voice of the employee body at a board level, advocating for employee interests and avoiding the need for lengthy consultation? A devolution of representation to automated standard bearers would be a lot more streamlined. Thinking further, a learned AI trained on how activist investors behave could even act as a “Red Team” for executive teams to stress-test their growth and investment strategies.
A cautionary tale
Before we fill the boardroom with AIs, it’s worth reminding ourselves that all technology is susceptible to in-built biases. Tech companies ran into trouble using AIs in their hiring process when these systems showed a stark preference for white men, a prejudice rooted in bad training data that enshrined legacy biases in how
the algorithm made decisions. Feeding an AI privileged corporate information would be a risky bet without proprietary knowledge of data processing and handling. As researchers at DeepMind recently put it, “Despite incredible performance in a variety of domains, almost all [AI] systems are completely unable to provide a satisfying answer to the simple question, ‘Why did you do that?’”
The new reality is that AI tools are, and will continue to be, leveraged by individuals in the workplace and academic institutions globally to reduce time spent on tasks. Considering ChatGPT achieved 100 million users in two months, it has been cemented as the fastest growing app of all time. That’s a bell that can’t be unrung.
| 2023-03-17T00:00:00 |
2023/03/17
|
https://chiefexecutive.net/the-rise-of-the-ai-ceo/
|
[
{
"date": "2023/03/16",
"position": 7,
"query": "artificial intelligence business leaders"
}
] |
Applying AI to Legal Recruiting: New Tools for Efficiently ...
|
Applying AI to Legal Recruiting: New Tools for Efficiently Matching Firms and Candidates
|
https://laterallink.com
|
[
"Lateral Link"
] |
AI is well suited to help recruiters—both within firms and outside them—to more efficiently identify high-potential candidates ...
|
With everyone talking about ChatGPT and the implications of AI tools for the future of various professions, now is an opportune time to consider how AI might change legal recruiting. We at Lateral Link have been actively engaging with this question for years: in fact, we have a sister company called Haistack.AI that is developing AI products for the legal recruiting industry.
So for the latest episode of the Movers, Shakers & Rainmakers podcast, we invited Haistack.AI Chief Technology Officer Michael Heise to discuss the possibilities and limitations of AI for law firms and legal recruiters. Mike educated us on the likely implications of AI for our industry and described the logic behind the product that Haistack.AI is currently building.
Mike is a seasoned legal tech innovator with a deep understanding of Biglaw firms. Prior to joining Haistack.AI, Mike held software leadership roles at Cooley and Covington & Burling. As he explained on the podcast, he is married to an attorney, and it was his wife who first sparked his interest in legal sector innovation.
AI can be a valuable tool
Mike explained that AI has the potential to assist lawyers with a broad range of tasks. For example, a litigator could rely on AI tools to set out the basic structure of a brief, allowing the lawyer to dedicate more of her time to the higher-value tasks of refining arguments and tailoring them to be most persuasive based on the unique facts of the case. As Mike puts it: “AI is not going to replace you. The person who knows how to effectively use AI is going to replace you.” AI tools will become increasingly sophisticated, but human judgment will remain essential for crafting the strongest and most original arguments.
Similarly, AI is well suited to help recruiters—both within firms and outside them—to more efficiently identify high-potential candidates. By reducing the time a recruiter spends on manually trawling through candidate profiles, AI can enable the recruiter to gain a deeper understanding of the high-potential candidate pool and the relative strengths of the candidates within that pool.
The Haistack.AI vision
As an example of how AI promises to make recruiting more targeted and efficient, Mike described the product that the team at Haistack.AI is building. It entails creating three essential models: (1) profiles of lawyers currently working at the firm that is using Haistack.AI in its hiring process; (2) profiles of lawyers working outside that firm; and (3) profiles corresponding to the specific roles for which the firm is recruiting. By comparing the profiles of lawyers previously hired by the relevant practice group and office with the profiles of external lawyers, the algorithm can instantly generate a list of high-potential candidates and an explanation for why those candidates appear to be a good fit. Moreover, the AI will use Lateral Link data to screen out candidates whom the firm has previously considered and determined not to be a fit. Finally, the tool will give some indication of the extent to which the leading candidates are likely to be in demand at other firms seeking to fill similar vacancies, alerting the hiring firm to the need to move quickly where a candidate is likely to be in especially high demand.
With the assistance of the Haistack.AI tool, the recruiter managing the search will immediately see how the algorithm matched a candidate’s qualifications and experience to those of current members of the group. This is where human judgment comes in. The AI accelerates the first step of identifying a shortlist, but the law firm’s recruiting and attorney professionals must assess whether the shortlist fits their needs, through interviews and other more traditional evaluations.
Mike noted that in addition to generating lists of promising candidates, the Haistack.AI tool could also help identify current members of a firm who are in especially high demand relative to what the broader lateral market is seeking. In alerting a firm to attorneys who are at greater risk of leaving, the tool can help nudge a practice group to be more proactive about taking steps to keep valuable team members happy.
AI is not a panacea
Mike also explained the importance of recognizing the limitations of AI and of not buying into the excessive hype that frequently surrounds promising technologies in their early stages. AI will not solve all hiring problems. To take just one example, the inputs for AI models like the ones that Haistack.AI are building are composed of historical data — the models are designed to replicate the firm’s past hiring decisions. To the extent the past hiring was suboptimal, such as through failing to hire qualified diverse candidates, the AI tool will not correct the problem. Instead, it is important for the human users to be thoughtful about patterns in past hiring that they do not wish to replicate and make an active effort to change them.
| 2023-03-17T00:00:00 |
2023/03/17
|
https://laterallink.com/applying-ai-to-legal-recruiting-new-tools-for-efficiently-matching-firms-and-candidates/
|
[
{
"date": "2023/03/16",
"position": 73,
"query": "artificial intelligence hiring"
}
] |
The role of artificial intelligence in online learning
|
The role of artificial intelligence in online learning
|
https://www.captechu.edu
|
[] |
Among the most significant benefits of AI in online learning is its ability to personalize instructional materials, automate routine tasks, and create adaptive ...
|
Artificial intelligence (AI) has become an essential part of modern life, offering increased efficiency and convenience across a variety of industries. In education, the integration of AI has led to exciting opportunities and valid concerns about how to effectively incorporate these tools into the classroom, particularly in remote learning environments.
Among the most significant benefits of AI in online learning is its ability to personalize instructional materials, automate routine tasks, and create adaptive assessments. These tools have played a crucial role in supporting online learning, making virtual and often large courses more interactive and individualized. Virtual tutors, for example, can use natural language processing to understand student questions and provide guidance on specific topics, either through linear modules or conversational interactions that mimic human interaction.
Adaptive learning platforms, such as those provided by Century and Educake, use data analysis to understand an individual student's learning style, strengths, and weaknesses in a subject. This information is then used to provide personalized feedback and instruction that is tailored to the student's needs, such as asking more questions about a topic where the student is struggling.
AI can also be beneficial for teachers, as it can be used to grade assignments, write more relevant objectives, and even create courses. Through AI, manual and time-consuming processes like taking attendance and managing permissions can be automated, leaving more time for the instructors to teach and develop relevant materials.
One of the most exciting recent advancements in AI is the emergence of virtual chatbots, like Microsoft's Bing, Google's Bard, and Jasper.ai. Open AI's Chat GPT, trained using a text database of more than 300 billion words, is one of the most pervasive of these new tools. A new version of Chat GPT, which was trained on trillions of words and images and can produce text outputs from both text and image inputs, was released earlier this week.
While these chatbots offer comprehensive knowledge and human-like responses to written prompts, some are concerned that they may be a barrier to genuine learning, as students may rely too heavily on these tools to complete assignments without critically exploring their own ideas.
The perception of AI tools and their uses varies widely, with some seeing them as disruptive technologies that must be understood and embraced to enhance the educational experience, while others have significant concerns about data security and privacy risks, including unexpected surveillance. In the latter cases, ensuring transparency about what data is being collected and protecting against breaches is critical in addressing these concerns.
As AI-enabled tools become increasingly prevalent in education, it is essential that students and teachers alike learn how to incorporate these technologies into the learning process while using them appropriately. There are valid concerns about the potential for these tools to widen gaps in fairness, access, and learning. However, when used as a supplement to a student's own thinking and a teacher's workload, AI-enabled tools can help to broaden understanding, lower educational barriers, and stimulate engagement in online learning. Intent and context are key.
Or, as Chat GPT states if you ask if it’s a help or hinderance to online education:
“It's worth noting that while I can provide information and guidance, I cannot replace the expertise and guidance of a human teacher or mentor. It's important to use my responses as a starting point and to engage in further research and critical thinking to fully understand and apply the information presented. Ultimately, I believe that I can be a valuable tool in the pursuit of education, but it's up to the user to determine how best to utilize that tool.”
If you want to explore how AI technologies like these can be used to transform online learning, check out Capitol Tech’s programs in Computer Science, Artificial Intelligence, and Data Science, and contact [email protected].
| 2023-03-16T00:00:00 |
https://www.captechu.edu/blog/role-of-artificial-intelligence-online-learning
|
[
{
"date": "2023/03/16",
"position": 6,
"query": "artificial intelligence education"
}
] |
|
The Impact of Robotics and Automation on Jobs: Opportunities and ...
|
The Impact of Robotics and Automation on Jobs: Opportunities and Challenges
|
https://www.linkedin.com
|
[
"Elena Dobreva",
"Teach At Foxborough Regional Charter School At Foxborough Regional Charter School",
"Aqsa Khan",
"Graphic Designer Ui Ux Designer At Tanbits",
"Illustrator",
"Graphic Designs",
"Logo Design",
"Web Designer",
"After Effects",
"Video Editing"
] |
Job Displacement. The introduction of robots and automation systems has led to the displacement of jobs traditionally performed by humans.
|
The introduction of robotics and automation has revolutionized the manufacturing and production industry. With advancements in technology, robots have become increasingly sophisticated and versatile, and their use has expanded to various other industries. While automation has provided several benefits, including improved efficiency, reduced costs, and increased productivity, its impact on jobs has been a source of concern for many. In this blog post, we will discuss the opportunities and challenges of robotics and automation in the job market.
Introduction
The introduction of robotics and automation has brought about significant changes in the job market. While it has created new job opportunities, it has also led to the displacement of several jobs. The use of robots and automation has the potential to impact several industries, including manufacturing, healthcare, agriculture, and logistics.
Opportunities
Increased Productivity
Robots and automation can perform tasks more efficiently and accurately than humans, resulting in increased productivity. Robots can work non-stop without breaks, reducing the need for human workers to work long hours. Automation can also streamline processes, resulting in faster production times and reduced lead times.
Safer Working Environments
Robots can perform dangerous and repetitive tasks that are hazardous to human workers, leading to a safer working environment. For instance, robots can work in environments that are too hot, too cold, or too toxic for humans, minimizing the risk of accidents and injuries.
Creation of New Jobs
While automation has led to job displacement in some industries, it has created new job opportunities in others. The development, production, and maintenance of robots and automation systems require skilled workers, leading to the creation of new jobs.
Challenges
Job Displacement
The introduction of robots and automation systems has led to the displacement of jobs traditionally performed by humans. Robots can perform tasks more efficiently and accurately than humans, leading to a reduction in the number of workers required to perform those tasks.
Need for Reskilling
The introduction of robots and automation systems has created a need for reskilling and upskilling of the workforce. The workers displaced by automation may not have the skills required for the new job opportunities created by automation. Therefore, the workforce needs to be retrained to adapt to the changing job market.
Increased Cost of Implementation
The implementation of robotics and automation systems can be expensive, requiring significant investment in capital and technology. This can be a challenge for smaller businesses and companies that may not have the financial resources to invest in automation.
Future of Robotics and Automation in the Job Market
The use of robotics and automation systems is likely to continue to grow in the future, with new technologies and advancements in robotics leading to increased automation in various industries. While there may be job displacement in some industries, new job opportunities are also expected to emerge.
Collaboration between Humans and Robots
The future of the job market is likely to involve collaboration between humans and robots. Robots and automation systems will perform tasks that are repetitive and require precision, while humans will focus on tasks that require creativity, decision-making, and emotional intelligence.
Development of New Skills
The development of new technologies and advancements in robotics will require the development of new skills in the workforce. Workers will need to be trained in areas such as robotics engineering, data analysis, and programming, among others.
Increased Emphasis on Reskilling
The increasing use of robotics and automation systems will lead to a greater emphasis on reskilling and upskilling the workforce. Companies will need to invest in training programs to ensure that their workers have the necessary skills to adapt to the changing job market.
Conclusion
In conclusion, the impact of robotics and automation on the job market is a complex issue that requires careful consideration of the economic, social, and ethical implications. While the adoption of robotics and automation is expected to create new job opportunities and improve productivity, it also poses significant challenges such as job displacement and the need for upskilling and reskilling. To mitigate these challenges, policymakers, businesses, and workers must work together to ensure a smooth transition to the new world of work. This may involve investing in education and training programs, promoting entrepreneurship and innovation, and developing policies and regulations that protect workers' rights and ensure a fair distribution of the benefits of automation. By taking a proactive approach to addressing the opportunities and challenges of robotics and automation, we can build a more inclusive and prosperous future for all.
| 2023-03-17T00:00:00 |
https://www.linkedin.com/pulse/impact-robotics-automation-jobs-opportunities-challenges-tanbits
|
[
{
"date": "2023/03/17",
"position": 28,
"query": "robotics job displacement"
},
{
"date": "2023/03/17",
"position": 28,
"query": "robotics job displacement"
},
{
"date": "2023/03/17",
"position": 30,
"query": "robotics job displacement"
},
{
"date": "2023/03/17",
"position": 27,
"query": "robotics job displacement"
},
{
"date": "2023/03/17",
"position": 28,
"query": "robotics job displacement"
},
{
"date": "2023/03/17",
"position": 29,
"query": "robotics job displacement"
},
{
"date": "2023/03/17",
"position": 29,
"query": "robotics job displacement"
},
{
"date": "2023/03/17",
"position": 38,
"query": "robotics job displacement"
},
{
"date": "2023/03/17",
"position": 28,
"query": "robotics job displacement"
},
{
"date": "2023/03/17",
"position": 27,
"query": "robotics job displacement"
},
{
"date": "2023/03/17",
"position": 27,
"query": "robotics job displacement"
},
{
"date": "2023/03/17",
"position": 29,
"query": "robotics job displacement"
},
{
"date": "2023/03/17",
"position": 31,
"query": "robotics job displacement"
},
{
"date": "2023/03/17",
"position": 37,
"query": "robotics job displacement"
}
] |
|
5 Predictions on How AI and Automation Will Impact HR in ...
|
5 Predictions on How AI and Automation Will Impact HR in 2023 and Beyond
|
https://hirebee.ai
|
[
"Hirebee.Ai Team"
] |
In the future, companies will integrate AI into every aspect of human resource management, resulting in a great deal of change for both employees and employers.
|
There is a rapid change in the HR technology landscape. In the future, companies will integrate AI into every aspect of human resource management, resulting in a great deal of change for both employees and employers. In order to understand how AI and automation will affect the world of human resources, we asked industry experts for their predictions. Josh Bersin predicts that the year 2023 will be one of transition as work, the workforce, and human resources are redefined.
Below is a list of Josh Bersin’s predictions that we believe are important.
A new, multifaceted workforce—diverse, aging, and scarce—will emerge
As demographic trends shape the future of companies, the workforce is undergoing a transformation. The three most important trends are diversity, longevity, and scarcity. As the workforce becomes more diverse in terms of racial, ethnic, gender, age, language, and cognitive abilities, diversity is increasing. As diverse teams outperform their peers in terms of ideas, skills, and understanding of customer needs, corporations must understand the business need for diversity. As people live longer, the definition of careers, work, and jobs must change in order to keep up with the aging workforce. All generations of employees, including older workers, must be accommodated by organizations, and age discrimination must be addressed. In addition to the scarcity of workers, developed economies are experiencing a demographic drought as the size of their workforce shrinks. The only way for economies to expand is through immigration, with countries with low immigration rates facing difficulties. In order to meet the demands of a changing workforce, companies must build diverse organizations, accommodate the aging workforce, and address the shortage of skilled workers.
At Hirebee, we’re dedicated to equipping you with the necessary resources for your hiring needs, no matter where you are in the world. We offer seamless integrations with both free and premium job boards worldwide, as well as access to a vast candidate database boasting millions of active job seekers. With our advanced tools, you can easily source and hire candidates from diverse backgrounds.
Jobs and careers will be redefined by the convergence of industries.
Every industry is moving beyond digital transformation to new adjacent industries and business models, which results in the need for new skills, job titles, and organizational structures. According to the Global Workforce Intelligence (GWI) Project, high-performing or advanced companies in healthcare and banking have different job titles, skill profiles, employment models, and job families than their peers. To be successful in this new era of work, companies must adapt their job architecture and structure. It has become increasingly difficult for companies to hire their way to grow as the war for talent has become fiercer. As a result of a shortage of high-demand skills, companies must simultaneously recruit, retain, and redesign their workforces.
Every company will get serious and pragmatic about skills.
Businesses are increasingly focusing on skills-based models rather than traditional competency models. Companies are becoming flatter, more team-based, and more focused on internal mobility because of the changing nature of work. There are now multiple roles and projects that employees are responsible for, each requiring different capabilities and skills from the employee. In the past, competency models were selected from a book and matched to specific jobs. The rapid pace of technological advancements and changes in the workforce, however, has resulted in an ever-changing match between skills and jobs.
As a result, companies are paying for skills and encouraging employees to develop in-demand skills in order to remain competitive. As the technology market consolidates, skills-based management will become more prevalent in 2023. It is recommended that companies focus on capabilities first, and specific skills second, and assign owners to each skill cluster. Many systems collect and utilize skills data, including human capital management platforms, recruitment tools, talent marketplace tools, learning and development applications, and employee experience applications. The lack of a single location for skills data, however, means that companies must determine which vendor’s skills database to use.
At Hirebee, we recognize that skill-focused hiring is crucial to success. That’s why we’ve built our platform to prioritize the skills required for each job. Our job description structure, internal search, and candidate matching all center around skill sets. Ready to see the Hirebee difference for yourself? Sign up for 14-day free trial today!
Employee experience will be put to the test by hybrid work.
Companies are rethinking their workplace and HR strategies in response to the trend toward hybrid work, with a combination of remote and in-office work. There are three key areas being addressed: collaboration and workspace tools, team management and HR technology, and culture. Companies are refining their collaboration and workspace tools in order to accommodate part-time employees and hotel workers. Teams engaged in hybrid work require agile models for teamwork, performance management, alignment, and multi-functionality.
At Hirebee, we go above and beyond to support employers in managing remote work and improving the employee experience. That’s why we offer all-inclusive tools for internal collaboration, ensuring your hiring teams can work seamlessly across different time zones. With our advanced collaboration tools, you can streamline your workflow and boost productivity. Don’t let distance hinder your team’s success. Click here to learn more about our internal collaboration tools.
Organizations will move beyond employee experience and focus on “people sustainability.”
The focus of HR departments over the last decade has shifted to employee experience and well-being, resulting in a $51 billion market for corporate well-being. These include industries such as online coaching, on-demand counseling, fitness and exercise apps, as well as AI-enabled training and support. Employees’ financial well-being has also become a growing concern for companies. In addition, new social sustainability issues have arisen, such as nondiscrimination, child protection, and employee rights. These issues all fall under the umbrella of “people sustainability”, a new concept that looks at employees as a core asset and infrastructure in the company, rather than just a source of innovation and growth. This trend is expected to continue in 2023, as companies consider employee well-being and sustainability as long-term investments for the future.
In conclusion, the HR industry is expected to undergo significant changes in the next three years. The adoption of technology and artificial intelligence, the emphasis on workplace wellness and employee engagement, and the changing attitudes toward diversity, equity, and inclusion are some of the trends that are expected to shape the future of HR. As organizations prepare for the future of work, HR professionals will play a critical role in ensuring that employees are well-equipped to succeed in the new work environment. Therefore, it is essential for HR professionals to stay informed and adapt to the changing trends in order to drive positive outcomes for their organizations.
At Hirebee, we understand that talent acquisition is the foundation of Human Resources. That’s why we’re committed to providing all the necessary tools, integrations, and resources for a more diverse, productive, people-centric, and sustainable recruitment process. Elevate and modernize your hiring with Hirebee. Sign up for a free trial today and experience the difference for yourself.
| 2023-07-14T00:00:00 |
2023/07/14
|
https://hirebee.ai/blog/5-predictions-on-how-ai-and-automation-will-impact-hr-in-2023-and-beyond/
|
[
{
"date": "2023/03/17",
"position": 55,
"query": "artificial intelligence employment"
},
{
"date": "2023/03/17",
"position": 12,
"query": "AI labor market trends"
},
{
"date": "2023/03/17",
"position": 28,
"query": "artificial intelligence workers"
},
{
"date": "2023/03/17",
"position": 80,
"query": "artificial intelligence hiring"
}
] |
3 Questions: How automation and good jobs can co-exist
|
3 Questions: How automation and good jobs can co-exist
|
https://news.mit.edu
|
[
"Peter Dizikes"
] |
“Positive-sum automation” in manufacturing, in which robots and automation co-exist with worker-driven input, rather than wipe out workers.
|
In 2018, MIT convened its Task Force on the Work of the Future, which concluded in a 2020 report that while new technologies were not necessarily going to massively wipe out employment, smart practices and policies would be necessary to let automation complement good jobs. Today a successor group is continuing the task force’s effort: The Work of the Future Initiative, whose co-directors are Julie Shah, the H.N. Slater Professor of Aeronautics and Astronautics at MIT, and Ben Armstrong, executive director and research scientist at MIT’s Industrial Performance Center.
The Work of the Future Initiative is conducting research onsite at manufacturing firms and generating collaborative work on campus. Meanwhile, in a recent Harvard Business Review article, Shah and Armstrong outlined their vision of “positive-sum automation” in manufacturing, in which robots and automation co-exist with worker-driven input, rather than wipe out workers. They spoke with MIT News about their ideas.
Q: Let’s start with your perspective about how technologies and workers can complement each other. What is “positive-sum automation,” this core idea of the Work of the Future Initiative?
Ben Armstrong: One thing Julie and I both noticed from visiting factories and studying manufacturers, and that Julie noticed from her work developing robotics technologies, is the tradeoff between productivity advances, which is often the goal of automation, and flexibility. When firms become more productive in repetitive processes, they often lose flexibility. It becomes costlier to change production processes, or make adjustments for workers, even on the level of ergonomics. In short, “zero-sum automation” is a tradeoff, while “positive-sum automation” is using different technology design and strategy to get both productivity and flexibility.
This isn’t just important for firm performance, but for workers. A lot of firms adopting robots actually hire more workers. It’s an open question whether those jobs become better. So, by promoting flexibility as part of the automation process, that can be better for workers, including more worker input.
Julie Shah: I develop AI-enabled robots and have worked for much of my career in manufacturing, trying to cut against this paradigm where you make a choice between either a human doing the job or a robot doing the job, which is by definition zero-sum. It requires a very intentional effort in shaping the technology to make flexible sytems that improve productivity.
Q: How often do firms not realize that automation can lead to this kind of tradeoff?
Shah: The mistake is nearly ubiquitous. But as we toured firms for our research, we saw the ones that are successful at adopting and scaling the use of the robots have a very different mindset. The traditional way you think of labor displacement is, if I put this robot in, I take this person out. We were just in a factory where a worker is overseeing multiple robots, and he said, “Because my job got easier, I can now timeshare between multiple machines, and instead of being crazy busy, I can spend 20 percent of my time thinking about how to improve all of this.” The learning curve in the factory is driven by people and their ability to innovate.
Armstrong: It’s sometimes hard to measure the impact of a technology before it’s deployed. You don’t really know what hidden costs or benefits might emerge. Workers spending time more creatively on problems becomes a downstream benefit. In health care, for instance, automating administrative tasks might meet resistance, but in our interviews, workers talked about how they could now focus on the most interesting parts of their jobs, so we see an outcome that’s good for workers and also potentially good for continuous improvement at these firms.
The focus of the [Harvard Business Review] piece was hardware technologies, but firms can be very creative in how they connect their front-office software used to sell their product with the software that controls their machines. Another piece I’ve been interested in is logistics and warehousing, which in some ways has seen far greater advances in robotics and automation, and where there’s a lot of potential to improve job quality for people.
Q: In its current incarnation, what does the Work of the Future Initiative consist of?
Shah: The Work of the Future Initiative has what we call an “automation clinic,” where we bring researchers and students out to firms in manufacturing, to look at how companies might break out of their zero-sum choices and to showcase those success stories. But the initiative is broader than that. There are seed research efforts and other ways we engage faculty and research across the Institute.
Armstrong: We’re developing an open library of case studies, and we’re always looking for new places to visit and new industry partners to learn from. And we are looking for more structured opportunities for campus discussions. The Work of the Future Initiative is not a closed community, and we would very much like to reach out to people at MIT. It’s exciting and challenging to have people who run a robotics lab working with social scientists. It happens at MIT but might not happen at other places. We’re trying to spur more collaborations among people who look at the same questions in different ways.
Shah: When the Work of the Future task force started in 2018, there were billboards on I-90 telling people they’d better retire now [due to robots]. But what’s happening is much more nuanced. There are all these different possible futures as you deploy these technologies. It’s a large and long-term research agenda to ask about the organizational decisions that produce positives outcomes for firms and workers. That’s very motivating, I think, for people doing the engineering work, and involves broad engagement, and that’s what we’re aiming for.
| 2023-03-14T00:00:00 |
2023/03/14
|
https://news.mit.edu/2023/3-questions-automation-and-good-jobs-can-co-exist-0317
|
[
{
"date": "2023/03/17",
"position": 21,
"query": "automation job displacement"
}
] |
How many jobs need to be sacrificed to tame inflation?
|
How many jobs need to be sacrificed to tame inflation? The cost is rising
|
https://rsmus.com
|
[] |
Under our base case, a loss of 2.5 million jobs is consistent with a 5.1% unemployment rate, 3% inflation and a likely recession under current economic ...
|
Our research indicates that the Fed, facing persistent inflation, will for now have to accept a de facto inflation target of 3% to avoid the destruction of millions of jobs that would accompany achieving a 2% target.
This means a difference of somewhere between 1.3 million and 7.3 million jobs lost, depending on how aggressively the Fed ends up fighting inflation, according to our research.
Under our base case, a loss of 2.5 million jobs is consistent with a 5.1% unemployment rate, 3% inflation and a likely recession under current economic conditions.
To achieve this, though, the Federal Reserve faces a difficult proposition: how to achieve price stability while maintaining financial stability during a modest financial panic and bank crisis.
Both the European Central Bank and the Federal Reserve have provided prodigious injections of liquidity recently to stem bank runs and financial panics. If anything, those efforts are in their initial stages and more will most likely be needed as financial institutions absorb interest rate hikes.
And the central banks still face elevated inflation. The European Central Bank recently hiked its policy rate by 50 basis points to 3.5%, despite a rescue of Credit Suisse and a warning by the ECB to the European Union that other euro-area banks may be at risk.
The Federal Reserve followed that with a quarter-point rate hike at its March meeting.
With both inflation and unemployment rates remaining sticky, we believe it is important to update our estimates ahead of the next set of forecasts released by the Federal Reserve.
While the Fed hiked its policy rate by 25 basis points to a range of 4.75% to 5%, it is important to revisit the price stability side of the equation and update our estimate of the sacrifice ratio, or the slope of our augmented Phillips curve, to identify the trade-off between inflation and unemployment.
| 2023-03-17T00:00:00 |
https://rsmus.com/insights/economics/how-many-jobs-need-to-be-sacrificed-to-tame-inflation.html
|
[
{
"date": "2023/03/17",
"position": 69,
"query": "AI unemployment rate"
}
] |
|
Industrial Automation Engineering for Smart Manufacturing
|
Industrial Automation Engineering: The Key to Smart Manufacturing
|
https://binmile.com
|
[
"Binmile Technologies",
"Sunit Agrawal",
"Avp - Technology"
] |
60% of all occupations could have 30% of automation activities. Industrial Automation Engineering: What It Is? Industrial automation engineering can be defined ...
|
The manufacturing industry is witnessing the entry of automation control systems as a technological attempt to boost the accuracy and performance of industrial processes. This underlines the virtue of industrial automation engineering as one of the most fundamental growth drivers for companies to stay ahead of the curve and achieve long-term profitable growth of their business.
Market Overview of Industrial Automation Engineering
The market size of industrial automation worldwide reached nearly $175 billion in 2020 and is projected to grow by around 9% until 2025. (Source: Statista)
The global market size of industrial automation and control systems was valued at USD 172.26 billion in 2022 and is expected to grow at a CAGR of nearly 10.5% in the forecast period between 2023 to 2030. (Source: Grand View Research)
60% of all occupations could have 30% of automation activities.
Industrial Automation Engineering: What It Is?
Industrial automation engineering can be defined as a discipline incorporating designing, implementing, and maintaining automated systems in an industrial plant. It employs computers and software for controlling machines and procedures along with designing their architectures. It relies on the involvement of different departments, including chemical, machinery, electronics, electricity, computer, and software engineering. It carries a heavy load in its quality of work, embracing the rapid pace of technological transitions, and reducing the need for manual and mental efforts while improving simultaneous performance.
Most organizations worldwide depend on industrial automation engineers and make use of custom software development services to design and implement systems capable of automating tasks in replacement of manual work previously done by humans. The early adopters of automation engineering techniques have gone through successful results in terms of making their internal business processes more efficient, reliable, and cost-effectively productive.
Types of Industrial Automation Enginerring System for Smart Manufaturing
1. Fixed Automation
Fixed or hard automation generally carries out a single task or process and so is an ideal recommendation for repetitive and well-defined tasks. This type of automation system generally involves a process dictated by programmed commands and is mostly used in manufacturing settings with equipment set for particular functions and unchangeable operations.
To say otherwise, fixed automation is used in manufacturing settings where the use of equipment is fixed for particular functions and there are virtually no changes expected in the operations. As it turns out, accommodating changes in a product design in a fixed automation process is difficult. Benefits of the system include low unit cost and high production rates, whereas inflexibility in accommodating product variety, obsolescence, and being prone to technical fallibility are some of its downsides.
2. Programmable Automation
This type of automation system finds its application when there is a requirement for manufacturing products in batches. It is designed to alter the sequence of operations or facilitate changeable operation sequences and machine configuration by applying electronic controls throughout the manufacturing process. To control the operation, a set of programmed instructions, readable and interpretable by the system is applied. Also required in this form of automation system is a non-trivial programming effort when it comes to reprograming sequence and machine operations.
Remember that the system results in greater productivity in the long run due to being a less expensive automation solution in medium-to-high product volume settings. It is also cost-effectively viable in mass production settings, such as steel rolling mills and paper mills. These systems are flexible, suitable for batch production, and can deal with design variations. However, they yield a lower production rate compared to the fixed automation system and usually involve high investment in general-purpose equipment.
3. Flexible Automation
The utilization of a flexible automation system is mostly noted in computer-controlled flexible manufacturing systems, resulting in rapid and smooth transitions in products and processes. Implementation of a flexible automation system where the product varies more often than not involves human operators, who instruct the computer in coded commands to identify products and their location in the system’s sequence to prompt automatic micro-level changes.
The instructions given by the operator trigger the machine to acquire the necessary tools and equipment to execute the production. Flexible automation systems are used in situations involving batch processes and job shops with high product varieties and low-to-medium job volume. A flexible automation system enables companies to produce a variable mixture of products, offer a medium production rate, and are flexible to deal with product design variation. However, it also requires heavy investment and results in high unit costs compared to fixed automation systems.
4. Integrated Automation
Integrated automation systems consist of two or more types of automation or involve a set of independent machines, processes, and data working in sync with the command of an individual control system. The custom solution created by the system is used to manage and streamline the application of tools and processes to drive better results without involving much human effort. Also, the integrated automation system is used in complex manufacturing settings requiring coordination of multiple processes, and in large factories with expensive machines to improve productivity.
In this form of system, a common database can be used to integrate a business system, supporting the full integration of management operations and processes. Application of this system should be determined based on your business’ labor conditions, competitive pressure, and work requirements, as well as the cost of labor.
Also Read: AI in Manufacturing
Top 8 Benefits of Industrial Automation Engineering
Better Productivity: Industrial automation solutions have resulted in continuous mass production today, enabling the manufacturing industry to run 24/7 with nominal downtime. Optimized Costs: One of the most important upsides of implementing industrial automation solutions is driving a substantial reduction in costs. Emerging technological advancements in robotics, smart machinery, and AI systems have largely attributed to significantly reducing production costs, helping the manufacturing industry enhance the value of business assets and push forward toward more profitable growth. Improved Quality: By eliminating human errors and driving greater consistency leading to improved quality of the products, the automation solution is vindicating its viability and efficacy for industrial application. Some tasks that are humanly impossible to be done perfectly can be achieved through automation, including the creation of more complex items, such as electronic goods and pharmaceuticals. Reliable Safety: Industrial automation solutions carry the efficacy of improving workplace safety and employee protection, along with reducing human errors and deploying robots and machines to lead reduction in workplace accidents and injuries. Better Flexibility: Automation solutions drive more flexibility to the industrial processes and machinery, thereby allowing businesses to adapt strongly to the evolving market demands. More Added Value: Automation results in added value by relieving employees of working on mundane and repetitive tasks, thus allowing them to focus on more creative-demanding tasks. Enhanced Data Support: Automation allows for automated data collection, particularly during The Fourth Industrial Revolution (or Industry 4.0). With real-time collection and analysis of segmented data, it allows companies to experience enhanced data support and improved product traceability, and to continuously optimize work processes. Real-time Monitoring and Maintenance: Industrial automation systems pave real-time monitoring of all the key functions using highly sensitive sensors in cutting-edge industrial machinery to detect and address issues and errors in production processes.
Boost manufacturing efficiency with tailored automation solutions. Connect with experts to optimize your processes today! Thanks for contacting us. We'll get back to you shortly.
Potential of Industrial Automation Engineering for Maximizing Manufacturing Business: 4 Key Tips
If you are planning to introduce automation solutions to your business, one of the most crucial things to follow is to deploy the right team to help you walk through opportunities and navigate the challenges that come with them. Here is a quick rundown on leveraging the team of automation specialists.
1. A Solution Roadmap To The Problems
Before using an automation solution, make a list of the problems you want to solve. This means having a clear vision of your goals and objectives since the initial phase of automating anything, to develop an effective solution. Assistance from an industrial automation engineer will be crucial in spotting the loopholes in your existing processes and their relative impacts on your business. This way, you will get a solution roadmap from the engineer to the problems you want to solve through automation.
2. A Rundown On The Business Case
Once you have understood the areas for improvement, the automation engineer will help you understand the prospective advantages you will gain by implementing automation solutions in the processes. Benefits like decreased labor costs, profitable growth, and quality control are some pointers to consider when it comes to evaluating the business case for automation.
3. Assistance With The Right Technology
When it comes to choosing the right technology, the options are very, ranging from programmable logic controllers to industrial robots and data handling. Since you don’t want to court problems in choosing the unwanted technology for automating processes, it makes sense to pay heed to the automation engineer, helping you choose the right technology best suited to your business needs.
4. Implement The Solution
After choosing the right technology, work with the team of automation specialists to get the most out of your invested money. The experience and expertise of the engineers will come in handy for you to be able to implement automation solutions to smoothen the process and maximize the benefits.
Ready to optimize your manufacturing? Reach out to us for tailored software solutions to streamline your processes and increase operational efficiency. Thanks for contacting us. We'll get back to you shortly.
Conclusion
The automation control system witnessed by the manufacturing industry is a technological attempt to increase the accuracy and performance of industrial processes, drive improved quality, productivity, and optimized costs for businesses. Companies looking to automate processes are advised to work with a team of experienced automation specialists. This will result in working with experts to understand various stages of implementation strategies for an effective automation solution. Work with an experienced company offering manufacturing custom software solutions to help you automate entire business processes for greater performance efficiency.
| 2023-03-17T00:00:00 |
2023/03/17
|
https://binmile.com/blog/industrial-automation-engineering-does-smarter-manufacturing/
|
[
{
"date": "2023/03/17",
"position": 99,
"query": "job automation statistics"
}
] |
Eliminate Manual And Labor-Intensive Processes
|
Eliminate Manual And Labor-Intensive Processes
|
https://www.hrfuture.net
|
[] |
With automation, recruiters and applicants can view the available interview slots and book accordingly, making the process easier and faster. Creating job ...
|
As technology advances, businesses are increasingly turning towards automation to eliminate manual and labor-intensive processes. Over 75 percent of HR teams expect to increase their investment in workflow automation technologies by 2024.
They are confident that it will help reduce the amount of time spent on manual paperwork, which currently takes up to 5 to 12 hours per week.
Digital transformation has paved the way for an efficient workflow that can help automate both simple and complex tasks and free up resources – specifically in Human Resources (HR) teams. With a focus on talent acquisition, recruitment, and onboarding, HR teams are finding ways to maximize their output by leveraging automation.
Let’s explore how HR teams can leverage workflow automation to optimize their hiring process.
Workflow automation for HR teams
Every department in an organization leverages automation, but one area that’s seeing especially great success is Human Resources (HR). With technology integrated into many aspects of recruiting and employee relations, organizations no longer need to dedicate time or money to manual hiring processes.
Workflow automation allows recruiters to create job postings easily, post them on job search sites, screen resumes from qualified candidates quickly, schedule interviews efficiently, and send feedback notifications to different stakeholders, among other activities.
Five scenarios – when and where can workflow automation come in handy?
Tracking applicant information: HR departments can easily track applicants throughout the hiring process using automated workflows. They can keep tabs on each candidate easier and ensure no applications are overlooked or lost. Scheduling interviews: Interview scheduling is traditionally time-consuming as emails are sent back and forth between recruiters. Applicants are monitored continuously until an availability slot is identified and finalized. With automation, recruiters and applicants can view the available interview slots and book accordingly, making the process easier and faster. Creating job advertisements: Advanced recruitment platforms provide templates and guidelines for creating effective job postings that attract the best talent out there. Filling out information about job roles followed by posting ads on relevant external websites could be done with a few clicks, ensuring accuracy across channels and drastically reducing human error/manual effort. Applicant tracking system (ATS): ATS has become a major recruitment component in large organizations receiving thousands of monthly applications.
ATS allows companies to track the progress of each applicant through the entire process, from resume submission to final steps like pre-employment tests or background checks remaining completed without fail – helping companies stick to timelines while maintaining the highest level of accuracy throughout the journey.
Onboarding new employees: Often, new employees may feel overwhelmed during onboarding, including completing paperwork, email setup, access control management, etc. Leveraging workflow automation here again helps reduce manual efforts allowing organizations onboard new starters faster, thus resulting in better productivity from day one! Off-boarding: Finally, workflow automation can also be useful when off-boarding employees from a company. It can help with everything from collecting the required paperwork to returning company property and ensuring a smooth transition.
Benefits of leveraging automation for HR teams
There are many advantages of leveraging workflow automation for HR teams. It helps streamline operations, ensures all necessary steps are taken during each process, eliminates errors, and saves time spent on manual tasks.
Improved Efficiency: Workflow automation can optimize hiring processes by removing manual tasks, saving time and effort for HR teams. As a result, companies can expedite their hiring process and increase their chances of hiring the best candidates before they get recruited by competitors. Enhanced Candidate Experience: Automating workflows has the potential to offer candidates a smoother and more tailored experience, keeping them informed promptly and minimizing the need for manual check-ins. As a result, this can foster a favorable perception of the organization and increase the level of engagement among the pool of candidates. Better Data Management: The ability of automated workflows to gather and organize candidate data, including resumes, interview feedback, and assessments, in a centralized location can assist HR teams in making more informed hiring decisions and ensuring compliance with data privacy laws. Increased Collaboration: Creating workflows online or on other words digitizing the workflows, can promote better collaboration among HR team members and hiring managers, fostering more effective communication and feedback sharing. As a result, everyone involved can be better aligned and on the same page throughout the hiring process. Cost Savings: Workflow automation can reduce hiring costs by diminishing the time and effort expended on manual tasks. These savings can encompass various expenses, such as recruitment fees, advertising costs, and HR staff time.
Conclusion
Workflow automation is an invaluable tool for HR teams and holds a special place today’s digital transformation framework . By leveraging this technology, HR teams can streamline their operations and ensure that all processes within the organization run smoothly and efficiently.
With so many benefits, it’s no wonder that more organizations are turning towards workflow automation for optimal hiring experiences.
HR Future Staff Writer
| 2023-03-17T00:00:00 |
2023/03/17
|
https://www.hrfuture.net/talent-management/technology/eliminate-manual-and-labor-intensive-processes/
|
[
{
"date": "2023/03/17",
"position": 78,
"query": "AI job creation vs elimination"
}
] |
Addressing Gender Bias to Achieve Ethical AI
|
Addressing Gender Bias to Achieve Ethical AI
|
https://theglobalobservatory.org
|
[
"Administrator G.O.",
"Ardra Manasi",
"Subadra Panchanadeswaran",
"Emily Sours"
] |
... and mathematics) education and careers has been pointed out. Studies show ... For AI to be ethical and be a vehicle for the common good, it needs to eliminate any ...
|
The sixty-seventh session of the Commission on the Status of Women (CSW67) is currently taking place at the cusp of many intertwining realities—a world still reeling under the adverse impact of the COVID-19 pandemic, worsening climate change crisis, rising inflation, emerging authoritarianism, armed conflicts. Meanwhile, new and fast-moving technological advancements in Artificial Intelligence (AI)—such as ChatGPT—are expected to transform many aspects of our lives and livelihoods, with consequences that are hard to predict. Amid this confluence of forces, some old and stubborn realities persist. According to the World Economic Forum (WEF), it will take another 132 years to achieve gender equality on a global scale. The COVID-19 pandemic, along with other concurrent shocks, has reversed the gains for gender equality from the previous years.
In the context of CSW67’s priority theme of “innovation and technological change,” closing the gender gap seems even more pertinent, owing to the inherent gender inequities in the technology landscape. For the past 20 years, a substantive gender gap relating to women’s participation in STEM (science, technology, engineering, and mathematics) education and careers has been pointed out. Studies show that women are still largely underrepresented in fields such as computing, digital information technology, engineering, mathematics, and physics. Labor market economists attribute this to the disproportionate burden that women bear in terms of differences in human capital, domestic responsibilities, and employment-related discrimination.
The field of AI is no different. According to 2019 estimates from UNESCO, only 12 percent of AI researchers are women, and they “represent only six percent of software developers and are 13 times less likely to file an ICT (information, communication, and technology) patent than men.” These facts lead one to pose a natural question: how does this gap in representation manifest in the very technologies that are built?
Understanding Bias in AI
While there are different uses of artificial intelligence, such as scanning imaging for signs of cancer, the most commonly known are through AI devices that are ubiquitous in homes and workplaces. Through increased digitization, the COVID-19 pandemic has further contributed to their all-pervading nature. Due to COVID-19, 55 percent of companies have accelerated their AI adoption plans to address the skills deficiencies in various industries. However, the impact of such developments on women and their labor force participation, among other considerations, is yet to be carefully studied and documented.
Gender bias in AI can occur at various stages: in the algorithm’s development process; in the training of datasets; and in AI-generated decision-making. AI applications are run via algorithms, which are a set of instructions for problem solving. Computationally, this process involves transforming input data into output data. To this end, the kind of data that gets inputted can directly influence the subsequent decision-making in algorithms. Thus, if the data originally contain certain biases, this can get replicated by the algorithms, which in turn when used for prolonged periods of time can reinforce these biases in decision-making. To this end, the subjective choices made during the selection, collection, or preparation of data sets can play a role in determining potential biases. This is evident in many branches of AI such as Natural Language Processing (NLP), where “word embeddings” can lead to linguistic biases resulting from sexism, racism, or ableism. For example, while selecting top candidates during a hiring process, Amazon’s automated resume screening system discriminated against women. The data used to train the recruitment model was informed by resume samples from a 10-year period, where women were underrepresented. The resume screening model thus used “linguistic signals” associated with successful male candidates. Once the bias was discovered, Amazon discarded this model.
Feminization and Domestication of AI
Past research shows how gendered divisions are naturalized and reproduced through technology. To begin with, technology often gets equated with “men’s power,” while women and girls are portrayed as less technologically skilled and less interested than their male counterparts. Such stereotypes can contribute to the gender gap in women’s participation in related fields.
The tendency to feminize AI tools mimics, and reinforces, the structural hierarchies and stereotypes in society, which is premised on preassigned gender roles. The gendering of AI can occur in multiple ways—through voice, appearance, or the use of female names or pronouns. Home-based virtual assistants such as Amazon’s Alexa, Microsoft’s Cortana, and Apple’s Siri were given default feminine voices (Apple and Google have since offered alternatives aimed at “diversification” or “neutrality.”) As UNESCO points out, these devices were designed to have “submissive personalities” and stereotypically feminine attributes, such as being “helpful, intelligent, intuitive.” However, as evident from the case of IBM’s Watson, which used a masculine voice while working with physicians on cancer treatment, male voices have been preferred for tasks that involved teaching and instruction, as they were perceived to be “authoritarian and assertive.” Among these applications, Google Assistant is the only one that did not bear a “gendered name,” however its default voice is female.
Gender stereotypes and inequalities can be further reinforced through divisions in occupational roles taken up by these new technologies, as evident in the case of robots. “Male” robots have been deemed more appropriate for security-related jobs, whereas, when Japan launched its first hotel with robot receptionists in 2015 , most of these robots were “female.” And while much has been made of the threat that robots and, most recently, emerging AI chatbots present to employment, the sectors that have witnessed increased “robotization” in recent years are those dominated by women—and disproportionately impacted by the pandemic—such as hospitality and tourism, retail, healthcare, and education. In 2022, UNESCO published a report on the Effects of AI on the Working Lives of Wome n , which stated how AI is changing the landscape of work leading to new skill demands, and how women should not be left behind due to these advancements.
Domesticated and feminized forms of AI are also increasingly performing the “affective labor” that is conventionally expected of women. The gendered (and often invisible) work of “affective labor” involves producing, managing, or modifying emotions in others, and can comprise a range of activities such as “caring, listening, comforting, reassuring, [and] smiling.” Past research has shed light on how affective labor has been particularly associated with women of color and migrant domestic workers, where patterns of “domination and invisibility” exist in their relationship with respect to a white household. Home-based virtual assistants such as Amazon’s Alexa, Microsoft’s Cortana, and Apple’s Siri perform affective labor by managing both data and emotions. And voice interfaces are primarily designed to perform tasks such as “scheduling, reminding, making lists, seeking information, taking notes, making calls, and sending messages.” Moreover, as opposed to the lived realities of women, the fact that virtual assistants are unaffected by stress or other external factors turns them into a product of “fantasy.”
Similarly, the humanization of virtual assistants and robots can also allow for the dehumanization and objectification of women. For example, the humanoid robot “Sophia” was made to look “exceptionally attractive,” evoking a sense of “mechanico-eroticism.” Similarly, humanoid robots are often given “Asian or Caucasian features” and are “hypersexualized.” As a result, the various forms of gender-based violence (GBV) and harassment that women and girls face in both private and public spheres are also mirrored in how feminized forms of AI are treated. In a stark example, the developers of Alexa had to enable a “disengagement mode” as a result of the tool being subjected to verbal harassment within households.
Addressing Gender Bias in AI: The Way Forward
The adoption of AI is currently occurring at an unprecedented pace. In the absence of normative frameworks to guide the development and use of AI, breakthroughs like ChatGPT pose ethical concerns. In 2020, at the virtually held UNESCO’s Global Dialogue on Gender Equality and AI, participants observed that AI normative instruments or principles that successfully address gender equality as a standalone issue were “either inexistent or current practices were insufficient.” Though it is promising to see conversations at the UN on the need to develop a framework for ethical AI around its own AI use, the extent to which gender bias figures in these policy discussions is yet to be studied and explored.
To address these policy gaps, it is critical to identify where gender bias in AI shows up. As algorithms are heavily influenced by the data it uses, biases can stem from how data is collected, stored, and processed. Another possible source is whoever is writing the algorithms, and what guidelines the AI is following, as AI tends to reflect the inherent assumptions and prejudices of its software programmers. Since AIs featuring female characteristics are predominantly developed by men, they mirror their ideas about women, underscoring the need for increasing women’s participation in STEM education and careers. However, studies show how women in STEM careers face unique challenges. For example, globally, half of women scientists are subjected to sexual harassment in the workplace. Thus, alongside increasing women’s participation in STEM careers (including in AI), employers should develop support structures to address these challenges, including putting in place zero-tolerance policies for gender-based violence in the workplace and ways to monitor and enforce them.
There are promising practices on how AI can be used to address gender inequalities. For example, AI-powered gender decoders can inform gender-sensitive employment hiring processes. There are also examples of how developers are increasingly conscious of the gendered impact of AI, especially among young users. These kinds of successes can be achieved through a “human-centered AI” approach which is developed with the user in mind. In addition, “fundamental rights impact assessments” can be employed during the application of output from algorithms to identify biases, which result in discrimination around “gender, age, ethnic origin, religion and sexual or political orientation.” According to the European Union Agency for Fundamental Rights (FRA), algorithms should be audited in the form of “discrimination testing” or “situation testing in real-life situations” to eliminate any form of discrimination. In 2017, the European Parliament adopted a resolution that outlined a normative framework to address biases resulting from the use of technology, including the threat of discrimination resulting from the use of algorithms.
Ethical AI involves taking an intersectional approach when addressing questions around gender, race, ethnicity, socioeconomic status, and other determinants, in addition to adopting a human rights-based approach to AI governance premised on transparency, accountability, and human dignity. To this end, different stakeholders, including business and corporate entities, tech companies, academia, UN entities, civil society organizations, media, and other relevant actors should come together and explore joint solutions. The UN secretary-general’s proposal for a Global Digital Compact to be agreed at the Summit of the Future in September 2024 is a right step in this direction. The Global Digital Compact should also delve into potential gender biases perpetuated by AI and solutions to address this. For AI to be ethical and be a vehicle for the common good, it needs to eliminate any explicit and implicit biases, including on the gender front.
Ardra Manasi works as the Global Program Manager at The Center for Innovation in Worker Organization (CIWO) at Rutgers University. Dr. Subadra Panchanadeswaran is a Professor at the Adelphi University School of Social Work. Emily Sours works to advance the rights of women and girls, LGBTQIA+ persons, and marginalized groups.
| 2023-03-17T00:00:00 |
2023/03/17
|
https://theglobalobservatory.org/2023/03/gender-bias-ethical-artificial-intelligence/
|
[
{
"date": "2023/03/17",
"position": 81,
"query": "AI job creation vs elimination"
},
{
"date": "2023/03/17",
"position": 92,
"query": "AI regulation employment"
},
{
"date": "2023/03/17",
"position": 49,
"query": "AI labor union"
}
] |
10 Best and High-Paying AI Jobs for 2023
|
10 Best and High-Paying AI Jobs for 2023
|
https://community.nasscom.in
|
[] |
Artificial intelligence (AI) has created new opportunities in recent years. Industries are being affected, making previously unthinkable activities like ...
|
Artificial intelligence (AI) has created new opportunities in recent years. Industries are being affected, making previously unthinkable activities like space travel and melanoma diagnosis viable. As a result, there has also been a continuous rise in AI careers. LinkedIn said AI professionals would be among the 'jobs on the rise' in 2023. The 10 fantastic and well-paying AI jobs you can pursue in 2023 and beyond are discussed in this blog post. Also, do have a look at the online data science certification course if you want to start a career in data science and AI.
Is a Career in Artificial Intelligence a Good Fit?
Many AI careers are available, with hiring increasing by 32% in recent years.
A significant talent shortage exists because insufficient qualified applicants are submitting applications for open positions.
AI professionals get hefty salaries—well over $100,000.
Since the field of AI is constantly growing, there are many opportunities for career advancement.
AI careers are many; you might become a researcher, practitioner, consultant, independent contractor, or even create your own AI products.
What Is the Future of AI Jobs?
The future of AI employment is really bright. The US Department of Labor Statistics anticipates an 11% increase in computer science and information technology employment between 2019 and 2029. The industry will gain roughly 531,200 new employees as a result. It would seem that this is a conservative estimate. "AI and Machine Learning Specialists" are the second most in-demand profession according to the World Economic Forum.
As the industry matures, AI jobs will diversify, increase in quantity, and become more complicated. This will open up opportunities for experts, including novice and senior researchers, statisticians, practitioners, experimental scientists, etc. The future of moral AI is also brightening.
What AI Jobs Can You Pursue?
Artificial intelligence is a young and specialized field, yet many different careers exist. There are many different types of careers in AI, each requiring a unique set of qualifications. Let's examine each of the top ten in turn.
Machine Learning Engineer:
Data scientists and software engineers work together to create the field of machine learning engineering. Using big data technologies and programming frameworks, they produce scalable data science models capable of handling terabytes of real-time data and production-ready.
Qualifications suitable for work as a machine learning engineer include data science, applied research, and software engineering. Applicants for AI positions should have a strong mathematical foundation and be knowledgeable in deep learning, neural networks, cloud applications, and Java, Python, and Scala programming. Understanding IDE software development tools like Eclipse and IntelliJ is also beneficial.
Data Scientist:
Data scientists collect data, analyze it, and make judgments for a number of purposes. They utilize a variety of technical procedures, tools, and algorithms to draw information from data and identify significant patterns. This could entail anything as simple as spotting anomalies in time-series data or something more complicated like making predictions about the future, giving advice. The following qualifications are crucial for a data scientist:
For example, a graduate degree in statistics, computer science, or mathematics.
Statistical analysis and the unstructured data comprehension
Having familiarity with cloud-based solutions like Hadoop and Amazon S3
Proficiency with coding languages like SQL, Python, Perl, and Scala.
Working knowledge of MapReduce, Spark, Pig, Hadoop, and Hive.
Business Intelligence Developer:
BI developers look into complex internal and external data to find trends. For instance, a business that provides financial services could be a person who keeps track of stock market data to aid in investment selection. This might be someone who keeps an eye on sales patterns for a product company to help with distribution plans.
Business intelligence developers, unlike data analysts, do not construct reports. In order to use dashboards, business users are usually in charge of creating, modeling, and managing complex data on readily accessible cloud-based data platforms. Specific skills are expected of a BI developer:
Possessing expertise in SQL, data mining, and similar areas.
BI tool knowledge, such as Tableau, Power BI, etc.
Powerful technical and analytical abilities
Research scientist:
A research scientist is one of the AI occupations with the highest academic demands. They pose original, thought-provoking queries for AI to respond to. They are specialists in various fields related to artificial intelligence, such as statistics, machine learning, deep learning, and mathematics. Like data scientists, researchers also require a computer science doctorate.
Organizations looking to hire research scientists to anticipate them to be well-versed in these fields, as well as in graphical models, reinforcement learning, and natural language processing. Benchmarking expertise and familiarity with parallel computing, distributed computing, machine learning, and artificial intelligence are preferred.
Big Data Engineer/Architect:
Big data engineers and architects build ecosystems allowing efficient connectivity among many business verticals and technologies. Although big data engineers and architects are often entrusted with planning, creating, and developing big data environments on Hadoop and Spark systems, this profession may feel more complicated than that of a data scientist.
Most employers prefer professionals with a Ph.D. degree in mathematics, computer science, or a closely related field. Yet, because this is a more practical role than, for example, a research scientist, practical experience is generally regarded as a vital substitute for a lack of academic degrees. Big data engineers require programming knowledge in C++, Java, Python, or Scala. Also, they must learn about data migration, visualization, and mining.
Software Engineer:
For AI applications, software engineers create software. They combine development operations such as writing code, continuous integration, quality control, API management, and so on for AI employment. They design and administer the software that architects and data scientists utilize. They stay up to date on the latest advances in artificial intelligence technologies.
An AI software engineer must be proficient in software engineering and artificial intelligence. They must have programming capabilities in addition to statistical and analytical abilities. Employers frequently require a bachelor's degree in computer science, engineering, physics, mathematics, or statistics.
Software Architect:
Software architects develop and maintain technical standards, platforms, and tools. AI software architects do this for AI technology. They design and maintain the AI architecture, organize and carry out the solutions, select the tools, and ensure the data flow is seamless.
AI-driven organizations require software architects to have a bachelor's degree in computer science, information systems, or software engineering. Experience is just as important as knowledge in terms of actual application. You will be well-positioned if you have practical experience with cloud platforms, data operations, software development, statistical analysis, and so on.
Data Analyst:
A data analyst collects, cleans, processes, and analyzes data to draw conclusions. In the past, these were primarily routine, monotonous chores. The advent of AI has led to the automation of many routine tasks. As a result, a data analyst must be more familiar with data analytics than just spreadsheets. They must be knowledgeable about the following:
SQL and other database languages are used to extract and process data.
Python for analysis and cleaning
Dashboards for analytics and visualization software like Tableau and PowerBI
Using business intelligence to comprehend the market and organizational environment
Mechatronics Engineer:
When industrial robots initially gained prominence in the 1950s, robotics engineers were possibly among the first jobs in artificial intelligence. Robotics has come a long way, from manufacturing lines to teaching English. Robotic-assisted surgery is used in healthcare. Personal assistants are being produced using robotic humans. A robotics engineer can do all of this and more.
Robotics engineers design and maintain AI-powered robots. Organizations usually require graduate degrees in engineering, computer science, or a related field for these positions.
NLP Engineer:
Natural language processing (NLP) specialists are artificial intelligence (AI) engineers specializing in spoken and written human language. Engineers who work on voice assistants, speech recognition, document processing, and other projects employ NLP technology. For NLP engineering, organizations require a particular degree in computational linguistics. Companies may also be interested in hiring applicants with computer science, mathematics, or statistics backgrounds.
An NLP engineer would need knowledge of, among other things, sentiment analysis, n-grams, modeling, general statistical analysis, computer capabilities, data structures, modeling, and sentiment analysis. Prior knowledge of Python, ElasticSearch, web programming, and so on can be useful.
Can Someone Without Experience Break Into AI?
The vast majority of contemporary technology jobs are not in AI. Because artificial intelligence is a continuously evolving profession, experts in the area must constantly update themselves and keep up with new advances. AI/ML experts must keep up with the latest research and understand new algorithms on a regular basis; simply learning skills is no longer sufficient.
Furthermore, AI is under severe social and governmental scrutiny. AI professionals must
address the social, cultural, political, and economic consequences of AI and its technical components. The capacity to complete projects distinguishes an AI specialist in the real world. Experience is the only source for this
| 2023-03-17T00:00:00 |
https://community.nasscom.in/communities/data-science-ai-community/10-best-and-high-paying-ai-jobs-2023
|
[
{
"date": "2023/03/17",
"position": 85,
"query": "AI job creation vs elimination"
}
] |
|
Use of Artificial Intelligence (AI) in Performance Reviews
|
Use of Artificial Intelligence (AI) in Performance Reviews
|
https://oorwin.com
|
[] |
AI can prevent and remove biases from this, ensuring equality. AI provides equal opportunities, eliminating prejudices based on race, age, nationality, and ...
|
Use of AI In Performance Reviews
One of the industries that have been revolutionized by artificial intelligence (AI) is performance management. It is now used in the performance review process, which managers have traditionally carried out manually. Research conducted by McKinsey shows almost 50% of businesses today have implemented Artificial Intelligence in one of their operation sectors. AI in performance review has proved to be a game-changer, enabling organizations to automate the process, reduce errors, and provide real-time analysis.
Reasons for Using AI in Performance Review
Artificial Intelligence automates the performance management process by collecting data on employee performance from various sources, such as emails, calendars, and project management tools. The data is analyzed and processed to assess an employee’s performance objectively. AI algorithms are trained to identify patterns in the data and provide insights into areas where employees need improvement.
Streamlining Data Analysis: AI, in conjunction with data science, aids managers in cleaning, analyzing, and modeling employee data. It can extract performance-related information from various sources, such as calendars, emails, and project management tools, providing a comprehensive view of an employee’s performance.
Facilitating Objective Evaluations: AI tools are increasingly integrated with diverse data sources, enabling the incorporation of objective data into performance evaluations. Managers can leverage AI to compare employee performance against established metrics, ensuring fair and unbiased assessments.
Pinpointing Improvement Opportunities: AI algorithms detect patterns that affect employee productivity, retention, and satisfaction. By analyzing these patterns, managers can gain insights into potential areas for improvement and take proactive steps to enhance overall performance.
Delivering Customized Feedback: With real-time analysis capabilities, AI empowers managers to provide immediate and relevant employee feedback. This approach helps address issues promptly and fosters a more dynamic and responsive work environment.
How to Utilize Generative AI to Write Performance Reviews
Generative AI is transforming the landscape of performance reviews, offering a practical solution to the time-intensive nature of this process. In a landscape where managers traditionally spend upwards of 17 hours per employee on reviews, AI can be a significant time-saver. By generating specific phrases and suggestions, AI helps write feedback more effectively, ensuring each employee feels uniquely appreciated.
This technology speeds up the drafting of reviews, enriches vocabulary, and introduces variety in language, making communication more effective. It aids in setting tailored employee goals, suggests development plans, and prompts ideas for key competencies based on predefined criteria. AI also compiles and summarizes performance data collected over the year, providing a comprehensive view of an employee’s contributions and areas for growth.
Moreover, AI assists in developing personalized learning plans, aiding career progression, and helping employees articulate their achievements and goals. However, it’s crucial to remember that AI should be a tool for enhancement, not a replacement for human insight. While AI can offer a starting point, especially in overcoming writer’s block or structuring reviews, the human touch remains indispensable. Nowadays, even there are seemingly AI-powered ATS that help ease recruitment processes. Performance reviews require genuine, personalized interactions. AI should support and streamline this process, not replace the nuanced understanding and empathy human managers bring.
Suggested AI Prompts For Writing Performance Reviews
For AI to effectively assist in performance reviews, precise and detailed input is crucial. Managers should maintain an up-to-date list of each employee’s achievements and challenges, which can serve as a foundation for AI-generated insights.
Begin by assessing how employees could improve their teamwork, especially considering any specific feedback they’ve received over the years. Consider these prompts using popular AI tools like ChatGPT, Claude 2, and Bing Chat.
Request AI to suggest alternative, positive phrasing for a set of employee traits [list traits].
Use AI to scan comments about five employees [insert comments] for unconscious bias.
Have AI analyze an employee’s key achievements [list them], identify prominent themes, and assess which hard and soft skills are significant and which need development.
Life, for example-
Assessment of Overall Performance
Summarize [Employee Name]’s overall performance for the past year, highlighting key achievements and areas for improvement.
Teamwork and Collaboration
Evaluate [Employee Name]’s contributions to team projects and their effectiveness in collaborating with colleagues.
Skill Development
Analyze the development of [Employee Name]’s professional skills, including technical and soft skills, and suggest areas for further growth.
Goal Achievement
Review the goals set for [Employee Name] at the start of the year. Assess their success in meeting these goals and discuss any challenges encountered.
Feedback on Specific Projects
Provide detailed feedback on [Employee Name]’s role and performance in specific projects, such as [Project Name].”
These prompts guide AI to provide targeted and actionable feedback, which managers can further refine and personalize. This approach ensures that the AI’s role is to augment and streamline the review process, not to replace the essential human element of personal understanding and connection.
Benefits of Using AI for Performance Reviews
Some of the most significant benefits of performance reviews are:
No human errors
AI eliminates human errors and biases that can creep into performance review processes. Unlike humans, AI algorithms do not have personal preferences that could affect performance review ratings. This ensures that the performance review process is objective and unbiased, leading to fair and accurate assessments.
Projections based on more comprehensive data
AI algorithms analyze vast amounts of data from multiple sources, providing a more comprehensive view of employee performance. This allows managers to make informed decisions about employees’ career development, identify areas of improvement, and provide actionable feedback.
Continuous assessment and real-time analysis
Continuous assessment of employees is driven by artificial intelligence performance management, as it constantly collects data on employee activity. This ensures managers can identify and address performance issues in real-time rather than waiting for the annual performance review cycle. The real-time analysis allows managers to provide timely feedback and coaching, leading to better employee performance.
Better managers
AI gives managers a data-driven approach to performance management, allowing them to make informed decisions about employee development and identify potential organizational leaders. With AI-powered insights, managers can take a more strategic approach to talent management, ensuring that employees are developed and retained.
Employee engagement
AI enhances employee engagement by providing personalized feedback and career development recommendations. By providing real-time feedback, employees can address performance issues proactively, improving their performance and job satisfaction. This leads to increased employee engagement and productivity.
Resolving Bias by Utilizing AI
When supervisors frequently show bias towards an employee, it is time to put aside long-held prejudices. AI can prevent and remove biases from this, ensuring equality. AI provides equal opportunities, eliminating prejudices based on race, age, nationality, and other factors.
In contrast to machines that take a straight line, human nature may become biased and cause behavior. As a result, machine learning and artificial intelligence can establish an impartial setting that can offer equitable opportunities for evaluating or promoting employees.
Recognizing Shortcomings and Enhancing Performance
An organization should concentrate on developing a collaborative workspace to spot ineptitude and make adjustments. It should encourage collaboration, regardless of hierarchy or bias.
Data and AI are valuable corporate assets, and using AI in performance management will free up executives’ time to focus on innovative new ideas and essential business operations. It can also assist people in creating a realistic schedule and setting achievable objectives to fulfill deadlines. Individual performance ought to reflect it, and production ought to increase. AI can also aid in predictive performance appraisal so that there are no unfair practices.
Training and Development
By analyzing career growth using information from previous performance assessments, interests, and skill sets, using Artificial intelligence performance management may assist managers better in identifying the gaps in the talent pool and providing personalized training suggestions for individuals. Performance management includes a significant amount of identifying employee competencies and determining where individuals may improve.
An employee’s performance review may be more accurate if AI fuels this part of the performance. Employees can learn more effectively and quickly by integrating AI technology into learning programs.
Challenges of AI in Performance Reviews
While there are numerous benefits to using AI in performance reviews, there are also several challenges that organizations must overcome. These challenges include:
Cost
Implementing AI in the performance review can be costly, requiring specialized software and training. Organizations must invest in the right technology and infrastructure to ensure the AI-powered performance review process is effective and efficient.
Lack of Human Element
AI lacks the human element essential for building relationships using personalized elements of communication and providing emotional support. The performance review process allows managers to provide feedback, support, and guidance to employees. AI may need to replace the human connection, leading to a less personalized experience for employees.
Loss of Crucial Human Potential
AI can only provide insights based on data and algorithms. It may be unable to identify employee potential that may not be evident in the data. This could lead to losing crucial human potential within the organization, limiting its ability to innovate and grow.
Risk of Misinterpretation
When AI is employed for performance evaluations, misreading data is dangerous, leading to inaccurate assessments. For instance, an AI system might mistakenly flag an employee as underperforming if it observes a decrease in their usual work output or working days. However, this change might be due to legitimate reasons such as handling larger tasks, personal emergencies, or transitioning roles. Consequently, the AI-generated review may need to reflect the employee’s productivity and commitment accurately.
Reliance on Data Quality
Integrating AI into performance review processes necessitates access to comprehensive and high-quality employee data. Gathering this data, a blend of quantitative metrics and qualitative insights is a time-intensive process. Managers often need several months or even years to compile enough data for AI to analyze and interpret employee performance effectively.
Tips for HRs for ensuring Fairness while Using AI for Performance Reviews
As performance reviews rely more on AI, HR leaders must step up. They’ve got to make sure everything is fair and on the level. Since AI is becoming a big part of evaluating performance, HR folks must set clear rules for using it responsibly.
1.Familiarize yourself with and adhere to your organization’s policies regarding AI usage. These policies may outline specific limitations or protocols for AI utilization, ensuring its use aligns with company standards and ethics.
2.Engage in open discussions with your team about the responsible application of AI in performance evaluations. This conversation should explore how AI can be effectively and ethically integrated into the review process, fostering a shared understanding among team members.
3.Use AI as a supplementary tool to enhance your feedback, not as a replacement. The aim is to leverage AI to add depth and insight to your evaluations rather than relying on it to generate generic or unspecific feedback.
4.Conduct a thorough human review of all outputs generated by AI. This step is crucial to ensure accuracy, relevance, and the elimination of potential biases that might be present in the AI-generated content.
5.Actively seek and consider feedback from employees about the AI-assisted review process. Regularly soliciting their thoughts and opinions can provide valuable insights into the effectiveness and fairness of the system.
6.Continuously evaluate and refine your AI integration process. Based on feedback and observations, make necessary adjustments to improve AI’s fairness, accuracy, and effectiveness in your performance reviews.
Conclusion
AI has transformed the performance review process for managers, providing organizations with a data-driven approach to performance management. With AI-powered insights, managers can identify areas of improvement, provide personalized feedback and coaching, and develop employees’ careers.
While there are challenges to implementing AI in performance reviews, the benefits far outweigh them. Organizations that embrace AI-powered performance management will gain a competitive advantage, leading to better employee engagement, increased productivity, and improved business outcomes.
Frequently Asked Questions
What is AI?
AI stands for Artificial Intelligence. It refers to developing computer systems that can perform tasks that typically require human intelligence, such as perception, reasoning, learning, and problem-solving.
How can AI be used in performance reviews?
AI can be used in performance reviews by automating collection and analysis of data on employee performance from various sources. This data is then used to provide objective assessments, identify areas of improvement, and provide personalized feedback and coaching to employees.
What are the benefits of AI in Performance Reviews?
The benefits of using AI in performance reviews include the following:
Eliminating human errors and biases.
Providing projections based on comprehensive data.
Enabling continuous assessment and real-time analysis.
Helping managers make better decisions.
Enhancing employee engagement.
Providing a data-driven approach to talent management.
| 2023-03-17T00:00:00 |
2023/03/17
|
https://oorwin.com/blog/use-of-ai-in-performance-reviews-oorwin.html
|
[
{
"date": "2023/03/17",
"position": 97,
"query": "AI job creation vs elimination"
}
] |
Here's What the Coming AI Act Means for Your Business
|
Here’s What the AI Act Means
|
https://www.fticonsulting.com
|
[] |
The goal of the AI Act is to provide a regulatory framework around the development, commodification and use of AI-driven products.
|
It is nearly impossible to find some aspect of work or life that artificial intelligence has not impacted. Consider ChatGPT, the AI chatbot software that debuted last November and quickly disrupted higher education in the United States.1 Already, some learning institutions have revamped their curriculums to accommodate its use by students.2 Others have outright banned it.3
As AI technology grows more sophisticated and more organizations adopt it in their operations, issues are sure to follow. Already, the European Union (“EU”) is preparing to roll out a set of policies that will have broad implications across the global business spectrum.
The Artificial Intelligence Act (“AI Act”) — referred to by some as the “mother of all AI laws” — is the first major attempt to regulate the use of AI by businesses.4 The goal of the AI Act is to provide a regulatory framework around the development, commodification and use of AI-driven products. The new policies, which were first proposed in 2021, are still being drafted, so it may be a few years before businesses are expected to comply with them.
The AI Act’s proposed restrictions on companies’ using facial recognition would apply to approximately 447 million people across 27 countries.5
Here’s a glimpse of what organizations can expect as the new policies become law.
A Risk-Based Approach
The first question on the minds of many business leaders is how the AI Act will differ from existing regulations like those in the General Data Protection Regulation (“GDPR”). It’s expected that the regulation will expand on the GDPR, which focuses on individual privacy rights, as well as the concepts of fairness and transparency.6
A defining aspect of the AI Act is the adoption of a risk-based approach dividing AI into three areas of potential risk: unacceptable, high, and low or minimal.7 Unacceptable risk is defined as systems that manipulate human behavior, assign social scores, perform mass surveillance and more.8 These are strictly banned.9 High risk systems are those that provide access to employment, education or employment services, and that must meet strict requirements.10 Systems with low or minimal risk, like chatbots or spam filters, are largely unregulated — although in some instances these are held to specific transparency standards.11
In December 2022, the Council of the European Union adopted a proposed amendment that reexamines the definitions of these categories.12 The amendment awaits finalization by the European Parliament.
Authorities Take Action
The impact of the AI Act will be far reaching, and regulatory authorities around the globe are already following its lead. In the United Kingdom, for instance, committees like the Digital Regulation Cooperation Forum (“DRCF”) are leading the charge by bringing several regulators together to define common areas of interest and concern. The DRCF — which comprises the Information Commissioner’s Office, the Competition and Markets Authority, and the Financial Conduct Authority and Ofcom — has already taken a stance on algorithmic processing, a practice likely to come under scrutiny with the new regulations.13
There are several exclusionary practices that organizations like the DRCF will seek to stamp out once the AI Act comes into effect.14 One such practice is self-referencing, which occurs when algorithms apply preferential treatment to a firm’s own competing service or product when presented on a platform.15 Practices like this will attract law enforcement authorities, so it is important that organizations work with regulators to ensure they are operating in accordance with the new policies.
An October study by Cambridge University showed that artificially intelligent hiring tools remain subject to variability and are not yet sophisticated enough to produce results free of bias risk.16
New Risks around Non-Compliance
Research shows that brand loyalty and consumers’ willingness to share data with companies directly correlate to trust.17 If an organization is perceived as too relaxed when it comes to data, for example, it can directly affect the organization’s bottom line. In countless data breaches, organizations have faced share price losses and leadership turnover due to mismanagement.
Companies may now also run the risk of additional scrutiny if pricing and other practices are considered discriminatory by the AI Act. Imagine an organization owns a taxi-hailing app that is able to detect that a user's smartphone battery is low. The app’s AI may charge customers more due to the urgency of the user’s situation. In such a situation, the company will need to address bias that may be underlying within its AI application to ensure policies and business models are compliant and fair to end users.18
Organizations must find the right balance between supporting innovation and managing compliance and ethical requirements. Data science is only one aspect of that dichotomy, however. To develop and adopt policies that manage compliance without stifling innovation, cross-functional collaboration (for instance, among legal teams, developers and data scientists) is crucial. This may include creating opportunities to automate tasks related to regulatory monitoring and reporting.
An Updated Toolkit for Monitoring AI
Regulators will expect organizations to perform a reasonable and proportionate risk assessment to ensure they are aligned with the new laws as they come into force. This includes maintaining records that demonstrate and explain rigorous testing of AI applications.
While there is no one-size-fits-all approach to demonstrating compliance, there are three areas where organizations should direct their focus:
Workflows and Approvals: These should be well organized and defined.
These should be well organized and defined. Quality of Data: Leaders must be confident in the data and that it comes from a trusted source.
Leaders must be confident in the data and that it comes from a trusted source. Model Monitoring and Documentation: Adopting model monitoring and automated documentation tools signals the company is taking the new policies seriously.
There is still time to take action before the AI Act comes into full effect. However, if there is one thing organizations have learned from GDPR compliance, it’s that preparedness and proactive actions are critical to staying on the right side of the law.
| 2023-03-17T00:00:00 |
https://www.fticonsulting.com/insights/fti-journal/heres-what-coming-ai-act-means-business
|
[
{
"date": "2023/03/17",
"position": 10,
"query": "AI regulation employment"
}
] |
|
AI Life Cycle Core Principles - CodeX - Stanford Law School
|
AI Life Cycle Core Principles
|
https://law.stanford.edu
|
[
"Stanford Law School"
] |
The Permit principle is as a key feature of the AI regulatory framework. It helps mitigate harm by rendering more efficient the assignment of liability and ...
|
Core Principle What it means and aims to promote 1 Accessibility Affordable; embraces user friendly interface and experience (UI/UX) methods; facilitates end user understanding of the algorithm and outcomes; maps to Explainability (XAI). 2 Accountability Examines output (decision-making or prediction); identifies gaps between predicted and achieved outcomes; reveals degree of compliance with the Data Stewardship Framework; subject to periodic audit to identify vulnerabilities; output traceable to the appropriate responsible party; responsive to legal demands; respectful of intellectual property rights; zero-gap between application behavior and deployer’s liability; development, provision, or use follow ISO/IEC 23053:2022 , ISO 42001:2023 , ISO/IEC AWI 42005 or similar standard; implementation has leadership approval; maps to Governance. 3 Accuracy Uses credible data (timely, non-repudiated, protected from unauthorized modification); data set is derived by following reasonable selection criteria to minimize harm; data is determined to be valid for the purpose for which it is intended and used; input and output can be measured; data input and output practice is consistent with the Data Stewardship Framework; application performance aligns with marketing claims; references ISO/IEC TR 29119-11:2020 and ISO/IEC AWI TS 29119-11 . 4 Bias Protects against disparate impact, the increase of discrimination against protected classes, unjust outcome; protects against inaccurate results; maps to Ethics; development and use reference ISO/IEC TR 24027:2021 and ISO/IEC CD TS 12791 . 5 Big Data Uses data compliant with the Data Stewardship Framework; respectful of intellectual property rights; compliant with decreasing dependence on labeled data architectures; maintains contextual relevance throughout application life cycle; promotes data accessibility; references ISO/IEC 20546:2019, 20547, and 24668 . 6 Consent Application functionality continuously maintains alignment with the end user’s consent; consent is obtained in a legally valid manner. 7 Cooperation Facilitates global development; compatible with governance framework interoperability; facilitates internal and external information sharing (see discussion below on ISAOs) which maps also to Transparency. 8 Efficiency Supports a cost-effective training:time ratio; makes optimal decisions with respect to achieving objective and resource utilization. 9 Enabling Compliant with government sponsored controlled environments for testing and scaling AI (sandboxing). 10 Equity Protects against widening gender and protected class gaps; maps to Bias. 11 Ethics Encompasses a broad range of values that aim to eliminate or reduce risk to human life; promotes privacy; implements safeguards against AI-washing; protects property; respectful of intellectual property rights; enhances and maintains stakeholder trust; manifests emphasis on socially-beneficial development and use; compatible with Right to be Forgotten laws; compatible with Do No Harm initiatives; incorporates data deletion methodologies such as machine unlearning (MU); responsive to legal demands; makes available an AI system card; makes reference to relevant procurement frameworks; strongly related to Accountability, XAI, Reliability, Privacy, Fairness, Human-Centered, Security, and Transparency; references ISO/IEC TR 24368:2022 . 12 Explainability (XAI) Enables understanding of algorithmic outcomes and operation; enhances the principles of Accountability, Reliability, Fairness, Ethics, Trustworthy, and Transparency; reduces black-box challenges; enables app recalibration; output report is designed to be useful for relevant stakeholders; output is not deceptive; output is interpretable; aligns with ISO/IEC CD TS 6254 . 13 Fairness Supports policies, and procedures to manage against unintended disparate treatment; reduces unexpected outcomes; uses anonymized or pseudonymized data; application aligns with marketing claims; respectful of intellectual property rights; maps to Trustworthy, Consent, Transparency, XAI, Accountability, Bias, and Metrics. 14 Fidelity Supports measuring of the application’s performance relative to its code and across the deployment population; supports measure of ongoing compliance with the Core Principles; supports assessment of degree of compliance with the AI Data Stewardship Framework; references ISO 9001 ; maps to Transparency and XAI. 15 Fundamental Rights Open data access compliant, in contrast to use of closed (proprietary) models that inhibit access; maps to Accessibility, Transparency, and Consent. 16 Governance Developed and used within an environment that follows documented policies, processes, and procedures that are compatible with the Data Stewardship Framework; developed and used within an environment where policies, processes, procedures, and practices are implemented to regularly monitor the organization’s regulatory, legal, risk, environmental, and operational requirements and compliance and serve to inform senior leadership accordingly; senior leadership takes documented responsibility for ensuring ongoing compliance with all relevant policies, processes, procedures, and agreements; system development complies with relevant contractual agreements; manifests a commitment to continuous improvement; policies, procedures, and processes reference ISO/IEC 31000:2018 and 38507:2022 , ISO/IEC CD 42006, ISO/IEC AWI TR 42106 , and ISO/IEC CD TR 17903. 17 Human-Centered Compatible with law, privacy, human rights, democratic values, and diversity; contains safeguards to ensure a fair and just society; protects against augmenting and perpetuating social disparity, promotes equality, social justice and consumer rights; prevents toxicity; aligns with best practices in user interface and experience (UI/UX); human-collaborative and human-intervention (control) compatible; compatible with experiential AI (human-in-the-loop); development cycle takes into account human-like dexterity and operational adaptability to the operator of the robotic application; responsive to legal demands; maps to Consent and Fairness; measures application benefits across multiple dimensions in reference to ISO/IEC AWI TR 21221 . 18 Inclusive Widespread contribution to society; does not exclude certain parts of society; maps to Ethics. 19 Interpretability Complementary to XAI; the meaning of the system’s output corresponds with its design; enables app recalibration; reduces black-box challenges; mechanistic interpretability is used; maps to Trust. 20 Metrics Capable of measuring degree of compliance and effectiveness with the Core Principles; promotes alignment with relevant standards; enables alignment with Governance and Trustworthy principles. 21 Permit The application development and end user use of the application are subject to and compliant with a government issued permit; developer maintains applicable certification from a recognized body (e.g., ISO, IEEE); respectful of intellectual property rights; AI application training employs data that is subject to the AI Data Stewardship Framework; Consideration should be given to requiring developer and/or end user possess a bond (financial guarantee to ensure the fulfillment of specific obligations). 22 Predictable Maintains compatibility with select Core Principles throughout its lifecycle; the potential for deviation from relevant Core Principles is measurable; application performance aligns with marketing claims; Minimizes the capability of jumps “in the wild;” maps to Consent and Fairness. 23 Privacy The application’s development and use are compatible with the AI Data Stewardship Framework; The application’s design and use are compatible with FIPs; The application uses and is compatible with methods that create and maintain the state of unidentifiable data (anonymized, pseudonymized, or encrypted); The application employs a differential privacy framework; Manifests alignment with a commitment to continuous improvement; The design is based on processes that ensure compliance with laws, regulations, and standards such as state privacy laws, HIPAA, GLBA, COPPA, GDPR, the NIST Privacy Framework, and NIST AI Risk Management Framework. 24 R&D Promotes on going research and development in alignment with current best practices; demonstrates a continuous improvement mindset; regularly employs information sharing and other collaboration best practices; maps to Human-Centered principle. 25 Relevant Application lifecycle management adheres to policies and procedures that promote intended outcomes; application conforms with applicable laws; application development conforms with the AI Data Stewardship Framework. 26 Reliability Design, development, and deployment follow best practices and promote compliance with relevant Core Principles; deployment takes a lifecycle perspective and includes patching AI; application is subject to continuous validation using proven risk assessment methodology (red teaming); maintains data credibility; follows a compliance by design methodology; application performance aligns with marketing claims; does not materially deviate from coded objective; algorithmic recidivism is accounted for, monitored, and corrected; undergoes routine and periodic guardrail testing; protects from toxic output; references life cycle processes ISO/IEC FDIS 42001, ISO/IEC DIS 5259 , 5338, 8183, and ISO/IEC AWI 42005 ; maps to Fidelity and Predictability. 27 Resilience Failure recovery capable; integrates best in class redundancy; the greater the capability to autonomously recover (i.e., without manual patching) the more resilient the application; the model is resistant to attack vectors that pollute learning sets; resistant to misinformation prompts; application remains aligned with human values; Manifests alignment with a commitment to continuous improvement; maps to Reliability and Wherewithal; references
ISO/IEC CD TS 8200.; references ISO/IEC CD TS 8200 . 28 Robust Operates with minimum downtime; resistant to adversarial, prompt injection attacks; maintains operational integrity throughout its lifecycle; able to identify and handle input/output unreliability; resistant to unintended behavior from the end user; exhibits high degree of problem flexibility; autonomous behavior maintains line of sight with human developer and end user; accommodates information sharing best practices; uses sophisticated learning techniques to minimize bias; references ISO/IEC 24029 , ISO/IEC 24029-2 , and ISO/IEC TS 4213:2022. 29 Safety Minimizes unintended behavior; aligns with Permit-related policies and procedures; incorporates Robust principles; subjected to RSP (Responsible Scaling Policy) through all appropriate ASL (AI Safety Level), namely levels 1 through 4 and higher; compatible with real-time monitoring to prevent harm; designed with a Do No Harm approach; development gating incorporates methods for measuring application risk on an ongoing basis; implements application hardening methods; application default settings are set to highest safety; Manifests alignment with a commitment to continuous improvement; references ISO/IEC CD TR 5469 . 30 Security Resistant to adversarial, inference, and prompt injection attacks; compatible with information sharing best practices; timely detection and response of threats and incidents of compromise; supply chain vetting and monitoring policies and procedures are used to continuously manage and minimize the model’s risk profile; Manifests alignment with a commitment to continuous improvement; references ISO 31000: 2018; ISO 31000:2023, IEC 31010:2019 and ISO/IEC 23894:2023; data security principles follow the AI Data Stewardship Framework. 31 Sustainable Promotes long-term growth capabilities for the developer; compatible with information sharing best practices; model development aligns with and enables execution of broader organizational commitments (e.g., data privacy); respectful of intellectual property rights; application performance aligns with marketing claims; maps to Wherewithal. 32 Track Record Application is the product of a developer known for designing AI compatible with the Core Principles; developer demonstrates adherence to risk assessment standards and best practices mapping to IEC 31010:2019 ; Manifests alignment with a commitment to continuous improvement; maps to Permit and Wherewithal. 33 Transparency Development and deployment remains consistent with disclosure (e.g., reporting and publication), discovery, and non-discriminatory methodology and output; compliant with disclosure of dataset provenance and legal status; enables end user understanding of the what, how, and why of the output; application performance aligns with marketing claims; information used in model creation is accurate, sufficient, and useful; promotes stakeholder consensus; facilitates audit by third parties; compatible with experiential AI; contains controls to protect against use of opaque and complex neural networks for language models; complies with the AI Data Stewardship Framework; development adheres to coding documentation and annotation best practices; employs effective notice and explanation (e.g. AI Fact Label); Manifests alignment with a commitment to continuous improvement; developer takes part in an information sharing organization (see Note 6); public interaction with AI must be disclosed; maps to XAI and Accountability. 34 Trustworthy A catchall for multiple Core Principles, such as Accountability, Accuracy, Ethics, XAI, Fairness, Privacy, Metrics, Safety, and Security; development practices comply with the AI Data Stewardship Framework; a principle promoted through engagement with regulatory and non-regulatory frameworks, technical standards and assurance techniques such as auditing and certification schemes; application performance aligns with marketing claims; Manifests alignment with a commitment to continuous improvement. 35 Truth Does not cause unfair or deceptive outcomes; application performance aligns with marketing claims; facilitates audit; maps to Accuracy. 36 Wherewithal Developer is financially sound, exhibits multi-year operational resilience; developer has sufficient financial resources and/or insurance (as determined by end user and other stakeholders such as investors) to sustain operations and contractual obligations; developer demonstrates use of policies and procedures to fully support AI development in compliance with relevant Core Principles; Manifests alignment with a commitment to continuous improvement; references I SO/IEC 25059 and ISO/IEC WD TS 25058. 37 Workforce Compatible Considerate of issues relative to worker displacement; promotes effective worker use, interaction, and training with AI.
Purpose
Many of the Core Principles (second column) are compiled from work done by the G7, OECD, UNESCO, IEEE, ISO, NIST, FTC, G20, and APEC. Other Core Principles, such as Big Data, Consent, Fidelity, Metrics, Permit, Track Record, and Wherewithal are my additions. The third column (What it means and aims to promote) is comprised of mostly of my analysis. My objective in this third column is to diminish the inherent ambiguity in these Core Principles.
While ambiguity may initially seem like (to borrow from software parlance) “a feature, not a bug” in that it accommodates more latitude for interpretation, it is not; it is a bug. Ambiguity around the Core Principles fuels a persistent and stubborn lack of precision, a definitional vacuum. It destabilizes stakeholder ability to develop and maintain a cohesive and rational discussion around the core principles. This, in turn, hampers outcome predictability in the sense that laws, regulations, standards, and best practices that refer to the Core Principles become more vulnerable to ambiguity which then renders them less, or entirely, ineffective. Wiping away the distortive effects of ambiguity allows for more efficient, universal use of the Core Principles. They can serve as practical attributes that all stakeholders can hone-in on and leverage for virtually all aspects in which they engage with AI. For example: Developers can select Core Principles that apply to their application and measure where their work is aligned and where it isn’t; end users can reference the Core Principles in their application licensing efforts, in their due diligence, and maintenance of the application; regulators can use the Core Principles to better guide their enforcement activities; and law makers can use them in drafting laws that are more relevant, clear and practical.
Finally, the term “life cycle” here is intended to emphasize the continual assessment and management character of these principles. The OECD’s Framework for the Classification of AI Systems defines the “life cycle” as “planning and design; collecting and processing data; building and using the model; verifying and validating; deployment; and operating and monitoring.” (The NIST AI Risk Management Framework also closely follows this definition.) My approach here is a bit broader and includes the all activities related to model decommissioning. A life cycle approach to the application of the Core Principles is essential for their efficient application. Applications, after all, are not static. They typically undergo numerous updates and upgrades, not all of which are always beneficial. Similarly, end user environments are not static. The scope of acceptable use can change as can its leadership team. (For more on this, take a look at the Governance principle.)
Standards identified in green are published and those identified in orange are in development.
“AI actors” refers to as “those who play an active role in the AI system life cycle, including organizations and individuals that deploy or operate AI.” For more see the OECD’s Artificial Intelligence & Responsible Business Conduct.
Notes
This section is devoted to an on-going discussion that aims to promote substantive understanding of the Core Principles.
| 2023-03-17T00:00:00 |
2023/03/17
|
https://law.stanford.edu/2023/03/17/ai-life-cycle-core-principles/
|
[
{
"date": "2023/03/17",
"position": 51,
"query": "AI regulation employment"
}
] |
Canadian government releases companion document to ...
|
Canadian government releases companion document to proposed AI law
|
https://www.dentonsdata.com
|
[
"Kirsten Thompson",
"All Posts",
"About Kirsten Thompson",
"Kirsten Thompson Is A Partner",
"The National Lead Of Dentons",
"Privacy",
"Cybersecurity Group. She Has Both An Advisory",
"Advocacy Practice",
"Provides Privacy",
"Data Security"
] |
Further, businesses making a high-impact AI system available for use will need to closely consider probable uses for the system and work to confirm that users ...
|
Introduction
On March 13, 2023 Innovation, Science and Economic Development Canada (“ISED”) released a companion document (the “Companion Document”), seeking to provide better clarity respecting the Artificial Intelligence and Data Act (the “AIDA”), the AIDA’s path to enactment, and how the AIDA is expected to function
In June of 2022 the Minister of ISED tabled Bill C-27 introducing a new law on artificial intelligence (“AI”). If and when passed, the will be Canada’s first national regulatory scheme for AI systems.
As currently drafted, however, the AIDA lacks substance, leaving much of its essential content to be settled in yet-to-be created regulations. Recognizing the significance the AIDA will have on Canadians and Canadian businesses, and cognizant of the fact that both individuals and businesses will need to adequately prepare themselves for compliance with the AIDA and its related regulations, Innovation, Science and Economic Development Canada.
Timelines to enactment
Bill C-27 is currently at second reading in the House of Commons. The Companion Document highlights that the development and assessment of the regulations to accompany AIDA, in effect where its most important substantive content will be fleshed out, is expected to be completed on a 24 month timeline. This development and assessment timeline is approximately broken down as follows:
consultation on regulations (six months);
development of draft regulations (twelve months);
consultation on draft regulations (three months); and
coming into force of initial set of regulations (three months).
It is therefore expected that AIDA will come into force in or around 2025 (assuming Bill C-27 passes this year, which is increasingly looking unlikely). Nonetheless, this timetable should not deter individuals and businesses from better understanding AIDA and its implications, as well as taking steps to ready for its debut.
The focus of the statute
The AIDA concentrates on three primary objectives:
aligning high-impact AI systems with commonly held Canadian expectations for safety and human rights;
establishing the office of AI and Data Commissioner (the “Commissioner”), tasked to advance AI policies, educate the public with respect to the AIDA, and carry out a compliance and enforcement role.
restricting uses of AI that cause serious harm by way of the establishment of new criminal sanctions.
Recall that the AIDA is limited to designated activities “carried out in the course of international or interprovincial trade and commerce”, squarely positioning the AIDA within the competence of the federal government. There is no mention in the Companion Document of how the AIDA proposes to work in respect with provincially-regulated activities, or its approach to provincial AI legislation, should any be contemplated.
Defining high-impact AI systems
Bill C-27 does not provide a definition for a “high-impact system”, noting that such an AI system will meet certain criteria to be settled in the regulations to the statute. Organizations in the AI ecosystem have been left struggling to understand whether they will be affected by the AIDA. The Companion Document assists in clarifying organizations’ status by identifying the key factors that will be among the assessment criteria for determining whether a system is “high impact” (and thus regulated):
evidence of risks of harm to health and safety, or a risk of adverse impact on human rights, based on both the intended purpose and potential unintended consequences;
the severity of potential harms;
the scale of use;
the nature of harms or adverse impacts that have already taken place;
the extent to which for practical or legal reasons it is not reasonably possible to opt-out from the AI system;
imbalances of economic or social circumstances, or age of impacted persons; and
the degree to which the risks are adequately regulated under another law.
Types of systems likely to be the focus of regulation
In addition to these insights into how a high-impact system will be assessed, the Companion Document gives individuals and businesses in Canada a better awareness of the particular systems that the current Government considers to be of most interest, given their possible impacts:
Screening systems Systems that are designed to make decisions, recommendations, or predictions for purposes linked to the accessing of services as they can potentially lead to discriminatory outcomes and economic harm. Biometric systems Systems that are designed to make predictions about individuals using biometric data as they can have consequential impacts on mental health. Influencing systems Systems that are designed to make online content recommendations as they can negatively impact psychological and physical health. Health and safety systems Systems that are designed to make decisions or recommendations in reliance on data collected from sensors, which could include self-driving vehicles or body sensors designed to monitor certain health issues. These systems can potentially precipitate physical harm.
Pursuant to the AIDA, Canadian businesses operating in these spaces will need to establish suitable processes and procedures to assure compliance with the statute. This will include the crafting and implementation of appropriate policies and governance practices aligned with established international norms for the administration of AI systems.
Notably, organizations offering employment, student or other applicant-based screening systems will be impacted by the first category and should be considering making submissions during the consultations and on the draft AIDA regulations when they are released.
“Predictions based on biometric systems” is less clear. It would be helpful to have a definition of what constitutes a “biometric” or “biometric system” as that will be the initial threshold and there is currently a great deal of ambiguity in case law and Canadian statues, as well as the Reports of Findings by the Office of the Privacy Commissioner of Canada. There is no definition in the current draft of Bill C-27. The lack of consistent understanding creates uncertainty as to what is being regulated. Furthermore, the restriction of the scope to “predictions” is curious – expanding this to “predictions, recommendations or decisions” would parallel the proposed language in Bill C-27 regarding automated decision systems. As drafted, the AIDA could potentially apply to AI-based filters on social media that “predict” users’ future career or celebrity lifestyle. Presumably it is the more insidious uses of biometrics that are intended to be captured (for instance, the use by employers or advertisers to assess areas such as an individual’s stress levels, engagement, and excitement, allowing an employer or advertiser to make a range of calculated assumptions about an individual based upon on an image or a live stream during an interview or interaction).
“Influencing systems” is equally broad, potentially capturing song and movie recommendation engines. Notably, because the AIDA is not limited to personal information, it potentially captures the use of AI by political parties (and others in the political ecosystem) to influence voters, even though the use of personal information by political parties is not currently covered by PIPEDA/Bill C-27. However, AIDA “regulated activities” are limited to those “carried out in the course of international or interprovincial trade and commerce” so political parties (and others) may escape regulation. The scope of the trade and commerce power is increasingly being challenged (for instance, in areas such as climate change) and it seems likely this will be challenged as well.
The identification of “health and safety systems” as systems of interest will have a broad impact on manufactures of items such as wearables and connected vehicles (for instance, vehicle safety systems that monitor excessive breakdown or driver fatigue). Certain smart home functions may also be implicated.
There is a specific call out for open source software or open access AI systems, likely in response to initial concerns by organizations that open source, in many respects the lifeblood of innovation, would be stifled by regulation. The Companion Document specifically addresses this, saying:
It is common for researchers to publish models or other tools as open source software, which can then be used by anyone to develop AI systems based on their own data and objectives. As these models alone do not constitute a complete AI system, the distribution of open source software would not be subject to obligations regarding “making available for use.” However, these obligations would apply to a person making available for use a fully-functioning high-impact AI system, including if it was made available through open access.
This should allay initial concerns raised by the AIDA, although considerable refinement is still required.
AI design and development norms
The Companion Document also sets out “norms” related to AI systems. These norms include designing and developing systems that enable meaningful human oversight and monitoring, the provision of applicable information to the public respecting how a particular system is being used, being aware of the possibility for discriminatory outcomes when designing and constructing an AI system (and working to mitigate against such possibilities), routinely and proactively appraising high-impact AI systems to identify harms, making regulatory compliance a primary operational focus, and ensuring high-impact AI systems are performing in a manner consistent with expected objectives.
More specifically, businesses designing or developing a high-impact AI system will need to analyze for and attend to risks with respect to harm and bias, taking corrective actions as and when needed.
Further, businesses making a high-impact AI system available for use will need to closely consider probable uses for the system and work to confirm that users are aware of restrictions on how the system is supposed to be used, and its limitations.
Businesses that manage the operations of an AI system will be required to use such systems in accordance with their specifications, and routinely monitor such systems to identify and mitigate risk.
Oversight and enforcement
The initial approach to enforcement will be a soft one – the Companion Document states that the Commissioner will concentrate on educational initiatives, aiming to help businesses voluntarily comply with the AIDA. The Companion Document makes note of the Government of Canada’s understanding that there needs to be an adequate period of adjustment for businesses operating within this new regulatory structure.
Over time however, it is anticipated that administrative monetary penalties will be used to spur compliance with the AIDA. The AIDA provides for the creation of a such a system, but the same will need to be built out in regulations, following the consultation process.
For more egregious offences, the Commissioner may look to prosecute offenders for the commission of regulatory offences, and where it can be proved that intentional behaviour has caused harm, a criminal prosecution may be undertaken.
This is good news for businesses, which often struggle to understand what the regulatory expectation is (particularly in respect to novel technologies or business models) and often underestimate the resources required to achieve compliance. However, this was the approach taken to privacy legislation (e.g., ombudsman model, limited enforcement powers, etc.) and it took over two decades to meaningfully change the legislation to reflect global norms.
This soft approach – particularly in light of the minimum 2 year timeline to enactment of the AIDA – is out of touch with the pace of technology and global measures. There already exist numerous (and rigorous) AI regimes elsewhere, as well as international standards and other regulatory frameworks (for instance, see the European Union AI framework).
Businesses, particularly multi-national businesses, would be well advised to look to these global norms when designing and implementing AI systems and not rely on the proposed softer Canadian approach there is a risk that their systems and business models may not be portable outside Canada.
Absent from the Companion Document (and the AIDA itself) is any mention of a private right of action. There is a private right of action contemplated in Bill C-27 but it is limited to the privacy protection of the Bill, the proposed Consumer Privacy Protection Act (and Quebec’s Law 25). This is somewhat unusual in the that the private right of action seems to have become a popular bogeyman to insert into legislation, particularly legislation , as a way of indicating that non-compliance will have serious consequences such as litigation, in particular class actions (see, for instance, Canada’s Anti-Spam Law, where the private right of action remains in the text but was never declared in force). This approach effects compliance but spares the need for (and the expense of) the government regulatory body stepping in. Unfortunately, it can also mean that the law develops in unusual and unanticipated directions. Businesses should pay attention during the Committee hearing on Bill C-27 (and AIDA in particular) to see if this approach is being discussed.
Takeaways for business
The Companion Document makes clear that significant work lies ahead in order to craft supporting regulations to the AIDA that adequately and appropriately meet the expectations of Canadians for a strong framework regulating AI systems in this country. That work is not going to be completed rapidly, as the Companion Document points to a relatively lengthy process that will be advanced and refined through several rounds of consultation with stakeholders. This is not inappropriate for a piece of legislation that deals with a complex technology with systemic impacts.
The Companion Document nonetheless does provide Canadians and Canadian businesses with valuable insights into what are likely to be the criteria for assessing a high-impact AI system, what AI systems are of the greatest concern to the current Government, and what the key practice and process actions will need to be for persons involved in the design, development, use or management of such systems.
However, Canada has chosen to be a “fast follower” in this area rather than an early adopter, and organizations developing, designing, or implementing this technology (or using data sets in conjunction with this technology) should consider closely the developments in other jurisdictions that are further along in the regulatory process.
For more information about this and other topics and how we can help, please see our unique Dentons Data suite of data solutions for every business, including enterprise privacy audits, privacy program reviews and implementation, data mapping and gap analysis, and training in respect of personal information.
| 2023-03-17T00:00:00 |
2023/03/17
|
https://www.dentonsdata.com/canadian-government-releases-companion-document-to-proposed-ai-law/
|
[
{
"date": "2023/03/17",
"position": 66,
"query": "government AI workforce policy"
}
] |
Interview Of The Week: Arun Sundararajan, AI And Future ...
|
Interview Of The Week: Arun Sundararajan, AI And Future Of Work Expert
|
https://theinnovator.news
|
[
"Jennifer L. Schenker",
"View All Posts",
".Wp-Block-Co-Authors-Plus-Coauthors.Is-Layout-Flow",
"Class",
"Wp-Block-Co-Authors-Plus",
"Display Inline",
".Wp-Block-Co-Authors-Plus-Avatar",
"Where Img",
"Height Auto Max-Width",
"Vertical-Align Bottom .Wp-Block-Co-Authors-Plus-Coauthors.Is-Layout-Flow .Wp-Block-Co-Authors-Plus-Avatar"
] |
That said, to ensure minimal performance quality in any meaningful government or business application of generative AI, the current near-term state of the ...
|
Arun Sundararajan is the Harold Price Professor of Entrepreneurship and Professor of Technology, Operations and Statistics at New York University’s Stern School of Business. His research studies how digital technologies transform business, government and civil society His best-selling and award-winning book, “The Sharing Economy,” published by the MIT Press, has been translated into Japanese, Mandarin Chinese, Korean, Vietnamese and Portuguese. Sundararajan has been a member of the World Economic Forum’s Global Future Councils on Technology, Values and Policy, and on the New Economic Agenda, and an advisor to or board member of organizations that include Fortune 100 companies, tech startups, nonprofits and city governments. He has provided expert input about the digital economy to the United States Congress, the European Parliament, the United Nations and dozens of government agencies and regulators globally. Sundararajan holds degrees in electrical engineering, applied economics and management science from the Indian Institute of Technology, Madras and the University of Rochester. He recently spoke to The Innovator about ChatGPT, generative AI and the future of work.
Q: What will the impact of ChatGPT and other forms of generative AI on the workforce?
AS: ChatGPT and, more broadly, the new generation of generative AI, are technologically similar to other kinds of enabling AI that we have encountered in the last 10 years, but have reached a new performance threshold. Primarily, this is because generative AI can actually finally generate realistic things that are normally created by humans such as poems, articles, art, movies, and songs in a seemingly unsupervised way. That said, to ensure minimal performance quality in any meaningful government or business application of generative AI, the current near-term state of the technology is going to require active human oversight. One of the things that has become quite clear in early use is that the technology is rife with inaccuracy, in part because accuracy was not explicitly designed in as an objective. Accuracy will increase over time, but for now, despite its advanced poetic abilities, it’s best to think of ChatGPT as a semi-skilled draft generator. It’s analogous to the way AI has been used in drug discovery for a few years to generate potential combinations of molecules that have the highest likelihood of leading to something therapeutic because humans don’t have the bandwidth to consider so many – but the humans have to nevertheless take over after this step before the drug is actually made.
What will this mean for the future of work? There is a long history of machines starting to get good at doing specific things at a much faster pace than humans, so this is not a new development. It’s important to remember that most work comprises a wide range of tasks, and so dramatic improvements in the capability of computers doing one task does not mean a whole bunch of people will be put out of work instantly.
However, there are segments of the work force – like the people who generate simple computer code that someone assembles into a more complex system – which are under immediate threat from generative AI. ChatGPT and its generative AI peers have gotten incredibly good at writing increasingly complex snippets of computer code. So the people who write simple code are certainly threatened. For visual art and music and video entertainment, generative AI will create completely new categories of AI-generated products.
But we are not being replaced by machines because the economy is not a zero-sum system. It’s an evolving system which grows with technological progress, and as the machines increase our productivity, this also expands the need for humans.
For example, a generative technology that can create computer code based on intelligent direction enhances the capabilities of non-tech savvy humans who need to make decisions based on data, and thus raises the overall quality of human output. Most of the business workforce depends on another human to make sense of their data. Generative AI will give people in the workforce the power to write simple computer programs to analyze their own data without having to rely on another human being. This dramatically expands the use of computer code for data analysis.
Similarly, in medical diagnostics, generative AI will allow an increasingly large number of people to be empowered to make assessments. This is not going to take away human work, but will instead dramatically increase the number of human ailments we have the bandwidth to analyze and deal with. In many parts of the world, access to quality healthcare via generative AI will not be used to replace physicians, it will instead make up for the absence of available physicians. Think about it – two hundred years ago we had no healthcare industry to speak of and today, and today it’s a double-digit percentage of GDP, but despite so much progress, we are still just scratching the surface of fulfilling the healthcare needs of humans. Generative AI can help fill that gap.
It is my belief that generative AI will have a similar augmentation effect on a wide range of other work processes. Machines don’t replace humans, they create new partnerships and allow up to re-imagine how work is done.
Q: Beyond coding and the arts what do you see as the business applications for generative AI?
The family of technologies enabling ChatGPT to be so good are called large language models[LLMs] or pre-trained language models. The LLM that ChatGPT is using is GPT-4, released very recently as a successor to GPT-3.5. These are massive models that has been trained on all information that is publicly available on the Internet to do one primary task – to predict what is the next word when given a set of words. ChatGPT is a separate layer on top of the LLM, trained on actual human conversations and trained to intelligently send the right prompts to the GPT models. This additional training and engineering investment induces the LLM generate a sequence of words that makes it seem like we are talking to a human being.
I believe there will be an explosion of other layers similar to ChatGPT on top of GPT-4. For example, applications on extremely well-defined topics that are specialized such as analyzing U.S. case law, figuring out gluten-free diets, and providing specialized customer support for particular products. What has been hard up to now has not been so much the ability to train for accurate support but more the inability to easily create human-like conversations. Generative AI overcomes this hurdle. Think of LLMs as having been taught some grammar and having been given a whole range of general knowledge. The layers on top then are teaching them to specialize in a particular topic or style of interaction.
The most promising area where I see generative AI being applied is for knowledge management within organizations. For the last 20 years, knowledge management has been coveted by organizations. In the early days of the Internet there was a lot of talk about intranets and how companies were going to index and organize all of their data. Let’s just say that the results have not been breathtaking to say the least – but if you can combine GPT-4’s capabilities with a specialized corpus of human generated documents that are proprietary to an organization you can create extremely effective supplements to human beings to the point where if someone is on vacation and you want to ask them a question you may be able to get those answers from a generative AI that has trained on a corpus of documents, emails and contracts generated by that employee. That is the world that large language models are leading us into.
Q: Are you saying that generative AI will create the equivalent of digital twins of employees?
AS: It could create static twins. These will not be a long-term substitute for human beings but rather more like a short-term band-aid. The world is constantly evolving. No company will want a version of an employee that only has last year’s knowledge.
Q: Will generative AI accelerate the need to reskill the workforce?
AS: We are currently preparing the work force in one early burst of education – in high school and technical schools and colleges – for careers that someone imagines will last 50 years. This no longer bears any reasonable connection to reality. The pace of change in the division of machine-human labor – what machines do and what humans do – has picked up. Over the next 20 years, tens of millions of Americans and hundreds of millions of people globally will find themselves needing a new occupation mid-career.
Granted, it’s not like society is sitting on its hands. Governments I talk to are spending substantial sums on reskilling and companies are investing in life-long learning programs. The real trouble is that 80% to 90% of transitions to new occupations will also require transition into new companies or organizations. So we need reskilling to be wrapped into a bundle of other forms of support, the way your college major is bundled with critical thinking, networking, recruiting help, a branded degree, and the rite of passage to adulthood. Similarly, you can’t just teach someone a new skill like data science mid-career and consider that your job is done and send them on their way. You need to wrap other things around that and imagine how they are going to actually make the transition. Can this be done in a viable time frame if they have a family to support? Can you connect them with peers that have made the same transition? How can you build their confidence? Can you give them mentoring? Placement services? A branded credential? Creating these new mid-career transition bundles is, to me, the most important public policy challenge of the next 20 years.
Q: How can companies help ease the process?
AS: First, invest in skill development that doesn’t assume the person will remain in your company and set up avenues for people to transition out with dignity. Next, help workers to continuously think about what else they can do given the capabilities they have. Companies need to build this into how they work, even on the factory floor. They should encourage workers to think about what they are good at and rotate them into different roles so that they are in the mindset to eventually transition either within or outside of the company.
Q: Why should business care about this?
AS: To me this is more than just a business problem. The polarization around the world over the last 20 years has been caused by mismanaging the manufacturing automation transition in the 1980s and 1990s. We did not create significant economic opportunities for laid-off factory workers. We did not prepare them for today’s world of work. As a result, there are large groups of people who simply lack economic opportunity and are unable to succeed. The divisions between cities and rural areas have led to far more than economic distress. It has led to large groups who are susceptible to social media filter bubbles and ideas of tribalism and revolution because they don’t see opportunity for themselves. This has destabilized the U.S. and many Western European countries.
The magnitude of the coming AI and robotics transition is at least twice as big as the 20th century manufacturing automation transition. So if we want our societies to remain stable, we really, really need to come up with a way for people to transition with dignity. It is such an important need and plays out over such a long period of time. The trouble is our political systems are based around short-term promises, and the polarization that resulted from the last wave of automation is ironically further stymieing our ability to address this wave.
Q: How should companies prepare for generative AI?
AS; First don’t panic. Stay calm. Think of generative AI as a source of opportunity. It is not a substitute for workers but a supplemental way of creating specialized pools of expertise for clients,and its technology can be used to improve knowledge management within your company or work force.
Next, accelerate the pace of preparing your company for the workforce transition. Any sensible organization should put transition into two buckets: transition within – training people for new roles in your company – and transition out – preparing people to leave the company and start new occupations elsewhere. Transition within is a good place to start because the shareholder argument is stronger. To help make the transitions out work better, I encourage companies to try and form networks or alliances with other organizations to help lower the barriers for each other’s workers who are transitioning out. I also encourage them to create peer networks, connecting people who have successfully transitioned with those in process.
I am optimistic that doing a good job of transitioning out will be favorably viewed by society in the years to come, much like sustainability investments which were once seen as a cost center are now seen as a positive by consumers. Workforce transition planning is about more than making the argument that “it is the ethical thing to do” or that “it is good for society.” Sure it is, but it will also be aligned with any company’s business interests in future so it is good to start doing the groundwork now for what will be a complex multi-year process.
This article is content that would normally only be available to subscribers. Sign up for a four-week free trial to see what you have been missing.
To access more of The Innovator’s Interview Of The Week articles click here.
| 2023-03-17T00:00:00 |
2023/03/17
|
https://theinnovator.news/interview-of-the-week-arun-sundararajan-ai-and-future-of-work-expert/
|
[
{
"date": "2023/03/17",
"position": 90,
"query": "government AI workforce policy"
}
] |
Workday's Response To AI and Machine Learning
|
Workday’s Response To AI and Machine Learning: Moving Faster Than Ever
|
https://joshbersin.com
|
[] |
And this is where Workday is focused. The new Workforce Management system (labor optimization), for example, can predict hiring and staffing needs based on ...
|
This week we met with Workday at the company’s annual Innovation Summit and I walked away very impressed. Not only is Workday clear-eyed and definitive about its AI product strategy, the company is entering one of its strongest product cycles in years. I have never seen so many Workday features reach maturity and it’s clear to me that the platform is hitting on all cylinders.
Let me start with an overview: the ERP market is big, important, and changing. Every company needs a financial and human capital system, and these platforms are being asked to do hundreds of things at once. We expect them to be easy to use, fast, and instantly configurable for our company. But we also want them to be easy to extend, open to integration with many other systems, and built on a modern architecture. How can Workday, a company founded 18 years ago, stay ahead in all these areas?
It’s actually pretty simple. Workday is not an ERP or software applications company: it’s a technology company that builds platforms for business solutions. In other words, Workday thinks “architecture first, applications second,” and this was reinforced again and again as we went through Workday’s offerings. Let me give you a few insights on what we learned, and I encourage you to contact us or read more from Workday on many of the things below.
First, Workday is quite clear that AI and Machine Learning will, over time, reinvent what business systems do. The traditional ERP world was a set of core business applications which include Financials, Human Capital (HCM), Supply Chain, Manufacturing, and later Marketing, Customer Analysis, and others. Almost every vendor who starts in one of these areas tries to move into adjacencies, primarily with the goal of “selling more software to existing customers.”
Today, while companies want to consolidate these applications (a big opportunity for Workday), the bigger goal is reinventing how these applications work together. As Workday describes it, their goal is to help businesses improve planning, execution, and analysis. When it’s hard to hire, like it will likely continue to be for years, we want the HCM system to help us find contractors, look at alternative work arrangements, and arrange financial and billing solutions to outsource work or tasks, and also find and develop internal candidates. So the “red lines” between these applications is blurring, and Workday understands this well.
In a sense this is the core of our new Systemic HR Operating Model. We want these various HCM systems, for example, to look at all four of these elements and help us manage them together. Workday’s new HCM demo actually showed some of this in action.
Beyond ERP To AI And ML At The Core
But the platform market is even moving faster. Not only do companies want a suite of apps that work together (Workday, Oracle, SAP, and others do this), they want AI and machine learning to operate across the company. And this will change what ERP systems do. Workday listed more than 50 different “machine learning” experiences the company is already delivering, and the take the form of “recommendations” or “forms pre-filled out” or “workflows pre-designed” that don’t look like magic, they just look like intelligent systems that help you run your company better. And this is where Workday is focused.
The new Workforce Management system (labor optimization), for example, can predict hiring and staffing needs based on month, weather, and other external inputs. It can then schedule workers based on their availability, skills, and wages. And it can automatically create a workforce schedule, decide when contract labor is needed, and then automatically create hiring portals and candidate experiences to find people. This is really “AI-enabled ERP” not a fancy demo of Generative AI to make emails easier to write.
Workday HCM Continues To Mature
The Workday HCM suite is in the strongest shape I’ve seen in years. The Workday Skills Cloud is maturing into a “skills intelligence platform” and it now has features that make it almost essential for a Workday customer. It can import data from any vertical or specialized skills database, it gives companies multiply ways to infer or assess skills, and it gives you dozens of ways to report on skills gaps, predict skills deficiencies, and create upskilling pathways for each employee or workforce group. I’ve watched this technology grow over the years and never before have I seen it so well put together and positioned to do what companies want.
This is not to say, by the way, that companies still need specialized skills systems for recruiting (Eightfold, Beamery, Phenom, Seekout, Paradox, iCims, others), mobility (Gloat, Fuel50), learning (Cornerstone, Docebo, Degreed), pay equity (Syndio, Trusaic, Salary.com), and many more. In some sense every HR tech platform now has a skills engine under the covers (remember, a “skill” is a series of words that describes attributes of a person) and these systems leverage these data elements for very unique purposes. Skills Cloud, in its more mature position in the market, is intended to be a “consolidation point” to bring the taxonomy into one place. (And it’s the skills engine that the Workday HCM tools rely upon.)
I know, by the way, that all Workday customers have a multitude of other HCM systems. Given the innovation cycle taking place (vendors are getting on the AI bandwagon in very creative ways), this is going to continue. But Workday’s role as the “core” remains strong, particularly because of my next point.
Workday Is Now Truly Open
I was also impressed with Workday’s progress with Extend and Orchestrate, the external APIs and development tools that enable customers and partners to build add-on applications. Workday as a company is not planning on building a lot of vertical solutions, rather they are now pushing partners (Accenture, PwC, and clients) to contribute to the app ecosystem. This creates a “force multiplier” effect where third parties can make money by building a dev team around Workday. (This, by the way, is why Microsoft is so ubiquitous: their reseller and partner network is massive.)
In addition to these programming interfaces, Workday has made a serious commitment to Microsoft Teams (Workday Everywhere). You can now view Workday “cards” within Teams and click on deep links within Teams that take you right to Workday transactions. While the company is still committed to continuous improvements in its user interface, I think Workday now understands that users will never spend all day figuring out how Workday works. I believe this trend will continue, and I encouraged Workday to consider Chat-GPT as the next major interface to build. (They were non-commital).
Vertical Applications
I asked the management team “what do you think about Oracle’s decision to buy Cerner, one of the leaders in clinical patient management? Do you think this threatens your vertical strategy?” Aneel Bhusri jumped up to argue “we would never buy an old legacy company like that – it would never integrate into our architecture.” This matters because Workday’s integrated architecture lets the company deliver AI at scale. In other words, Workday intends to be the pure-play architectural leader, and let the vertical applications come over time.
Today Workday focuses on the education market and has several vertical solutions in financial services, insurance, and healthcare (many built by partners). I don’t think the company is going to follow the SAP or Oracle strategy to build deep vertical apps. And this strategy, that of staying pure to the core architecture, may play out well in the longrun. So for those of you who want to build addons, Workday is opening up faster than ever.
What About AI In The Core?
Now let’s talk about AI, the most important technology innovation of our time. Sayan Chakraborty, the new co-president and a recognized academic expert on AI, has a very strong position. He believes that Workday’s 60 million users (many of which have opted in to be used for anonymous neural network analysis), give the company a massive AI-enabled platform already. So the company’s strategy is to double down on “declarative AI” (machine learning) and then look at Generative AI as a new research effort.
In many ways Workday as been “doing AI” since they acquired Identified in 2014, and many AI algorithms are built into the Skills Cloud, sourcing and recruiting tools, and myriad of tools for analytics, adaptive planning, and learning. Most of the product managers have AI-related features on their plates, and David Somers, who runs the HCM suite, told us there are hundreds of ideas for new AI features floating around. So in many ways Workday has been an “AI platform” for years: they’re just now starting to market it.
That said, Workday’s real data assets are not that big. Assume that 30 million Workday users have opted in to Workday’s AI platform. And let’s assume that the Skills Cloud has tried to index their skills and possibly look at career paths or other attributes. Compared to the data resident in Eightfold (over a billion user records), Seekout (nearly a billion), and systems like Retrain.ai, Skyhive, and sourcing systems like Beamery or Phenom, this is a very small amount of data. At some point Workday is going to have to understand that the HCM AI platforms of today are really “global workforce data” systems, not just customer data systems. So most of the AI we’ll see in Workday will make “your version of Workday” run a bit better.
Prism: Workday’s Strategy To Consolidate Data
Finally let me mention the growth of Prism Analytics (now referred to as just Prism), Workday’s open data platform for analytics and third party data. When the company acquired Platfora the original need was to give Workday customers a place to put “non-Workday data.” Since the Workday data platform is a proprietary, object-based database, there was no way to directly import data into Workday so the company needed a scalable data platform.
Since then Prism has grown exponentially. Initially positioned as an analytics system (you could put financial data into Prism and cross-correlate it with HR data), it is now a “big data” platform which companies can use for financial applications, HR applications, and just about anything you want. It’s not designed to compete with Google Big Query or Red Shift from AWS (at least not at the moment) but for Workday customers who want to leverage their investment in Workday security and existing applications, it’s pretty powerful.
One of the customers who spoke at the conference was Fannie Mae, who has more than $4 trillion in mortgages and loans in its risk managed portfolio. They are using Prism along with Workday Financials to manage their complex month-end close and other financial analysis. Last year I met a large bank who was using Prism to manage, price, and analyze complex banking securities with enormous amounts of calculations built in. Because Prism is integrated into the Workday platform, any Prism application can leverage any Workday data object, so it’s really a “Big Data Extension” to the Workday platform.
And that leads back to AI. If Sayan’s vision comes true, the Workday platform could become a place where customers take their transactional data, customer data, and other important business data and correlate it with Workday financial and HCM data, using AI to find patterns and opportunities. While AWS, Google Cloud, and Azure will offer these services too, none of these vendors have any business applications to offer. So part of Workday’s AI strategy is to enable companies to build their own AI-enabled apps, implemented through Extend and Orchestrate and fueled with data from Prism.
This is going to be a crowded space. Microsoft’s new Power Platform Copilot and OpenAI Azure Services also give companies a place (and method) to build enterprise AI apps. And Google will soon likely launch many new AI services as well. But for companies that have invested in Workday as their core Financial or HCM platform, there are going to be new AI apps that wind up in the Workday platform – and that drives utilization, revenue (through Extend, Prism, and Orchestrate), and even vertical apps for Workday.
Workday’s Position For The Future
In summary, Workday is well positioned for this new technology revolution. I did challenge the management team to consider ChatGPT as a a new “conversational front end” to the whole system and they agreed that it is on their list of things to look at.
(By the way, the creative solutions coming to HR in Generative AI are going to blow your mind. I’ll share more soon.)
For enterprise buyers, Workday remains rock solid. With only a few major competitors to think about (Oracle, SAP, UKG, Darwinbox, ADP), the company is likely to continue to grow market share for large companies. There will be pricing pressure because of the economy, but for companies that want a first-class technology platform for core Finance and HR, Workday will continue to be a leader.
Additional Resources
The Role Of Generative AI And Large Language Models in HR
New MIT Research Shows Spectacular Increase In White Collar Productivity From ChatGPT
LinkedIn Announces Generative AI Features For Career, Hiring, and Learning
Microsoft Launches OpenAI CoPilots For Dynamics Apps And The Enterprise.
Understanding Chat-GPT, And Why It’s Even Bigger Than You Think (*updated)
Microsoft’s Massive Upgrade: OpenAI CoPilot For Entire MS 365 Suite.
| 2023-03-17T00:00:00 |
2023/03/17
|
https://joshbersin.com/2023/03/workdays-response-to-ai-and-machine-learning-moving-faster-than-ever/
|
[
{
"date": "2023/03/17",
"position": 11,
"query": "machine learning workforce"
}
] |
The Rise of the Machines: Is Your IT Job at Risk?
|
The Rise of the Machines: Is Your IT Job at Risk?
|
https://www.thinkhdi.com
|
[] |
With the introduction of artificial intelligence (AI) and machine learning, various industries are experiencing significant disruptions. This has led to ...
|
Machine learning and AI is already automating some tasks which once required human effort, but the new wave of technology also will open up new possibilities to IT professionals with the right skills.
The pace of technological advancement is rapidly transforming the world, and the IT industry is not immune to this change. With the introduction of artificial intelligence (AI) and machine learning, various industries are experiencing significant disruptions. This has led to debates about how automation will impact IT jobs. While some experts suggest that automation will enhance the efficiency of IT jobs, others express concerns that it could lead to significant job losses.
The truth is that automation is already transforming the IT industry, and its impact will only increase in the coming years. In this article, we will explore the rise of the machines and the potential impact on IT jobs.
Automation is already making many IT jobs more efficient. Tasks that were once done manually are now being automated, freeing up time for IT professionals to focus on more strategic tasks. For example, automation is being used to manage networks, perform security checks, and even analyze data. These tasks can be done faster and more accurately by machines, freeing up IT professionals to focus on tasks that require human intelligence and creativity.
Many fear that machines will take over tasks that were previously done by humans, leading to a reduction in the number of IT jobs available. While it is true that some jobs may be automated, it is important to remember that automation will also create new jobs that require skills in areas such as AI, machine learning, and robotics. In addition, IT professionals will be needed to manage the machines that are performing the automated tasks.
In the future, IT professionals will need to develop new skills to stay relevant in a world where automation is increasingly prevalent. They will need to learn how to manage and work alongside machines, and develop skills in areas such as AI, machine learning, and data analytics. The IT industry is also likely to see an increase in demand for jobs that require human creativity and problem-solving skills.
In the constantly evolving IT industry, reskilling is crucial for IT professionals to remain relevant and excel in their careers. The ability to adapt to the changing landscape will differentiate successful IT professionals from others. For instance, there will be a significant demand for IT professionals with skills in AI and machine learning, as these technologies become more mainstream in the industry.
Employers play a crucial role in reskilling their workforce, as well. They can offer training and development opportunities to help their employees acquire new skills and stay up to date with the latest technologies. By investing in their workforce, employers can ensure they have a skilled and adaptable workforce that can navigate the challenges of automation and technological disruption.
Jeff Rumburg is the Managing Partner and co-founder of MetricNet, LLC. MetricNet is the leading source of benchmarks and metrics for IT service and support professionals worldwide. For more information, please go to www.metricnet.com.
| 2023-03-17T00:00:00 |
https://www.thinkhdi.com/library/supportworld/2023/it-automation-and-future-of-work
|
[
{
"date": "2023/03/17",
"position": 33,
"query": "machine learning workforce"
}
] |
|
LAUSD families prepare for school closings in possible ...
|
LAUSD families prepare for school closings in possible strike but hope for contract
|
https://edsource.org
|
[
"Kate Sequeira"
] |
Fresno Unified spokesperson resigns after debacle over AI. Wednesday, July ... The labor union declared an impasse in December and voted to authorize a ...
|
July 10, 2025 - Hundreds of thousands of children could lose federally-funded food stamps and health care under the new law.
Credit: SEIU Local 99
Los Angeles Unified schools, parents and students are preparing for campus closures and a major disruption to learning during a threatened three-day strike next week by its service workers union. However, many are still hoping that a contract settlement will emerge from last-minute negotiations and the strike will be averted in the 422,000-student district.
SEIU Local 99, which represents nearly 30,000 custodians, special education aides and other essential workers across the district, said it will move forward with a work stoppage Tuesday through Thursday if the district and the union do not come to an agreement within the next few days over pay and working conditions. United Teachers Los Angeles, the union representing another 35,000 teachers and other employees, is planning to join SEIU Local 99 on the picket line, closing schools for the nation’s second-largest school district.
In the meantime, families and school administrators are preparing for a major disruption to instruction. Parents worry about finding child care and activities for their youngsters and want a quick contract settlement.
“The children will lose control of their learning,” said Maria Baños in Spanish, concerned about her three children who attend LAUSD schools. “When they return, will they go back and learn what they should’ve? Either way, they fall behind.”
District messages have gone out to families, and administrators have been holding informational online and in-person meetings to spread the word to families about the potential school shutdowns. The district is putting together digital homework assignments to try to minimize any further learning losses suffered during the pandemic, although some parents seem skeptical of the value.
The potential strike has put a strain on parents like Heidy Galicia, whose four children attend two of LAUSD’s Southeast Los Angeles schools. She said she will likely take the days off from work to stay home with her children.
“I can’t leave them alone,” Galicia said in Spanish. “So I will have to stop working to stay with them. In that way, it affects me too.”
She has received calls from other parents asking her to take their children in as well for the three days. It’s an extra responsibility that she’s hesitant to take on but also one she feels she has to in order to support her community, she said.
Rocio Elorza, alongside parents from group Our Voice/Nuestra Voz, also shares those concerns as a stay-at-home mom with two children at LAUSD schools in Central L.A. She said she plans to reach out to other parents in her community to see how she can help make the strike easier on them by perhaps helping with childcare.
“We must think about how we are going to support our community,” Elorza said. She typically attends an LAUSD adult school during the day but assumes she may not have classes.
Meanwhile, LAUSD and SEIU Local 99 will continue to bargain Friday and Monday in an effort to reach an agreement. If it occurs, the upcoming strike would be the largest work stoppage to hit LAUSD schools since the six-day teachers strike in 2019.
The district decided it could not possibly keep schools open and operating during the strike. But administrators, alongside any employees who choose to attend, will be on school campuses during the three days to guide families who show up for school and field any questions.
The district also plans to deploy staff to provide support for any children who show up alone and whose parents can’t be reached right away and to deal with any disturbances that might arise. The district also plans to open about 60 sites to distribute food and provide supervision for some students. There are concerns that some students who usually receive free lunch at school may miss important nutrition.
To take pressure off school administrators, the district’s Division of Instruction and IT department are putting together uniform assignments for students by grade in the online platform Schoology that will be easily accessible, according to Associated Administrators of Los Angeles President Nery Paiz. With the experience of navigating the pandemic online, Paiz said the district is better equipped to accommodate the strike than it was during the teachers strike in 2019.
“I think that we’re doing more digitally by this go-around,” he said. “A lot of the schools, a lot of the families have the technology at home, and the students have technology. It helped to pivot, putting stuff electronically in Schoology folders for students to access meaningful work.”
Still, parents aren’t sure exactly how effective the assignments will be. Baños said she hasn’t heard from her children’s schools in South L.A. about what those assignments will look like. She’s hoping to learn more once she attends her parent conference, but she still feels her three children attending LAUSD schools will fall behind.
“For me as family, it impacts me,” Baños said. “It’s three children, and those three children will have to lose three days of learning.”
At a news conference Wednesday, Superintendent Alberto Carvalho emphasized that the district was prepared to meet with the labor union around the clock to avoid a strike and potential repercussions.
“I commit myself 24/7, day and night, along with their team and our team, to find a solution that will avoid and avert a strike, will avoid keeping kids home, will avoid kids from going hungry in our community without access to food they get in school,” he said.
The union contends the strike would be a necessary last resort after not making real progress toward boosting wages for some of the lowest-paid employees in the district.
“This strike is about respect for essential workers who have been treated as a second-class workforce by LAUSD for far too long,” SEIU Local 99 Executive Director Max Arias said in a news release Thursday. He emphasized that workers have dealt with intimidation and harassment for their union activities and that dozens of complaints alleging unfair labor practices have been filed with California Public Employment Relations Board against LAUSD.
The labor union declared an impasse in December and voted to authorize a strike vote in February following months of negotiations after its contract expired in 2020. It canceled its contract extension last week, paving the way for the strike.
The labor union is pushing for members’ salaries to increase 30% and for the district to provide health benefits to all part-time employees. The district is offering a 15% raise over three years retroactive to July 2021 as well as a 9% retention bonus over two years. The district has also offered to provide health benefits to employees who work four hours a day.
UTLA is currently negotiating with the district as well and is asking for a 20% raise over two years.
Both labor unions have been pushing the district to use its $4.9 billion in reserves to pay for the raises in new contracts. However, the district has emphasized that much of that is one-time Covid-19 relief funds that are committed to covering other costs, leaving only $140 million in flexible spending.
| 2023-03-17T00:00:00 |
https://edsource.org/2023/lausd-families-prepare-for-school-closings-in-possible-strike-but-hope-for-contract/686999
|
[
{
"date": "2023/03/17",
"position": 93,
"query": "AI labor union"
}
] |
|
Transform your business with the artificial intelligence ...
|
Transform your business with the artificial intelligence revolution
|
https://enacment.com
|
[
"Nicolas Zubiaur",
"Con Más De Años De Experiencia En Desarrollo De Software Y Tecnología",
"Ayudo A Negocios A Alcanzar Su Máximo Potencial Con Soluciones Innovadoras Y Escalables."
] |
According to PwC, 72% of business leaders believe that AI is a competitive advantage for decision making [2].
|
Introduction
Artificial intelligence (AI) has become a key enabler for driving digital transformation in companies across all industries. The adoption of advanced technologies such as ChatGPT and GPT-4, offered by OpenAI, enables organizations to improve their internal and external processes, optimize decision making and deliver personalized experiences to their customers. In an increasingly competitive and demanding world, AI is a key tool to stay ahead in the race for innovation and sustainable growth.
In this article, we will discuss how the adoption of AI solutions, such as ChatGPT and GPT-4, can offer significant benefits to companies in different industries. We will explore concrete cases that demonstrate the positive impact of these technologies in areas such as operational efficiency, decision making, innovation and customer satisfaction.
Operational efficiencies in e-commerce and customer service
Implementing AI in business processes can significantly increase operational efficiency. For example, ChatGPT can handle customer support queries in the e-commerce industry, reducing response time and allowing human agents to focus on more complicated cases. According to a McKinsey report, companies that adopt AI can increase productivity by 40% and reduce operating costs by up to 25% [1].
Optimization of decision making in the supply and logistics chain
AI, such as GPT-4 and ChatGPT, can help analyze large amounts of data to extract useful information and detect trends, facilitating data-driven decision making in industries such as supply chain and logistics. For example, OpenAI offers models capable of analyzing purchasing patterns and helping companies optimize their supply chain and predict demand. According to PwC, 72% of business leaders believe that AI is a competitive advantage for decision making [2].
Innovation and growth in the financial and technology sector
The adoption of AI technologies such as ChatGPT and OpenAI can drive innovation and growth by enabling companies to develop new solutions and services. These models can be used in content creation, product design, marketing and advertising in the financial and technology sectors, among others. It is estimated that artificial intelligence could add $15.7 trillion to global GDP by 2030, according to PwC [3].
Personalization and customer satisfaction in the marketing and advertising industry
AI makes it possible to offer personalized experiences tailored to the individual needs of each customer. By using AI models, such as GPT-4, companies can analyze customer behavior and adapt their offers and communications accordingly in the marketing and advertising industry. According to Accenture, 91% of consumers prefer to buy from companies that recognize and remember their preferences [4].
Conclusion
Implementing artificial intelligence solutions such as ChatGPT and GPT-4 in your company can make a significant difference in the way you operate and compete in the marketplace. AI offers a wide range of benefits, from improved operational efficiency to optimized decision making and product and service innovation. Companies that embrace AI not only improve their current operations, but also position themselves for long-term sustainable success.
In addition, AI can significantly improve customer satisfaction by delivering personalized experiences tailored to their individual needs. The ability to anticipate customer expectations and adapt offerings and communications accordingly is a crucial factor for success in today’s digital economy.
Don’t wait any longer to take advantage of the benefits that artificial intelligence, such as ChatGPT and GPT-4, can bring to your business. Contact us at Enacment and find out how our custom software development services can help you implement AI solutions that boost your company’s success. Take the first step towards digital transformation and keep your business at the forefront in an increasingly competitive world.
| 2023-03-17T00:00:00 |
2023/03/17
|
https://enacment.com/en/transform-your-business-with-the-artificial-intelligence-revolution/
|
[
{
"date": "2023/03/17",
"position": 34,
"query": "artificial intelligence business leaders"
}
] |
How IBM Consulting brings a valuable and responsible ...
|
How IBM Consulting brings a valuable and responsible approach to AI
|
https://www.ibm.com
|
[] |
Business leaders need to consider trusted vendors and partners with expertise in data, machine learning and AI, data and AI governance.
|
In the first and second part of this three-part series, we looked at definitions and use cases of generative AI. We’ll now explore the approach IBM Consulting takes when embarking upon AI projects.
As business leaders investigate the best way to apply generative AI to their enterprise at scale, they need to consider trusted vendors and partners with expertise in data, machine learning and AI, data and AI governance, and proven capabilities of scaling applied AI within enterprises across industries and geographies.
IBM Consulting has capabilities in Foundation Models delivery at scale.
IBM Consulting brings industry expertise to understand the regulatory constraints and how to derive value with AI by augmenting specific workflows.
IBM has close strategic partnerships to scale AI projects and has won many awards in this regard, including the US 2022 AWS Innovation Partner of the year .
The mission of IBM Consulting is to drive business transformation with hybrid cloud and AI in a way that is valuable and responsible. We formally stood up our AI Ethics Board in 2018 to ensure that AI systems created at IBM are developed and deployed ethically. The board is comprised of senior leaders from research, business units, human resources, diversity and inclusion, legal, government and regulatory affairs, procurement, and communications, who have the authority to direct and enforce AI-related initiatives and decisions.
In the same year, IBM published its own principles of trust and transparency and offers them as a roadmap to others working with and implementing artificial intelligence. They focus on the following:
The purpose of AI is to augment human intelligence;
Data and insights belong to their creator; and
New technology, including AI systems, must be transparent and explainable.
These are the values by which we approach any work involving artificial intelligence: to enhance—not replace—human intelligence; to deliver client success without the requirement that clients relinquish rights to their data—nor the insights derived from that data—even when it is stored or processed by IBM; to provide clarity about who trains our AI systems, what data was used in that training and, most importantly, what went into an algorithm’s conclusions or recommendations. These principals are further supported by our defined pillars of trust, which we have dedicated time and resources to research, implement and disseminate:
Explainability: How an AI model arrives at a decision should be able to be understood Fairness: AI models should treat all groups equitably Robustness: AI systems should be able to withstand attacks to the training data Transparency: All relevant aspects of an AI system should be available to the public for evaluation Privacy: The data used in AI systems should be secure, and when that data belongs to an individual, the individual should understand how it is being used
Generative AI and large language models (LLMs) introduce new hazards into the field of AI and we do not claim to have all the answers to the questions that these new solutions introduce. IBM understands that driving trust and transparency in artificial intelligence is not a technological challenge, it is a socio-technological challenge.
80% of efforts in artificial intelligence get stuck in proof of concept for reasons ranging from misalignment to business strategy, to mistrust in the model’s results. IBM brings together vast transformation experience, industry expertise, proprietary and partner technologies and IBM Research to work with clients wherever they are on their AI journey. With this combination of skills and partnerships, IBM Consulting is uniquely suited to help businesses build the strategy and capabilities to operationalize and scale trusted AI to achieve their goals.
Currently, IBM is one of few in the market that both provides AI solutions and has a consulting practice dedicated to helping clients with the safe and responsible use of that AI. IBM Consulting helps clients establish the organizational culture needed to safe-handle AI, build multi-disciplinary and diverse teams and think through risks and unintended effects. We work with businesses to identify low-risk uses cases, to assess, educate, and communicate across the organization and to stand up their own internal AI ethics board.
IBM embraces an open ecosystem approach, working with IBM technology as well as a diverse set of ecosystem partners including AWS, Microsoft Azure, Google Cloud, Salesforce, and others, designing intelligence and productivity across mission critical workflows and systems. IBM Garage methodology co-creates, co-executes, and co-operates with enterprise teams to quickly ideate, pilot, test and scale projects. In the co-innovation phases, we employ ethics-driven exercises to ensure that our intentions match our actions.
IBM can help companies put AI into action today to re-imagine workflows with AI, to automate end-to-end enterprise processes, to replace mundane tasks to achieve productivity gains with AI-driven decision making, personalize employee and customer interactions, and more. Our AI services include:
Analytics and AI to build, train and deploy AI and ML models for your business. We will work together to integrate bespoke models into your operations, continually revise and optimize them over time. AI and Automation Advisory to integrate best of breed AI and Automation solutions for full stack observation and orchestration, driving highly automated and predictive IT Operations across business processes, applications and hybrid clouds. Full-Service Automation to leverage IBM’s full suite of technology and services platforms that enables straight through “touchless” processing with minimal human involvement.
IBM has a number of resources to help you learn more about AI and Automation services including research about the open-source tools available to activate against trust & transparency and IBM AI Ethics. You can also learn more about this three-part series by reading the first or second installment, or reaching out to an expert for start a conversation about your needs.
| 2023-03-17T00:00:00 |
https://www.ibm.com/products/blog/how-ibm-consulting-brings-a-valuable-and-responsible-approach-to-ai
|
[
{
"date": "2023/03/17",
"position": 37,
"query": "artificial intelligence business leaders"
}
] |
|
The Emerging AI Trends in Sales & Account Management
|
The Emerging AI Trends in Sales & Account Management
|
https://www.demandfarm.com
|
[
"Abhijit Gangoli",
"Co-Founder At Demandfarm",
"The Growth Master",
"Loading Posts..."
] |
AI-powered chatbots and virtual assistants: · Predictive analytics for sales and account management: · Machine learning for customer experience improvement:.
|
Artificial intelligence (AI) is a rapidly evolving technology that is transforming various industries. Sales, account management, and sales enablement are no exception. According to a recent report, the global AI market in sales is projected to reach USD 12.4 billion by 2025, up from USD 1.3 billion in 2019. This rapid growth in the use of AI in sales and account management underscores the technology’s importance to businesses worldwide.
Imagine you’re a sales manager for a large retail company with hundreds of sales representatives operating in different regions. Your job is to ensure that the sales team hits their targets every quarter. You must also provide regular updates on the team’s performance to the executive team. Imagine how challenging it is to keep track of each sales rep’s performance and to provide actionable insights to help them improve.
This is where AI comes in. With AI, you can automate various processes such as lead generation, customer segmentation, and sales forecasting. It also becomes easier for you to guide and coach your team to improve their performance.
If you want to know how AI will shape the world of sales in 2023, this article will help. Discover the latest predictions made by leading industry analysts on the role and impact of AI in sales, account management and sales enablement.
Overview of the Current State of AI in Sales, Account Management, and Sales Enablement
AI has been transforming the sales and account management landscape for several years now. With the ability to process vast amounts of data, analyse patterns, and make predictions, AI helps in improving business performance and aids sales growth. Recent data shows that when AI is used to aid sales processes, leads increase by 50%, call times and overall costs are reduced by about 60%.
A brief history of AI in Sales, Account Management, and Sales Enablement
AI has been a buzzword in the technology industry for many years, but its practical application in sales and account management has gained momentum only recently. Sales and account management professionals have been using AI to automate tasks and optimize performance.
For example, sales teams use AI to generate leads, personalize sales pitches and forecast sales. Account managers are using AI to segment customers, monitor contract renewals, and upsell or cross-sell. Sales enablement tools use AI to deliver personalized content, track engagement and improve sales reps’ productivity.
Current State of AI Implementation in Sales and Account Management
The use of AI in sales and account management is growing at a rapid pace. AI can automate repetitive tasks like data entry, lead prioritization, and follow-up emails. It can also provide insights into customer behavior predicting the likelihood of churn, the probability of purchase, and the optimal time to connect with customers. AI-powered chatbots are becoming more prevalent in sales and account management, providing instant support to customers and prospects. Close to 70% of users enjoy the speed at which chatbots answer.
Overview of Sales Enablement Technologies that Use AI
Sales enablement tools have been integrating AI to improve their functionality and deliver better results. AI-powered sales enablement technologies can help sales teams with lead scoring, sales forecasting and personalization. They can also assist with content creation, management and distribution. Over 40% of marketers agree that using AI for email marketing generates higher market revenue.
AI can also provide insights into content performance, enabling teams to adjust their sales strategy and approach based on what’s actually working. AI can also help sales reps prioritize their leads and opportunities, ensuring focus on the most promising leads.
Ebook: AI-Assisted Account Planning – Conversations of the Future Part 1
Revolutionizing Sales and Account Management: The Emerging AI Trends
As AI continues to make significant inroads in sales, account management and sales enablement, new trends and applications will continue to emerge. Some of the most promising and exciting developments in AI for these fields include chatbots and virtual assistants, predictive analysis, machine learning (ML) to improve customer experience, and AI-powered sales performance monitoring and improvement.
AI-powered chatbots and virtual assistants:
In recent years chatbots and virtual assistants have become increasingly popular in sales and account management. Through extensive use of natural language processing (NLP) and machine learning (ML), these tools can automate routine interactions, answer commonly asked questions and even make product recommendations. According to Grand View Research, the global chatbot market is expected to grow at a compound annual growth rate (CAGR) of 25.7% from 2022 to 2030 and reach USD 3.99 billion by 2030.
Predictive analytics for sales and account management:
Predictive analytics can help sales and account management teams identify potential customers and personalize their outreach.
Through the analysis of data including purchase history, website behavior, and social media activity, predictive analytics tools can identify which prospects are most likely to convert. They are also able to determine which messages will resonate with them. With over 50% of companies worldwide leveraging advanced and predictive analytics, the benefits are obviously widespread.
Machine learning for customer experience improvement:
ML is used by sales and account management teams to better understand their customers and improve customer experience. By analyzing data from emails, chat logs, and social media, ML algorithms can identify patterns and threads in customer behavior.
With these reports sales teams can receive personalized recommendations and tailor their approach to each customer. A study by Salesforce shows that almost 90% of customers believe that the experience offered by a company is equally significant to its products or services.
Automation in sales and account management:
In sales and account management, automation helps with streamlining workflows, helping teams focus on high-value tasks. By automating tedious and time-consuming tasks like data entry, lead prioritization and follow-up emails, sales teams can save time. Thus efficiency is also improved. This helps to free up time for more productive activities like lead generation and nurturing.
Research by McKinsey Global Institute has shown that over one-third of sales-related activities can be automated. Those who implement sales automation early on consistently report positive outcomes. These include more time spent with customers, improved customer satisfaction, 10%-15% efficiency gains, and the potential of a sales uplift of up to 10%.
AI-powered sales performance monitoring and improvement:
AI technology can sift through call transcripts, email responses and even meeting notes. Through this data, AI-powered tools can provide useful perspectives on sales process efficiency and the likelihood of deal conversions. This helps sales teams monitor their performance and identify areas for improvement. There’s plenty of scope here as recorded in The Sales Mastery 2022 survey. The report states that the average ratio of time spent on selling v/s non-revenue producing activities by salespeople is 30:70.
Advancements in AI for Sales – What to Expect in 2023
Gartner, Forrester and IDC are some of the leading industry analysts that have made predictions about the role of AI in sales, account management and sales enablement.
Presentation of Key Predictions by Industry Analysts
Gartner predicted that AI would be increasingly used in sales operations by 2023. The report predicted that AI would take over repetitive sales tasks through automation. Sales forecasts would also improve and sales data analysis enhanced. Personalization of customer experiences and the identification and prioritization of the best leads would also be assigned to AI.
That chatbots will become more prevalent in sales and account management was predicted by Forrester. According to the report, chatbots will be used to provide instant support to prospects and existing customers, improve lead engagement and reduce sales cycle time.
They further predict that AI will be increasingly used to analyze customer behavior and provide insights to reps. This would enable them to offer personalized recommendations and anticipate customer needs. This is important because over 85% of customers want businesses to be proactive in contact and communication.
IDC made predictions on the use of AI in the optimization of sales enablement. Their prediction was that AI will be used to identify and customize relevant content for sales reps, provide sales training and improve sales reps’ productivity. AI will also be used to analyze the effectiveness of sales enablement programs and to optimize the sales process.
The predictions made by industry analysts underscore the growing importance of AI. As businesses aspire to remain competitive and improve customer engagement, the use of AI in these areas will become increasingly important.
Here’s how AI is poised to specifically influence areas of sales, account management and sales enablement:
AI will enable more personalized experiences: 2023 predictions include AI continuing to drive personalization. This allows sales reps to better understand and respond to the specific needs of each customer. Recommendations will see a high degree of personalization, customer experiences will be enhanced and overall results will be better. AI will improve lead scoring and qualification: AI will help sales teams by efficiently sifting through data and qualifying the better leads for follow-ups and opportunities to be explored. Reports on customer behaviour and the probability of conversion will result in better lead prioritization and the overall effectiveness of the sales process. AI will automate more sales tasks: In 2023, AI will continue to be used to automate mundane and repetitive tasks. By doing this sales reps will be free to focus less on administrative activities and more on revenue-generating and relationship-building activities. As a result, deal closing rates will see improvement. AI will drive sales forecasting: AI can analyze customer behavior and predict future trends. Using this information, sales teams can make appropriate changes to their sales strategy. They can also alter their approach to better align with customer needs and ultimately drive better results. AI will improve the effectiveness of sales enablement tools : AI-powered sales enablement tools will become more sophisticated in 2023, providing more personalized content recommendations and insights into content performance. AI can help sales reps to prioritize their content and ensure that they are providing the most relevant information to their customers, thus improving the effectiveness of sales enablement tools. AI will enable better collaboration between sales and marketing: By providing valuable information about customer behavior and preferences, sales and marketing teams can collaborate better. They can deliver more personalized and effective messaging to customers and improve business results.
Real-World Examples of AI in Sales, Account Management and Sales Enablement
In the real world, companies across various industries are implementing AI in their sales and account management processes. Whether it is to deal with routine queries, analyze data, or improve coaching and training, AI technology has found its way into improved processes and better business results.
Salesforce, a global leader in customer relationship management (CRM) software, has integrated AI into its CRM platform with Einstein AI. It presents sales and account management with a range of features including lead scoring, predictive forecasting, and intelligent opportunity management.
Einstein AI automates lead scoring and prioritization. Thus sales teams can focus their efforts on the most promising leads, increasing the efficiency of the sales process. Sales reps gain insights into customer behavior and preferences, allowing them to tailor their approach to each customer.
Predictive behavior is also available, helping sales teams to anticipate future revenue and identify growth opportunities. With the help of AI, sales teams can analyze historical sales data and identify patterns and trends indicative of future success. This allows companies to make better decisions about their sales strategy and allocate resources more effectively.
AI technology supports intelligent opportunity management. Sales reps are provided with recommendations for the best actions at each stage of the sales cycle. Customer behavior is analyzed and historical data leveraged, to deliver personalized guidance for moving deals forward. Additionally, Pitchlane, as a leading video outreach tool, complements these AI capabilities by enabling sales teams to create personalized video messages for prospects, significantly enhancing engagement and conversion rates.
A company that has successfully utilized AI for sales enablement is DocuSign – a cloud-based electronic signature and document management company. The company’s sales team was struggling to keep up with the sheer volume of customer inquiries, making it tough to close deals quickly.
To solve this problem, DocuSign implemented an AI-powered chatbot that could help resolve common customer queries and provide quick responses about the company’s products and services.
The chatbot, which is available 24/7, has been able to handle over 90% of customer inquiries, allowing the sales team time to focus on more complex tasks. This has resulted in a 10% increase in sales and a 50% reduction in response times. The chatbot has allowed customers access to the information they need quickly and easily and has thus helped improve customer experience and engagement.
Best Practices and Tips for Businesses Looking to Implement AI in Sales and Account Management
While implementing AI in sales and account management is no piece of cake, there are some best practices that businesses can follow:
Start with a clear use case – Before implementing AI for the sake of it, have a clear understanding of what problem you are trying to solve and how AI can help. This will help ensure that you’re investing in the right technology. You will also be able to measure the impact of your investment. Involve your sales team – Your sales team will be the ones most using and benefiting from AI-powered tools. It’s therefore important to involve them in the decision-making process. Get them to contribute to the features and functionality that would be most useful to them. Make sure they are comfortable with the technology. Prioritize data quality – The quality of predictions and recommendations made by AI depends on the data input. Make sure your data is of high quality – clean, consistent and up-to-date – before implementing AI. Start small – It’s easy to get overwhelmed by the potential of AI, but remember to start small and focus on a few key use cases. This will help you gain experience with the technology and build a foundation for future expansion. Measure and iterate – once you’ve implemented AI, it’s important to measure the impact of your investment and iterate as needed. Use data to track the performance of your AI-powered tools and adjust strategy as necessary.
Driving Digital Account Planning and Management with AI: The DemandFarm Advantage
DemandFarm is a leader in digital account planning, offering a product that enhances the account planning process by providing contextual insights to account managers.
With a modular deployment approach, the solution can be tailored to an organization’s specific needs, and the connected app environment delivers a holistic user experience. The tool’s AI capabilities are designed to provide meaningful insights to experienced account managers, rather than trivial nudges based on peripheral data.
The platform’s analytics quotient and clean UI are also endorsed by industry analysts. DemandFarm’s productized solutions, gamified onboarding, and training help facilitate a successful transition from online to digital account management.
The company’s focus on change management, adoption and customer experience makes them a trusted partner for any organization seeking to embrace the benefits of digital Key Account Management.
| 2023-03-13T00:00:00 |
2023/03/13
|
https://www.demandfarm.com/blog/analysts-predictions-of-ai-for-sales-account-management-in-2023/
|
[
{
"date": "2023/03/17",
"position": 95,
"query": "artificial intelligence business leaders"
}
] |
Hirebee.ai
|
Top HR Recruitment Software & ATS
|
https://hirebee.ai
|
[] |
But with AI-based ATS and advanced recruitment software, businesses can streamline their hiring process, find top talent faster, and make better hiring ...
|
Hiring can be a time-consuming challenge, especially for small and medium-sized businesses. Traditional methods no longer keep up with the fast-changing hiring landscape, making recruitment even more complex—especially with remote hiring.
But with AI-based ATS and advanced recruitment software, businesses can streamline their hiring process, find top talent faster, and make better hiring decisions—locally, globally, and remotely.
| 2023-03-17T00:00:00 |
https://hirebee.ai/
|
[
{
"date": "2023/03/17",
"position": 36,
"query": "artificial intelligence hiring"
}
] |
|
A New Era in HR Strategy: The Role of AI Interviews
|
A New Era in HR Strategy: The Role of AI Interviews
|
https://interviewer.ai
|
[] |
Although AI has been a buzzword for many years, its application in recruitment and hiring processes is still relatively new. However, AI-powered interviews are ...
|
We use cookies on our website to give you the most relevant experience by remembering your preferences and repeat visits. By clicking “Accept”, you consent to the use of ALL the cookies.
| 2023-03-17T00:00:00 |
https://interviewer.ai/ai-interviews-hr-strategy/
|
[
{
"date": "2023/03/17",
"position": 60,
"query": "artificial intelligence hiring"
}
] |
|
Automated Employment Decision Tools · NYC311 - NYC.gov
|
Automated Employment Decision Tools · NYC311
|
https://portal.311.nyc.gov
|
[] |
Machine learning; Statistical modeling; Data analytics; Artificial intelligence. Under NYC law, to use an AEDT, employers or employment agencies must: Make sure ...
|
Need something else? Discrimination to report unfair treatment involving an AEDT
An automated employment decision tool (AEDT) is a computer-based tool. It is used to screen job candidates or assess employees.
An AEDT helps with employment decisions by using:
Machine learning
Statistical modeling
Data analytics
Artificial intelligence
Under NYC law, to use an AEDT, employers or employment agencies must:
Make sure a bias audit was done before using it
Post a summary of the results of the bias audit on their website
Notify job candidates and employees that an AEDT will be used to assess them
Give instructions to request a reasonable accommodation
Post a notice about the type and source of the data used for the tool and data retention policy on their website
Learn more about AEDTs.
File a Complaint
You can report an employer or employment agency that used an AEDT but did not:
Make sure the required bias audit was done
Post a summary of the results of the bias audit
Give required notices
Your complaint must include:
| 2023-03-17T00:00:00 |
https://portal.311.nyc.gov/article/?kanumber=KA-03552
|
[
{
"date": "2023/03/17",
"position": 81,
"query": "artificial intelligence hiring"
}
] |
|
Learn About AI
|
Learn About AI — aiEDU
|
https://www.aiedu.org
|
[] |
Become an everyday expert in artificial intelligence. Craft your own experience with our self-paced learning options. Get hands on with AI in your life.
|
AI Challenges
Short & fun challenges designed just for you. Influence how AI works, experiment with AI in your life, and share your findings with your community.
| 2023-03-17T00:00:00 |
https://www.aiedu.org/learn
|
[
{
"date": "2023/03/17",
"position": 67,
"query": "artificial intelligence education"
}
] |
|
Sam Altman 'a Little Bit Scared' of ChatGPT, Will Eliminate ' ...
|
Sam Altman admits OpenAI is 'a little bit scared' of ChatGPT and says it will 'eliminate' many jobs
|
https://www.businessinsider.com
|
[
"Jyoti Mann"
] |
The CEO of OpenAI told ABC News that artificial intelligence could replace many jobs, but that it should also lead to "much better" roles.
|
This story is available exclusively to Business Insider subscribers. Become an Insider and start reading now.
The CEO of OpenAI admitted he's "a little bit scared" of his ChatGPT creation and warned that it could "eliminate" many jobs.
In an interview with ABC News on Thursday, Sam Altman said that "people should be happy" that the company was "a little bit scared" of the potential of artificial intelligence.
"I think if I said I were not, you should either not trust me, or be very unhappy I'm in this job," he said.
Altman also said artificial intelligence could replace many jobs, but that it could also lead to "much better ones".
"The reason to develop AI at all, in terms of impact on our lives and improving our lives and upside, this will be the greatest technology humanity has yet developed," he said.
The 37-year-old told ABC that he's in "regular contact" with government officials and said regulators and society should be involved with ChatGPT's rollout. Feedback could help curb any negative outcomes from its widespread use.
The entrepreneur warned last month in a series of tweets that the world may not be "that far from potentially scary" artificial intelligence. Altman expressed support for regulating AI in the tweets and said rules were "critical," and that society needed time to adjust to "something so big."
OpenAI this week unveiled GPT-4, its latest ChatGPT model, which Altman described as "less biased" and "more creative" than earlier versions. It's only available to users who pay for its Plus subscription.
The latest version is capable of processing image prompts, is said to be more accurate than other versions, and users can have lengthier conversations with it.
The OpenAI chief said on Tuesday that it can pass the bar exam for lawyers and is capable of scoring "a 5 on several AP exams". It is already being used by teachers to help generate lesson plans and quizzes for students.
OpenAI didn't immediately respond to a request for comment from Insider, made outside normal working hours.
| 2023-03-18T00:00:00 |
https://www.businessinsider.com/sam-altman-little-bit-scared-chatgpt-will-eliminate-many-jobs-2023-3
|
[
{
"date": "2023/03/18",
"position": 18,
"query": "AI job creation vs elimination"
}
] |
|
OpenAI CEO Worried That ChatGPT May 'Eliminate Lot Of ...
|
OpenAI CEO Worried That ChatGPT May 'Eliminate Lot Of Current Jobs'
|
https://vocal.media
|
[] |
As more jobs become automated, new jobs will be created. For example, the development and maintenance of AI chatbots like ChatGPT will require human workers.
|
OpenAI's ChatGPT and the Future of Jobs
OpenAI's latest prototype, ChatGPT, is a powerful AI chatbot system that can generate relatively plausible and authoritative-sounding answers, while also being able to show creativity. However, OpenAI CEO, Sam Altman, is worried about the impact of ChatGPT on the job market. In this blog, we'll explore the potential impact of ChatGPT on jobs and the economy.
What is ChatGPT?
ChatGPT is a sibling model to InstructGPT, both of which are built on the GPT-3.5 model. ChatGPT, in particular, is trained to provide conversational answers to a user's natural language prompts. It uses vast swathes of information harvested from the internet to generate responses, but it does not possess true knowledge.
The Impact on Jobs
There is no doubt that AI chatbots like ChatGPT will have a significant impact on the job market. These chatbots can replace human customer service representatives and even journalists. In fact, ChatGPT can already answer questions, write essays, summarize documents, and write software, which are tasks that were previously only possible with human intervention.
However, the impact of ChatGPT on jobs is not all negative. As more jobs become automated, new jobs will be created. For example, the development and maintenance of AI chatbots like ChatGPT will require human workers. Additionally, the adoption of AI chatbots can lead to increased efficiency and productivity, which can ultimately lead to economic growth.
The Future of Work
As AI chatbots like ChatGPT become more advanced, they will become more integrated into our daily lives. This integration will change the way we work and the types of jobs that are available. Workers will need to adapt to these changes by acquiring new skills and knowledge.
Governments and businesses will also need to take action to mitigate the negative impact of AI on jobs. This can be done through policies such as retraining programs, income support for displaced workers, and investment in new industries that are less susceptible to automation.
Will AI Go Down In Future?
Artificial intelligence (AI) has been a buzzword in the tech industry for the past few decades. It has brought about tremendous advances in various fields, including healthcare, finance, and transportation. However, as with any new technology, there are concerns about its future.
Some experts predict that AI will continue to thrive and revolutionize our world. They believe that as technology advances and more data is collected, AI will become even more powerful and sophisticated. AI could potentially solve some of the world's most pressing problems, such as climate change and disease prevention.
On the other hand, there are those who fear that AI may be too advanced for its own good. They worry that AI could become too intelligent and potentially turn against its creators. Furthermore, there are concerns about the impact of AI on employment. As AI becomes more prevalent, many jobs could become automated, leading to widespread unemployment and economic disruption.
Despite the concerns, it is unlikely that AI will go down in the future. It is a technology that has already proven its worth and has the potential to transform our world in many positive ways. The key will be to ensure that AI is developed and used responsibly, with a focus on the benefits it can bring to society as a whole.
In conclusion, AI is a powerful technology that is here to stay. While there are valid concerns about its future impact, we should focus on maximizing its potential for good. With responsible development and use, AI can be a tool to help us solve some of the biggest challenges facing humanity.
Conclusion
ChatGPT is a powerful AI chatbot system that has the potential to replace many current jobs. While this may seem concerning, it's important to remember that new jobs will be created as a result of automation. However, it's crucial that governments and businesses take action to ensure that workers are not left behind in the transition to an automated economy.
| 2023-03-18T00:00:00 |
https://vocal.media/geeks/open-ai-ceo-worried-that-chat-gpt-may-eliminate-lot-of-current-jobs
|
[
{
"date": "2023/03/18",
"position": 87,
"query": "AI job creation vs elimination"
}
] |
|
Italian General Confederation of Labour
|
Italian General Confederation of Labour
|
https://en.wikipedia.org
|
[] |
The Italian General Confederation of Labour is a national trade union centre in Italy. It was formed by an agreement between socialists, communists, ...
|
Italian trade union
The Italian General Confederation of Labour (Italian: Confederazione Generale Italiana del Lavoro, pronounced [koɱfederatˈtsjoːne dʒeneˈraːle itaˈljaːna del laˈvoːro], CGIL [tʃiddʒiˈɛlle, tʃidˌdʒi.iˈɛlle]) is a national trade union centre in Italy. It was formed by an agreement between socialists, communists, and Christian democrats in the "Pact of Rome" of June 1944.[1] In 1950, socialists and Christian democrats split forming UIL and CISL, and since then the CGIL has been influenced by the Italian Communist Party (PCI) and until recent years by its political heirs: the Democratic Party of the Left (PDS), the Democrats of the Left (DS) and currently the Democratic Party (PD).[2]
It has been the most important Italian trade union since its creation. It has a membership of over 5.5 million.[3] The CGIL is currently the second-largest trade union in Europe, after the German DGB, which has over 6 million members. The CGIL is affiliated with the International Trade Union Confederation and the European Trade Union Confederation, and is a member of the Trade Union Advisory Committee to the OECD.[4]
History [ edit ]
Beginnings and opposition to Fascism [ edit ]
The roots of CGIL date back to early 1900s with the foundation of the General Confederation of Labour, an Italian labour union founded in 1906, under the initiative of socialist members.[5] In 1926, during the fascist dictatorship of Benito Mussolini, CGdL's headquarter in Milan was attacked and completely destroyed by fascist blackshirts; after few months, the CGdL's central committee decided to dissolve the trade union and disbanded the entire organization. Their decision was opposed by communists and left socialists like Bruno Buozzi, who spent the next decades maintaining the old trade union clandestinely.[6] The underground CGdL faced a perilous course, not only because of the fascist repression, but because of the dramatic changes in direction of the Communist International (IC). In 1929 Italian communist militants were ordered to enter fascist trade unions, only to be told in 1935, when the IC adopted the Popular Front strategy, to reconcile with the socialists and other anti-fascists in trade unions and faced the fascist regime.
The three CGIL leaders, Lizzardi, Grandi and Di Vittorio, in 1945.
On 9 June 1944, the Pact of Rome was signed between representatives of the three main anti-fascist parties: the Christian Democracy (DC), the Italian Socialist Party (PSI) and the Italian Communist Party (PCI). However, a few days before, Bruno Buozzi, who had worked intensively on the Pact, was murdered by the Nazi troops.[7] The pact established the foundation of a new CGdL, named Italian General Confederation of Labour (CGIL). The Pact was signed by Giuseppe Di Vittorio for the PCI, by Achille Grandi for the DC and by Emilio Canevari for the PSI. The latter will be later replaced as responsible for the socialist component of CGIL by Oreste Lizzadri.[8]
Despite the unitary CGIL was strongly supported by communists and socialists, the Catholic Church did not oppose it. However, in 1945 it favoured the establishment of the Christian Associations of Italian Workers (ACLI). Until the end of the World War, the CGIL worked in the freed regions to spread the so-called "Labour Chambers" and stipulated wage agreements.[9] With the general insurrection proclaimed by the Italian Resistance on 25 April 1945 and the definitive defeat of the Nazi-Fascist regime, the CGIL extended its influence throughout the country. The trade union contributed to the victory of the Republic in the 1946 institutional referendum, that ended the Savoy's monarchy.[10][11]
First congress and split [ edit ]
On 1 February 1947, Salvatore Giuliano, a Sicilian bandit and separatist leader, killed 11 farmworkers and wounded other 27, during May Day celebrations in the municipality of Piana degli Albanesi.[12] His aim had been to punish local leftists for the recent election results. In an open letter, he took sole responsibility for the murders and claimed that he had only wanted his men to fire above the heads of the crowd; the deaths had been a mistake. The massacre created a national scandal and the CGIL called a general strike in protest against the massacre. According to newspaper reports hints at the possibility of civil war were heard as communist leaders harangued meetings of 6,000,000 workers who struck throughout Italy in protest against the massacre.[13]
After a few months, in the first national congress, which took place in Florence in June 1947, the CGIL registered 5,735,000 members and Giuseppe Di Vittorio, from the PCI, was elected General Secretary. Already during the congress, the signs of the divisions between the social-communist component and the Catholic were evident. Tensions increased with the beginning of the Cold War and the 1948 general election, which saw the DC facing the socialists and communists' Popular Democratic Front.[14][15] The pretext that the Christian democratic faction was trying to create to split from the CGIL was provided by the general strike that the Confederation proclaimed following the attack to the communist leader, Palmiro Togliatti, which took place outside the Italian Parliament on 14 July 1948.[16] The Catholic associations ACLI offered a structure on which built, after few days from the strike, the Christian democratic trade union, which was initially named "Free CGIL" and then, in 1950, Italian Confederation of Workers' Trade Unions (CISL).[17] In the same year, the secular and social-democratic faction split from the CGIL too, founding the Italian Federation of Labor, which was quickly transformed into the Italian Labour Union (UIL).[18] These are, even today, the three main Italian trade unions.[19]
Di Vittorio Era [ edit ]
Di Vittorio with Communist leader, Palmiro Togliatti in Modena, 1950.
In January and March 1953, Di Vittorio proclaimed general strikes against the so-called "Scam Law", an electoral law proposed by the Christian Democratic government of Alcide De Gasperi, which introduced a majority bonus of two-thirds of seats in the Chamber of Deputies for the coalition which would obtain at-large the absolute majority of votes.[20]
The anti-communist winds broke out in harsh repression against CGIL members in factories and in the countryside. Many activists were fired, while many others were forced into the so-called "confinement" departments, where communist members were humiliated. To increase the repression against communists, the American ambassadress in Italy, Clare Boothe Luce, declared that the companies where the CGIL trade unionists obtained more than 50% of the votes in the internal commission's election, could not have access to deals with the United States of America.[21] Moreover, Pope Pius XII launched the excommunication to the communists and favoured the alliance between DC and neo-fascist Italian Social Movement (MSI), for the Municipality of Rome. Police repression was also very hard, due to the Christian democratic Minister of the Interior, Mario Scelba, who ordered the police to shoot the communist demonstrators, in order to prevent further strikes. On 9 January 1950, six workers were killed by police in Modena, while more than 200 had been wounded.[22][23]
Giuseppe Di Vittorio, along with the socialist Fernando Santi, reacted to the Government and Confindustria, launching the "Work Plan", a major political initiative with an alternative idea of economic and social development. The Work Plan supported the nationalization of electric companies, the creation of a vast program of public works and public housing and the establishment of a national body for land reclamation. The Plan of Work was not implemented by the Government but with it the CGIL managed to break the isolation, speaking to the whole country and keeping workers and unemployed workers together, from the industrialized North to the rural South. In the early 1950s, the contrast between CISL and UIL was at its peak; while the CGIL was fighting for great national issues, CISL, backed by the government, pursued its rooting in factories, signing numerous separate agreements.[24]
The tragic facts of the 1956 Hungarian revolution, brutally repressed by the Soviet Union, reinforced the conflict between the three trade unions. For CGIL it was a very difficult moment: Di Vittorio, who, unlike the Communist Party, had immediately condemned the Soviet invasion, was forced by Togliatti to a humiliating retraction.[25][26] Many officials resigned and the number of members dropped by 1 million from 1955 to 1958. Di Vittorio died on 3 November 1957 in Lecco, after a trade union assembly.[27] He directed the CGIL during the post-war period, preserving its internal unity and creating the premises for the resumption of the unitary dialogue with CISL and UIL. On 3 December, Agostino Novella was elected new General Secretary.[28]
Protests of 1968 and Hot Autumn [ edit ]
Metalworkers' protests in Turin during the Hot Autumn of 1969.
After years of approach between the three trade unions, in 1966, the Catholic association ACLI broke with the Christian Democracy, asking for a new season of cooperation with communists.[29] 1968 opens with a historical success for the workers' movement: the pension reform, obtained after a strong protest in workplaces. The general strike proclaimed by the CGIL on 7 March was characterized by massive and unitary participation in all the country. The student revolt, launched by the Californian University of Berkeley against the Vietnam War, extended to France, Germany and Italy. In Italy, the student struggles were intertwined with the workers' struggles that, in hundreds of factories, invested work organization, contracts, timetables, and wage inequalities. On 1 May 1968, for the first time after the 1948 break, CGIL, CISL and UIL celebrated Labor Day together.[30] Moreover, the 7th National Congress of CGIL in Livorno was attended for the first time by members from CISL and UIL.[31]
On 21 August 1968, Novella's CGIL not only expressed its clear condemnation against the Soviet invasion of Czechoslovakia, but broke with the World Federation of Trade Unions, the international organization of Marxist-inspired unions.[32] Meanwhile, in Italy, the struggles in the South exploded and the Government did not hesitate in repressing them with extreme hardness. On 2 December 1968, in Avola, near Siracusa, the police shot the workers who were demonstrating after the end of the negotiations for the renewal of employment contracts, killing two demonstrators. On 9 April 1969, near Battipaglia, Campania, the police shot workers who were demonstrating against the probable closure of the local tobacco factory, killing a 19-year-old worker and a young teacher.[33] Workers' protest continued in the so-called Hot Autumn (Autunno Caldo), a term used for a series of large strikes in factories and industrial centres of Northern Italy, in which workers demanded better pay and better conditions.[34] In 1969 and 1970 there were over 440 hours of strikes in the region. The decrease in the flow of labour migration from Southern Italy had resulted in nearly full employment levels in the northern part of the country.[35]
Lama Era [ edit ]
On 24 March 1970, Luciano Lama succeeded Novella, becoming the third General Secretary of CGIL. Through all his secretariat, Lama pursued a unified policy between the three unions.[36] In May 1970, on the wave of the great mass struggles and thanks to the socialist Minister of Labour, Giacomo Brodolini, the "Statute of Workers" was approved by the Parliament.[37]
Secretary Luciano Lama addressing the crowd during a rally in the 1970s.
In October 1970 the general councils of the three confederations met together in Florence to examine the possibility of starting a unification process. In particular, the metalworkers' factions, FIOM, FIM and UILM, strongly supported the union, but the proposal faced strong opposition from UIL and large sectors of CISL.[38] In July 1972, the three general councils, in a unified session, signed the "Federative Pact" in Rome, electing a joint committee of 90 members and a secretariat of 15 members.[39] The CGIL–CISL–UIL Federation will guarantee the unitary management of the main trade union events for all the 1970s and will be dissolved only after the so-called "Valentine's Day decree" of Bettino Craxi's government in 1984.[40]
The 1970s were also marked by great civil rights achievements.[41] In 1970 the Law n. 898 on divorce was approved, while in 1971 the Parliament approved the Law n. 1204 for the protection of working mothers and the one on nursery schools. In 1975, Law n. 151 introduced equality between men and women inside families. Finally, in 1978 the Law n. 194 "Rules for the social protection of motherhood and voluntary interruption of pregnancy" was approved.[42] However, in the second half of the decade, the unions' action began to weaken. Entrepreneurs used the economic crisis to overturn in their favour the balance of power, resulting from Hot Autumn. Intense restructuring processes were implemented almost everywhere, favoured by the introduction of new automation technologies. While, investments in new plants, based on robotics and information technology created unemployment.[43]
During the decade, with the beginning of the so-called, "strategy of tension", the CGIL was the target of terrorist attack, perpetrated by neo-fascist groups. On 28 May 1974, a bomb exploded during a trade union rally in Piazza della Loggia, Brescia, killing eight people and wounding more than one hundred.[44] The bomb was placed inside a rubbish bin at the east end of the square. It was the beginning of the Years of Lead, a period of social and political turmoil that lasted from the late 1960s until the early 1980s, marked by a wave of both left-wing and right-wing political terrorism, which culminated with the kidnapping and murder of Christian democratic leader, Aldo Moro, in 1978 and Bologna railway station massacre in 1980.[45]
Lama during the 1980s.
In February 1978 three trade unions, on the initiative of Luciano Lama, ratified at the Palazzo dei Congressi in Rome, a document, known as the "EUR Turn", proposing a wage restraint in exchange for an economic policy that would support the development and defend employment.[46] But in those years, CGIL and the unitary union were mostly committed to fighting the strategy of tension, defending democracy and democratic institutions from terrorist attacks. The total isolation of subversive groups from the working world would be the main basis of their defeat. In September 1980, Fiat declared that it would proceed with the dismissal of 14,000 workers and unilaterally put 23,000 workers into redundancy. Metalworkers blocked the Fiat factories for 35 days. Luciano Lama and Enrico Berlinguer, General Secretary of the Communist Party, strongly supported the workers' strikes. In October 1980, Fiat's employees and managers protested through the streets of Turin in an event remembered as the "March of Forty Thousand", to protest against the strikes and against the trade unions which organized it.[47] It was a sound defeat for all Italian trade unions, but especially for the CGIL. The "Fiat case" marked forever trade unions' history in Italy, accelerating towards the dissolution of the unitary federation.[48]
In June 1982, the three unions rejected, with a major demonstration in Rome in June 1982, the end of an agreement on the sliding wage scale, better known in Italy as "escalator" (scala mobile in Italian), a method which consisted in increasing the wages as the prices rise in order to maintain the purchasing power of the workers even if there is inflation.[49] However, after few days CISL and UIL opened to the revision of the "escalator", while CGIL was strongly against it. On 14 February 1984, the government led by socialist Prime Minister, Bettino Craxi, unilaterally reduced the "escalator" with the famous "Valentine's Day decree".[50] CISL and UIL expressed their positive view on the decree, while the CGIL announced strikes. The divisions between the trade unions caused the definitive breaking of the unitary federation. The CGIL launched a referendum on the "escalator", which was rejected by voters, marking a strong defeat for the trade union.[51][52]
From the late 1980s to Tangentopoli [ edit ]
The defeat in the referendum on the escalator opened a difficult period for the CGIL, in a context marked by a drastic loss of representativeness of the three confederations and the birth of small autonomous trade unions. In 1986 Antonio Pizzinato, succeeded Lama, becoming the new General Secretary.[53] However, difficulties within the CGIL were suddenly reflected in the following National Congress. Pizzinato, after only two years of the secretariat, resigned from his post in favour of Bruno Trentin, former General Secretary of FIOM during the Hot Autumn.[54]
Meanwhile, in the Soviet Union, the new head of the Soviet Communist Party, Mikhail Gorbachev, started the Perestroika reform movement. Meanwhile, other socialist countries were also invested in renewal processes, inspired by Solidarność trade union movement in Poland. In 1989 the collapse of the Berlin Wall assumed the symbolic value of the defeat of socialism in the countries of the Soviet bloc.[55]
In 1992 the Tangentopoli scandal broke out. It was a nationwide judicial investigation into political corruption in Italy, which led to the demise of the so-called "First Republic", resulting in the disappearance of many political parties.[56] Christian Democracy, which dominated the entire political system for almost fifty years, was disbanded in January 1994, while the Socialist Party disappeared in November. The Communist Party had previously transformed into a democratic socialist party, the Democratic Party of the Left (PDS), led by Achille Occhetto, but it suffered a split from the left with the foundation of the Communist Refoundation Party (PRC), by Armando Cossutta. In this period, new populist movements, such as the Northern League (LN), grew up.
Bruno Trentin with Agelo Airoldi during a CGIL National Congress in the early 1990s.
In July 1992, the government of Giuliano Amato proposed the definitive overcome of the "escalator" and its replacement with a negotiated recovery. Bruno Trentin, to prevent a new dramatic rupture between the unions, signed the agreement and then resigned, being that signature contrary to the negotiating mandate of the governing bodies of CGIL. The next CGIL directorate rejected his resignation and decided to negotiate a new system of relations based on the income policy. A few days later, the financial crisis seems to plunge into bankruptcy. Amato's government decided a drastic devaluation of Italian lira, the consequent exit from the European Monetary System and an extraordinary financial bill of one hundred thousand billion. The measures implemented, such as the increase in the retirement age and the seniority of contributions, the blocking of retirement, the "minimum tax" on autonomous income, the balance sheet on companies, the withdrawal on bank accounts, the introduction of health tickets, the institution of house tax (ICI), caused a widespread of social protest which turned against the three trade unions too. However, in July 1993 CGIL, CISL and UIL signed a new wage agreement with the new Prime Minister, Carlo Azeglio Ciampi, and Confindustria.[57]
On 29 June 1994, Sergio Cofferati became the new General Secretary of CGIL and quickly start facing the new Prime Minister, Silvio Berlusconi, a media magnate who founded the new conservative party, Forza Italia (FI), collecting the electoral heritage of the Christian Democrats in the 1994 election, in alliance with the Northern League and the heirs of the neo-fascist MSI, National Alliance (AN).[58][59] The first act of the Berlusconi's government concerned the attempt to radically reduce the Italian social security system, breaking the "pact between generations" that supports it. The confederations react unanimously with extreme determination and on 12 November a demonstration took place in Rome with a million workers and pensioners. The great popular participation in protest put the centre-right coalition in crisis and, with the withdrawal of the League from the cabinet, the Berlusconi's government fell. The reform, launched in 1995 after an agreement with the social partners and the positive outcome from workers, innovated the social security system with a gradual transition to the contributory system and the beginning of supplementary pensions.[60]
With the victory of Romano Prodi's centre-left coalition in 1996 general election, the dialogue with the trade union movement was strengthened and, as already mentioned, allowed Italy to reach Euro convergence criteria and enter into the single currency.[61] CGIL, CISL and UIL were also protagonists of a battle against the secessionism of the League, which put at risk the political unity of Italy, with major demonstrations in Milan and Venice.[citation needed]
From 2000 to the Great Recession and the COVID-19 pandemic [ edit ]
Berlusconi returned to power after the 2001 general election. His government tried to abolish Article 18 of the Workers' Statute, which protected workers from unjustified dismissal. On 23 March 2002, the CGIL led by Sergio Cofferati announced a great demonstration against the reform. It was the largest mass demonstration in Italian history, with more than three million people gathered at the Circus Maximus in Rome to protest against the abolition of Article 18.[62] The CGIL continued the struggle, proclaiming a general strike for 18 April of the same year, which was later joined by CISL and UIL. After a few weeks the government announced the withdrawal of the reform.[63]
In September 2002, CGIL elected Guglielmo Epifani as the new General Secretary. Epifani continued the fight against Berlusconi's government, started by his predecessor.[64] In particular he launched general strikes against budget laws of 2003 and 2004. After the 2006 general election, Prodi's centre-left, strongly supported by Epifani's CGIL, returned to power.[65] However, he lost the majority after less than two years and Berlusconi became Prime Minister once again after the 2008 general election.[66]
CGIL protest in Bologna against Matteo Renzi's labour reform in 2014.
On 3 November 2010, Susanna Camusso was elected General Secretary; she was the first woman to hold the office.[67] Camusso's secretariat was characterized by the Great Recession and the European sovereign debt crisis, which harshly affected Italy in the early 2010s, leading Berlusconi to resign in November 2011.[68] On 4 December 2011, the technocratic government of Mario Monti introduced emergency austerity measures intended to stem the worsening economic conditions in Italy and restore market confidence, especially after rising Italian government bond yields began to threaten Italy's financial stability.[69] The austerity package called for increased taxes, pension reform and measures to fight tax evasion. Monti also announced that he would be giving up his own salary as part of the reforms.[70] On 20 January 2012, Monti's government formally adopted a package of reforms targeting Italy's labour market. The reforms were intended to open certain professions to more competition by reforming their licensing systems and abolishing minimum tariffs for their services.[71][72] Article 18 of Italy's Workers' Statute, which requires companies that employ 15 or more workers to re-hire any employee found to have been fired without just cause,[73][74] would also be reformed. The proposals faced a strong opposition from Camusso's CGIL and other trade unions, which was followed by public protests, which forced the government to withdraw the amendment on Article 18.[75]
In 2014, Article 18 was finally abolished by the centre-left cabinet of Matteo Renzi as part of a huge labour market reform called the Jobs Act.[76] The proposal was heavily criticised by Camusso, who announced a public protest.[77] On 25 October, almost one million people took part in a mass protest in Rome, organised by the CGIL in opposition to the labour reforms of the government. Some high-profile members of the left-wing faction of the Democratic Party also participated in the protest.[78] On 8 November, more than 100,000 public employees protested in Rome in a demonstration organised by the three trade unions.[79] Despite the mass protests, the Parliament approved the Jobs Act in December 2014.[80][81] After years of fights to protect Article 18 from the reforms promoted by the centre-right, it was finally abolished by the centre-left, causing a serious break between CGIL and its political counterpart, the Democratic Party.[82]
On 24 January 2019, during the 18th National Congress in Bari, Maurizio Landini, a left-wing populist and former Secretary of FIOM, was elected General Secretary.[83][84] Landini's main opponent in the Congress, Vincenzo Colla, a reformist and former Regional Secretary of CGIL for Emilia-Romagna, was appointed Vice Secretary.[85][86] During his inaugural speech, Landini strongly attacked the M5S–League government and especially its Interior Minister, Matteo Salvini, denouncing a serious risk of a return of fascism in the country.[87] On 9 February, CGIL, CISL and UIL protested together in Rome, against the economic measures promoted by Conte's government; more than 200,000s people participated in the march.[88] It was the first time since 2013 that the three trade unions organized a unified rally.[89]
On 9 October 2021, the CGIL's national headquarters in Rome was attacked by a mob of members of the neo-fascist party New Force, who were protesting against the introduction of a COVID-19 vaccination certificate in Italy.[90] Secretary Landini described the attack as an "act of fascist squadrismo".[91]
On 18 March 2023, at the 19th Congress of the CGIL, Landini was re-elected general secretary with 94.2% of votes in favour.
In 2025, CGIL promoted five referendums, concerning changes to the law on the acquisition of Italian citizenship for foreign residents and the repeal of some provisions on employment, three of which were originally introduced by the Jobs Act in 2016.[92][93]
General Secretaries [ edit ]
Timeline [ edit ]
National Congresses [ edit ]
1st National Congress – Florence, Tuscany, 1–7 June 1947
2nd National Congress – Genoa, Liguria, 4–9 October 1949
3rd National Congress – Naples, Campania, 26 November–3 December 1952
4th National Congress – Rome, Lazio, 27 February–4 March 1956
5th National Congress – Milan, Lombardy, 2–7 April 1960
6th National Congress – Bologna, Emilia-Romagna, 31 March–5 April 1965
7th National Congress – Livorno, Tuscany, 16–21 June 1969
8th National Congress – Bari, Apulia, 2–7 July 1973
9th National Congress – Rimini, Emilia-Romagna, 6–11 June 1977
10th National Congress – Rome, Lazio, 16–21 November 1981
11th National Congress – Rome, Lazio, 28 February–4 March 1986
12th National Congress – Rimini, Emilia-Romagna, 23–27 October 1991
13th National Congress – Roma, Lazio, 2–5 July 1996
14th National Congress – Rimini, Emilia-Romagna, 6–9 February 2002
15th National Congress – Rimini, Emilia-Romagna, 1–4 March 2006
16th National Congress – Rimini, Emilia-Romagna, 5–8 May 2010
17th National Congress – Rimini, Emilia-Romagna, 6–8 May 2014
18th National Congress – Bari, Apulia, 22–24 January 2019
19th National Congress – Rimini, Emilia-Romagna, 15–18 March 2023
Affiliated union federations [ edit ]
Current affiliates [ edit ]
Former affiliates [ edit ]
Formally associated bodies [ edit ]
Acronym Logo Name Chair AGENQUADRI General Association of Managers Paolo Terranova Auser Self-management Services Enzo Costa Federconsumatori Consumer Federation Emilio Viafora RSM Secondary School Students' Network Giammarco Manfreda UdU University Students' Union Enrico Gulluni
Symbols [ edit ]
CGIL logo
CGIL flag
See also [ edit ]
| 2023-03-18T00:00:00 |
https://en.wikipedia.org/wiki/Italian_General_Confederation_of_Labour
|
[
{
"date": "2023/03/18",
"position": 46,
"query": "AI labor union"
}
] |
|
The new ai powered microsoft designer is a canva killer
|
The new ai powered microsoft designer is a canva killer
|
https://community.openai.com
|
[] |
Microsoft Designer is an online design tool that leverages the power of artificial intelligence to assist users in creating visually appealing designs.
|
In recent years, Canva has emerged as a popular graphic design tool that allows users to create stunning designs with ease. However, Microsoft has recently launched a new AI-powered tool called “Microsoft Designer” that has the potential to become a Canva killer.
Microsoft Designer is an online design tool that leverages the power of artificial intelligence to assist users in creating visually appealing designs. With a simple and intuitive user interface, Microsoft Designer makes it easy for anyone to create professional-looking designs in a matter of minutes.
One of the key features of Microsoft Designer is its use of AI algorithms to generate design recommendations. As users add elements to their designs, the AI algorithms analyze the composition and offer suggestions for how to improve the overall aesthetic. This is a game-changer, as it removes the need for users to have an in-depth understanding of design principles and allows anyone to create polished designs without much effort.
Another advantage of Microsoft Designer over Canva is its integration with other Microsoft products. Users can import data and charts from Excel, add 3D models from PowerPoint, and use images from OneDrive, making it an all-in-one solution for design needs. This integration also means that users can easily collaborate with colleagues and share designs across various Microsoft platforms.
Microsoft Designer also offers a variety of templates that are optimized for various purposes, including social media posts, business cards, presentations, and more. These templates offer a starting point for users who may not know where to begin with their designs, allowing them to create professional-looking designs quickly and easily.
While Canva has been around for a while and has a significant user base, Microsoft Designer has the potential to become a serious contender in the graphic design space. With its AI-powered recommendations, integration with other Microsoft products, and optimized templates, it offers a unique and innovative solution for design needs.
Overall, Microsoft Designer is a powerful tool that has the potential to revolutionize the way people approach graphic design. While it remains to be seen whether it will truly become a Canva killer, there’s no denying that it’s a tool worth trying out for anyone looking to create professional-looking designs quickly and easily.
| 2023-03-18T00:00:00 |
2023/03/18
|
https://community.openai.com/t/the-new-ai-powered-microsoft-designer-is-a-canva-killer/105006
|
[
{
"date": "2023/03/18",
"position": 10,
"query": "AI graphic design"
}
] |
Lindy — Meet Your AI Assistant
|
Lindy — Meet Your AI Assistant
|
https://www.lindy.ai
|
[] |
Let AI do the work ... Give custom instructions to your agent, all in natural language. Use it ...
|
Lindy is the simplest way to create AI agents — smart automations that integrate with all your apps, from Gmail to HubSpot, to save you hours a week and help you grow your business.
| 2023-03-18T00:00:00 |
https://www.lindy.ai/
|
[
{
"date": "2023/03/18",
"position": 92,
"query": "artificial intelligence workers"
}
] |
|
AI and the future of work: Everything is about to change
|
AI and the future of work: Everything is about to change
|
https://edition.cnn.com
|
[
"Samantha Murphy Kelly"
] |
AI tools, which have long operated in the background of many services, are now more powerful and more visible across a wide and growing range of workplace ...
|
New York CNN —
In just a few months, you’ll be able to ask a virtual assistant to transcribe meeting notes during a work call, summarize long email threads to quickly draft suggested replies, quickly create a specific chart in Excel, and turn a Word document into a PowerPoint presentation in seconds.
And that’s just on Microsoft’s 365 platforms.
Over the past week, a rapidly evolving artificial intelligence landscape seemed to leap ahead again. Microsoft and Google each unveiled new AI-powered features for their signature productivity tools and OpenAI introduced its next-generation version of the technology that underpins its viral chatbot tool, ChatGPT.
Suddenly, AI tools, which have long operated in the background of many services, are now more powerful and more visible across a wide and growing range of workplace tools.
Google’s new features, for example, promise to help “brainstorm” and “proofread” written work in Docs. Meanwhile, if your workplace uses popular chat platform Slack, you’ll be able to have its ChatGPT tool talk to colleagues for you, potentially asking it to write and respond to new messages and summarize conversations in channels.
OpenAI, Microsoft and Google are at the forefront of this trend, but they’re not alone. IBM, Amazon, Baidu and Tencent are working on similar technologies. A long list of startups are also developing AI writing assistants and image generators.
The pitch from tech companies is clear: AI can make you more productive and eliminate the grunt work. As Microsoft CEO Satya Nadella put it during a presentation on Thursday, “We believe this next generation of AI will unlock a new wave of productivity growth: powerful copilots designed to remove the drudgery from our daily tasks and jobs, freeing us to rediscover the joy of creation.”
But the sheer number of new options hitting the market is both dizzying and, as with so much else in the tech industry over the past decade, raises questions of whether they will live up to the hype or cause unintended consequences, including enabling cheating and eliminating the need for certain roles (though that may be the intent of some adopters).
Even the promise of greater productivity is unclear. The rise of AI-generated emails, for example, might boost productivity for the sender but decrease it for recipients flooded with longer-than-necessary computer-generated messages. And of course just because everyone has the option to use a chatbot to communicate with colleagues doesn’t mean all will chose to do so.
Integrating this technology “into the foundational pieces of productivity software that most of us use everyday will have a significant impact on the way we work,” said Rowan Curran, an analyst at Forrester. “But that change will not wash over everything and everyone tomorrow — learning how to best make use of these capabilities to enhance and adjust our existing workflows will take time.”
Anyone who has ever used an autocomplete option when typing an email or sending a message has already experienced how AI can speed up tasks. But the new tools promise to go far beyond that.
The renewed wave of AI product launches kicked off nearly four months ago when OpenAI released a version of ChatGPT on a limited basis, stunning users with generating human-sounding responses to user prompts, passing exams at prestigious universities and writing compelling essays on a range of topics.
Since then, the technology — which Microsoft made a “multibillion dollar” investment in earlier this year — has only improved. Earlier this week, OpenAI unveiled GPT-4, a more powerful version of the technology that underpins ChatGPT, and which promises to blow previous iterations out of the water.
In early tests and a company demo, GPT-4 was used to draft lawsuits, build a working website from a hand-drawn sketch and recreate iconic games such as Pong, Tetris or Snake with very little to no prior coding experience.
GPT-4 is a large language model that has been trained on vast troves of online data to generate responses to user prompts.
It’s the same technology that underpins two new Microsoft features:”Co-pilot,” which will help edit, summarize, create and compare documents across its platforms, and Business Chat, an agent that essentially rides along with the user as they work and tries to understand and make sense of their Microsoft 365 data.
The agent will know, for example, what’s in a user’s email and on their calendar for the day, as well as the documents they’ve been working on, the presentations they’ve been making, the people they’re meeting with, and the chats happening on their Teams platform, according to the company. Users can then ask Business Chat to do tasks such as write a status report by summarizing all of the documents across platforms on a certain project, and then draft an email that could be sent to their team with an update.
Curran said just how much these AI-powered tools will change work depends on the application. For example, a word processing application could help generate outlines and drafts, a slideshow program may help speed along the design and content creation process, and a spreadsheet app should help more users interact with and make data-driven decisions. The latter he believes will make the most significant impact to the workplace in both the short and long-term.
The discussion of how these technologies will impact jobs “should focus on job tasks rather than jobs as a whole,” he said.
Challenges ahead
Although OpenAI’s GPT-4 update promises fixes to some of its biggest challenges — from its potential to perpetuate biases, sometimes being factually incorrect and responding in an aggressive manner — there’s still the possibility for some of these issues to find their way into the workplace, especially when it comes to interacting with others.
Arijit Sengupta, CEO and founder of AI solutions company Aible, said a problem with any large language model is that it tries to please the user and typically accepts the premise of the user’s statements.
“If people start gossiping about something, it will accept it as the norm and then start generating content [related to that],” said Sengupta, adding that it could escalate interpersonal issues and turn into bullying at the office.
In a tweet earlier this week, OpenAI CEO Sam Altman wrote the technology behind these systems is “still flawed, still limited, and it still seems more impressive on first use than it does after you spend more time with it.” The company reiterated in a blog post that “great care should be taken when using language model outputs, particularly in high-stakes contexts.”
Arun Chandrasekaran, an analyst at Gartner Research, said organizations will need to educate their users on what these solutions are good at and what their limitations are.
“Blind trust in these solutions is as dangerous as complete lack of faith in the effectiveness of it,” Chandrasekaran said. “Generative AI solutions can also make up facts or present inaccurate information from time to time – and organizations need to be prepared to mitigate this negative impact.”
At the same time, many of these applications are not up to date (GPT-4’s data that it’s trained on cuts off around September 2021). The onus will have to be on the users to do everything from double check the accuracy to change the language to reflect the tone they want. It will also be important to get buy-in and support across workplaces for the tools to take off.
“Training, education and organizational change management is very important to ensure that employees are supportive of the efforts and the tools are used in the way they were intended to,” Chandrasekaran said.
| 2023-03-19T00:00:00 |
2023/03/19
|
https://edition.cnn.com/2023/03/19/tech/ai-change-how-we-work
|
[
{
"date": "2023/03/19",
"position": 9,
"query": "future of work AI"
}
] |
AI Guidelines: Can AI Companies Make Responsible AI?
|
Can Big AI Make Responsible AI?
|
https://spectrum.ieee.org
|
[
"Ned Potter"
] |
But the job keeps getting harder as more offenders and malefactors use ... artificial intelligencegpt-4openaiai regulationdeepfakesai ethics · Ned Potter.
|
Perhaps it was inevitable, as the AI world absorbed the news of GPT-4, that some people would think of Frankenstein’s monster. Or HAL 9000. Or the Terminator—any of science fiction’s great stories of technologies that wrought havoc before human beings had thought through their implications.
Even as the latest large language model has taken the tech world by surprise, the industry is scrambling—to burnish its ethical AI credentials, and to keep its standards for AI ethics ahead of the rapid advances in the field. A prime case: Microsoft, which has had a Responsible AI initiative since 2017, has just added new open-source applications to what it calls its Responsible AI Toolbox, coding that developers can use “to make it easier and faster for developers to incorporate responsible AI principles into their solutions.” (Not unrelatedly, in a recent round of layoffs, Microsoft closed down an Ethics and Society team that it said had guided early AI efforts. A spokesperson, contacted by Spectrum, says there has been no letup in “the interdisciplinary way in which we work across research, policy, and engineering.”)
“[T]here’s clearly a need to develop guidelines and move more swiftly than regulation.”
—Claire Leibowicz, Partnership on AI
“AI may well represent the most consequential technology advance of our lifetime,” wrote Brad Smith, Microsoft’s vice chair and president, in a blog post in February. His words were tempered: “Will all the changes be good? While I wish the answer were yes, of course that’s not the case.”
Separately, the Partnership on AI (PAI), a nonprofit that seeks to promote discussions of AI issues, has just published “Responsible Practices for Synthetic Media,” a set of guidelines for how to create and share multimedia content generated by AI. Members of the partnership include such companies as OpenAI, Adobe, Tiktok and the BBC’s R&D arm, as well as several AI startups.
But how effectively can major tech companies be in policing AI’s development, especially given how widely the use of AI tools is spreading beyond the tech giants? If you’re concerned about deepfakes, watch the spread of “cheap fakes,” images or videos fabricated with AI’s help that may often be crude, but that can be made, for free, by anyone who finds an AI app online. The largest social media companies, including Meta, Twitter and Google (which owns YouTube), have committed to removing misinformation or offensive posts. But the job keeps getting harder as more offenders and malefactors use more and increasingly sophisticated AI technologies.
Last month, for instance, a video turned up on Twitter of President Biden announcing he was going to start drafting American troops to protect Ukraine. It was, of course, fake—the conservative influencer who posted it came on camera after Biden to say so. He claimed it was an AI-powered warning of what the White House might do. As of this week it was still online, viewed more than 4 million times. It didn’t violate Twitter’s rules because it didn’t claim to be real. But a lot of people who reacted on Twitter apparently didn’t watch long enough to see the disclaimer.
How to decide, in such cases, what to do? Can the big tech companies—can anyone—set rules in advance that will work for everything that might be done with AI in the future?
“Everyone, I think, is operating in this Wild West and is eager to have some set of guidelines,” says Claire Leibowicz of the PAI. “I think, for good reasons, people are understandably skeptical of voluntary standards. At the same time, based on the swell of interest, and guidance from people from many different sectors, there’s clearly a need to develop guidelines and move more swiftly than regulation.”
Government, particularly in the United States, has moved slowly to make AI rules. That’s fine with many developers who would argue that regulators will be heavy-handed and behind the curve. For now, that leaves companies in charge. They so far have tended to set fairly general standards. The PAI’s framework, for example, recommends that content creators be transparent when they’ve altered or faked something, perhaps using labels or digital watermarks so that users can easily tell. The PAI agrees, at least in public, that it cannot go it alone.
“Microsoft believes that some regulation of AI, particularly for high-risk uses of the technology, is necessary,” says Besmira Nushi, a principal researcher at Microsoft Research, in an email. “As governments worldwide debate approaches to regulating certain uses of AI, Microsoft is committed to doing our part to develop and deploy AI responsibly.”
Leibowicz, at the PAI, says that if companies agree on a list of harmful and responsible uses of AI, it needs to be a living document, adaptable in a fast-changing field. “And it’s our hope that that will catalyze or galvanize the field of people who have a major role to play in this effort. And, to that end, it will be a complement to regulation that’s absolutely necessary.
“But,” she adds, “I think there’s also a degree of maintaining some humility at being unable to predict the future.”
| 2023-03-19T00:00:00 |
2023/03/19
|
https://spectrum.ieee.org/ai-ethics-industry-guidelines
|
[
{
"date": "2023/03/19",
"position": 59,
"query": "AI regulation employment"
}
] |
Who We Are - Aurora Workforce
|
Who We Are
|
http://auroraworkforce.com
|
[] |
... learning management systems, web management, cloud services, artificial intelligence (AI), machine learning (MLaas), software engineering, data analytics ...
|
For more than 20 years, we have designed, implemented, and maintained labor solutions for the Federal government and commercial businesses. These solutions include, but are not limited to, staff management, program management, content management, learning management systems, web management, cloud services, artificial intelligence (AI), machine learning (MLaas), software engineering, data analytics, designing and managing critical systems.
Aurora Workforce’s managers, scientists, engineers, and developers use science, engineering, and data management techniques to transform vast amounts of environmental and scientific data into powerful tools for decision making. Keeping up with the fast pace of technology, we ensure that our staff is equipped with the knowledge and tools to make sure we can expand our expertise into popular solutions.
By harnessing the power of the latest technology in science, information technology, and engineering, Aurora Workforce is helping create a better future for our clients.
| 2023-03-19T00:00:00 |
http://auroraworkforce.com/whoweare/
|
[
{
"date": "2023/03/19",
"position": 54,
"query": "machine learning workforce"
}
] |
|
20 Pros and Cons of Unions (2025)
|
20 Pros and Cons of Unions
|
https://helpfulprofessor.com
|
[
"Chris Drew",
"Phd",
"Learn About Our",
"Root",
"--M-A-Box-Bp",
"--M-A-Box-Bp-L",
".M-A-Box",
"Width",
"Margin-Top",
"Important Margin-Bottom"
] |
Unions can benefit workers by creating the opportunity for collective bargaining, which gives them better outcomes in negotiations. This often benefits.
|
This Article was Last Expert Reviewed on September 13, 2023 by Chris Drew, PhD
We cite peer reviewed academic articles wherever possible and reference our sources at the end of our articles. All articles are edited by a PhD level academic. Learn more about our academic and editorial standards.
| 2023-03-19T00:00:00 |
2023/03/19
|
https://helpfulprofessor.com/pros-and-cons-of-unions/
|
[
{
"date": "2023/03/19",
"position": 54,
"query": "AI labor union"
}
] |
Artificial Intelligence (AI) in Education & Assessment
|
Artificial Intelligence (AI) in Education & Assessment: Opportunities and Best Practices
|
https://assess.com
|
[
"Laila Issayeva M.Sc."
] |
This article will look at some of the latest AI developments used in education, their potential impact, and drawbacks they possess.
|
Artificial intelligence (AI) is poised to address some challenges that education deals with today, through innovation of teaching and learning processes. By applying AI in education technologies, educators can determine student needs more precisely, keep students more engaged, improve learning, and adapt teaching accordingly to boost learning outcomes. A process of utilizing AI in education started off from looking for a substitute for one-on-one tutoring in the 1970s and has been witnessing multiple improvements since then. This article will look at some of the latest AI developments used in education, their potential impact, and drawbacks they possess.
Application of AI
Recently, a helping hand of AI technologies has permeated into all aspects of educational process. The research that has been going since 2009 shows that AI has been extensively employed in managing, instructing, and learning sectors. In management, AI tools are used to review and grade student assignments, sometimes they operate even more accurately than educators do. There are some AI-based interactive tools that teachers apply to build and share student knowledge. Learning can be enhanced through customization and personalization of content enabled by new technological systems that leverage machine learning (ML) and adaptability.
Below you may find a list of major educational areas where AI technologies are actively involved and that are worthy of being further developed.
Personalized learning This educational approach tailors learning trajectory to individual student needs and interests. AI algorithms analyze student information (e.g. learning style and performance) to create customized learning paths. Based on student weaknesses and strengths, AI recommends exercises and learning materials. AI technologies are increasingly pivotal in online learning apps, personalizing education and making it more accessible to a diverse learner base. Adaptive learning This approach does the same as personalized learning but in real-time stimulating learners to be engaged and motivated. ALEKS is a good example of an adaptive learning program. Learning courses These are AI-powered online platforms that are designed for eLearning and course management, and enable learners to browse for specific courses and study with their own speed. These platforms offer learning activities in an increasing order of their difficulty aiming at ultimate educational goals. For instance, advanced Learning Management Systems (LMS) and Massive Open Online Courses (MOOCs). Learning assistants/Teaching robots AI-based assistants can supply support and resources to learners upon request. They can respond to questions, provide personalized feedback, and guide students through learning content. Such virtual assistants might be especially helpful for learners who cannot access offline support. Adaptive testing This mode of delivering tests means that each examinee will get to respond to specific questions that correspond to their level of expertise based on their previous responses. It is possible due to AI algorithms enabled by ML and psychometric methods, i.e. item response theory (IRT). You can get more information about adaptive testing from Nathan Thompson’s blog post. Remote proctoring It is a type of software that allows examiners to coordinate an assessment process remotely whilst keeping confidentiality and preventing examinees from cheating. In addition, there can be a virtual proctor who can assist examinees in resolving any issues arisen during the process. The functionality of proctoring software can differ substantially depending on the stakes of exams and preferences of stakeholders. You can read more on this topic from the ASC’s blog here. Test assembly Automated test assembly (ATA) is a widely used valid and efficient method of test construction based on either classical test theory (CTT) or item response theory (IRT). ATA lets you assemble test forms that are equivalent in terms of content distribution and psychometric statistics in seconds. ASC has designed TestAssembler to minimize a laborious and time-consuming process of form building. Automated grading Grading student assignments is one of the biggest challenges that educators face. AI-powered grading systems automate this routine work reducing bias and inconsistencies in assessment results and increasing validity. ASC has developed an AI essay scoring system—SmartMarq. If you are interested in automated essay scoring, you should definitely read this post. Item generation There are often cases when teachers are asked to write a bunch of items for assessment purposes, as if they are not busy with lesson planning and other drudgery. Automated item generation is very helpful in terms of time saving and producing quality items. Search engine The time of libraries has sunk into oblivion, so now we mostly deal with huge search engines that have been constructed to carry out web searches. AI-powered search engines help us find an abundance of information; search results heavily depend on how we formulate our queries, choose keywords, and navigate between different sites. One of the biggest search engines so far is Google. Chatbot Last but not least… Chatbots are software applications that employ AI and natural language processing (NLP) to make humanized conversations with people. AI-powered chatbots can provide learners with additional personalized support and resources. ChatGPT can truly be considered as the brightest example of a chatbot today.
Highlights of AI and challenges to address
Today AI-powered functions revolutionize education, just to name a few: speech recognition, NLP, and emotion detection. AI technologies enable identifying patterns, building algorithms, presenting knowledge, sensing, making and following plans, maintaining true-to-life interactions with people, managing complex learning activities, magnifying human abilities in learning contexts, and supporting learners in accordance with their individual interests and needs. AI allows students to use handwriting, gestures or speech as input while studying or taking a test.
Along with numerous opportunities, AI-evolution brings some risks and challenges that should be profoundly investigated and addressed. While approaching utilization of AI in education, it is important to keep caution and consideration to make sure that it is done in a responsible and ethical way, and not to get caught up in the mainstream since some AI tools consult billions of data available to everyone on the web. Another challenge associated with AI is a variability in its performance: some functions are performed on a superior level (such as identifying patterns in data) but some of them are quite primitive (such as inability to support an in-depth conversation). Even though AI is very powerful, human beings still play a crucial role in verifying AI’s output to avoid plagiarism and falsification of information.
Conclusion
AI is already massively applied in education around the world. With the right guidance and frameworks in place, AI-powered technologies can help build more efficient and equitable learning experiences. Today we have an opportunity to witness how AI- and ML-based approaches contribute to development of individualized, personalized, and adaptive learning.
ASC’s CEO, Dr Thompson, presented several topics on AI at the 2023 ATP Conference in Dallas, TX. If you are interested in utilizing AI-powered services provided by ASC, please do not hesitate to contact us!
References
Miao, F., Holmes, W., Huang, R., & Zhang, H. (2021). AI and education: A guidance for policymakers. UNESCO.
Niemi, H., Pea, R. D., & Lu, Y. (Eds.). (2022). AI in learning: Designing the future. Springer. https://doi.org/10.1007/978-3-031-09687-7
| 2023-03-19T00:00:00 |
2023/03/19
|
https://assess.com/ai-in-education/
|
[
{
"date": "2023/03/19",
"position": 41,
"query": "artificial intelligence education"
}
] |
ChatGPT AI for Education Resources & Impact
|
ChatGPT AI for Education Resources & Impact
|
https://www.create-learn.us
|
[
"Create",
"Learn Team"
] |
See ChatGPT's impact on education & what's ahead. Explore free ChatGPT education resources for teachers & parents from experts.
|
Your children's learning today and their future have changed drastically right in front of our eyes with the recent launch of ChatGPT. The impact and capabilities of artificial intelligence (AI) are growing almost daily.
With ChatGPT passing Google interviews and law, medical, and business school exams, and about 90% of students admitting using ChatGPT for their homework, there are both huge opportunities to use ChatGPT for improving education and more urgency in shifting how we educate children.
We have gathered this collection of resources to help educators and parents understand and navigate ChatGPT (and similar AI technologies). We will continue to update the page as new developments occur.
What Is ChatGPT?
To put it in the simplest form, ChatGPT is an Artificial Intelligence technology that you can converse with and ask an incredible wide range of questions from how to cook pasta, travel recommendation, to coding, solving calculus problems, and a lot more. It is so intelligent that it can pass Google Interviews, AP tests, and Bar exams just to name a few examples.
Hear how Sam Altman described ChatGPT. He is CEO of OpenAI - the company that developed ChatGPT. For a more technical description of ChatGPT, check out the ChatGPT launch blog.
ChatGPT was created by OpenAI. Similar AI technologies have been developed by Google and other companies. But ChatGPT is the most well known one. In this guide, for simplicity, we sometimes use ChatGPT to represent this kind of AI technology in general.
More In-depth technical references on what ChatGPT is
How To Use ChatGPT
Many products now incorporate ChatGPT. You can try it out still at its original form for free on the OpenAI ChatGPT website. Here is a two-minute video that walks you though how to get started using ChatGPT.
ChatGPT Prompt Engineering
ChatGPT can do numerous things. How much power you can get out of it depends largely on your ability with prompt engineering.
If you are curious about what others have talked to ChatGPT about or want some inspiration on what to prompt ChatGPT with, check out the "Awesome ChatGPT Prompts" repository. It is a collection of hundreds of fun prompt examples to have ChatGPT act as a screenwriter, tea taster, interior designer, statistician, ... the limit is only our imagination!
Here are some of the more technical references:
ChatGPT's Education Impact
One of the areas that ChatGPT will surely create significant impact is education. As an example, as of March 2023, ChatGPT 4 is able to pass a wide range of AP and higher level tests.
The immediate concern is there is a strong potential that students might misuse ChatGPT, instead of trying to do research, homework, and reports on their own.
Most also see the huge potential of how ChatGPT can be that powerful personalized teacher that adapt its teaching for every single student.
The challenge in front of most educators in the short term is how to reduce mis-uses of ChatGPT, and ideally leverage it to improve students' learning experiences. Interestingly, in our opinion, the best way to reduce mis-uses is in fact to proactively incorporate it as part of learning.
The long term strategic/policy implication of ChatGPT is profound. We will need to rethink what students should be learning and how education should be structured.
Views & Thoughts on ChatGPT from Industry Leaders
There are many references in this area. We try to highlight diverse samples of opinions. The goal is not to be exhaustive.
Research and Surveys about ChatGPT for Education
Open AI, OpenResearch, University of Pennsylvania
Key findings (learn more)
1 in 5 American workers could do their job much, much faster. 19% of American workers have at least 50% of their tasks exposed to GPTs, meaning that access to a GPT could reduce its completion time by at least 50%. Faster and cheaper, but not necessarily better…it depends on the data
GPTs are GPTs: An Early Look at the Labor Market Impact Potential of Large Language Models
AI & Tomorrow's Job (Junior Achievement) - March 2023
Key findings (learn more, report)
More than nine in 10 teens say they would be interested in learning in high school about how to work with artificial intelligence.
Sixty-six percent of teens said in a newly released survey that they are concerned they may not be able to find a good job as adults because of artificial intelligence. And one in three said they were “very” or “extremely” concerned.
Walton Family Foundation - March 2023
Key findings: (learn more)
A 51% majority of teachers report using ChatGPT, with higher usage among Black (69%) and Latino (69%) teachers.
Three in ten teachers have used it for lesson planning (30%), coming up with creative ideas for classes (30%), and building background knowledge for lessons and classes (27%).
A third of students 12-17 say they’ve used ChatGPT for school (33%), including 47% of those 12-14.
88% of teachers and 79% of students who’ve used ChatGPT say it’s had a positive impact. Teachers
Study.com (Q1, 2023)
Key findings: (learn more)
82% of college professors are aware of ChatGPT, compared to 55% of grade school educators. Over 9 in 10 students are aware of ChatGPT, far more than grade school educators.
Over a third (34%) of all educators believe that ChatGPT should be banned in schools and universities, while 66% support students having access to it. Surprisingly, 72% of college students believe that ChatGPT should be banned from their college's network.
Quizlet (July, 2023)
Key findings: (Learn more)
62% of all respondents have used AI technologies
Students agree that AI technologies help them better understand material
(73%) and study faster or more efficiently (67%)
Students who study three hours or more per night are more likely to say they have used AI technologies
News & Opinion Pieces on ChatGPT and Education
There are also many references in this area. Again, our goal is to highlight a diverse sample of opinions, rather than being exhaustive.
Panel Discussion on ChatpGPT for Education for Teachers and Parents
How ChatGPT and AI are Changing Children’s Learning and Their Future (Event for Parents & Teachers) - Create & Learn March 2023
David Touretzky - Computer Science Professor at Carnegie Mellon University; Founder and chair of AI4K12.org
Jamila Khawaja - Senior Curriculum Development Manager at Code.org
Jia Li - Chief AI Fellow of Cloud First, Data & AI at Accenture, Founding Head of Google Cloud AI R&D, Stanford
Wes Chao - Computer Science teacher at The Nueva School, a leading preK-12 independent school for gifted learners, and former Facebook engineer
How ChatGPT Works Technically
Semi-technical explanation by Ari Seff (YouTube video)
Deeper technical presentation- Stanford Webinar - GPT-3 & Beyond (YouTube video)
Safety for ChatGPT - Our approach to AI safety (OpenAI)
Learn More About AI: AI Classes for Students
Interested in having your children or students learn more about AI? Check out our award-winning live online AI classes for grades K-12, led live by an expert instructor.
| 2023-03-19T00:00:00 |
2023/03/19
|
https://www.create-learn.us/blog/chatgpt-ai-for-education-resources/
|
[
{
"date": "2023/03/19",
"position": 72,
"query": "artificial intelligence education"
}
] |
AI isn't yet going to take your job
|
AI isn’t yet going to take your job — but you may have to work with it
|
https://www.washingtonpost.com
|
[
"Danielle Abril",
"Danielle Abril Covers Technology",
"Its Impact On Workers Across Industries For The Washington Post."
] |
Artificial intelligence is increasingly making its way across industries, changing jobs from retail to medicine to marketing.
|
In a world of infallible artificial intelligence, computers could do most of our work for us. They could diagnose our illnesses in a second. Robots and autonomous vehicles could shop and deliver our groceries. Systems could ensure we don’t break our budgets. AI could operate our transit — planes, trains and cars — without human assistance, and even make our dinner.
That’s the vision of many AI enthusiasts. But the current reality is that while there has been progress, humans are still required to do most jobs. An AI could introduce problems to the workplace, creating risks for workers, their employers and customers, some experts say.
Today, AI can power grocery store robots that change how stores get stocked, speed up vaccine production and generate creative ideas. But the latest advancements raise important questions for workers: How much of our jobs depend on humans? Can technology replace us?
Press Enter to skip to end of carousel Work: Reimagined With attitudes toward work undergoing a dramatic transformation, this series explores the impact of that shift on everything from the shape of the American workplace to the role work plays in our lives. Read more. End of carousel
AI won’t entirely replace humans any time soon, industry experts and companies investing in the technology say. But jobs are transforming as AI becomes more accessible.
“Every job will be impacted by AI,” said Pieter den Hamer, vice president of research who covers artificial intelligence at market research firm Gartner. “Most of that will be more augmentation rather than replacing workers.”
Companies have been using AI for years to help crunch large amounts of data to produce insights for their businesses. Some blue-collar jobs have used AI-powered machines to help with warehouse inventory.
White-collar jobs are likely to see the biggest impact near-term, den Hamer said, as AI can be applied at a relatively low cost compared with deploying a fleet of autonomous trucks, for example.
Story continues below advertisement Advertisement Story continues below advertisement Advertisement
If you’re curious about how some jobs might shift, explore the industries below.
Banking and finance
What’s happening: Large banks have been using AI to improve back-end operations, cybersecurity and power chatbots for faster customer response.
Royal Bank of Canada said it’s testing generative AI to help build software faster. AI can help developers find code they can repurpose for new products or write basic new code, said Martin Wildberger, its executive vice president of innovation and tech.
[Quiz: Did AI make this? Test your knowledge.]
Financial firm Capital One said AI and machine learning are central to its engineering workforce. The bank holds AI and ML patents for fraud detection and natural language processing.
AI advances in the field: Several banks are aiming to offer more personalized financial products and advice, increase the speed of fraud detection to alert customers instantly and remind people of specific bills or spending.
Abhijit Bose, a Capital One senior vice president, said AI could soon monitor transactions to offer more personalized financial advice, insights on spending and saving or quick alerts on deviations from normal spending habits — something as simple as an outlier tip percentage.
Story continues below advertisement Advertisement Story continues below advertisement Advertisement
Morgan Stanley recently began testing chatbots powered by OpenAI’s GPT-4 with 300 advisers to help them easily pull up research and data. The firm plans to open it up to its 16,000 advisers in upcoming months.
But financial institutions are cautious. AI could introduce risks such as frustrating customers with too much automation, breaking privacy laws aimed at protecting customer’s personal financial data and potentially discriminating against people with lower income.
How jobs might change: RBC is asking workers across functions to become familiar with using AI tools, Wildberger said. It could provide customer service representatives with summaries of complex cases based on previous interactions. And business teams could automate some processes to be more efficient.
“We really focus on the productivity side of tech,” Wildberger said. “Can we automate something to free up [employees’] time?”
Capital One said it’s hiring AI and machine learning engineers, but it’s also upskilling current engineers. Bose said the company has already trained more than 100 engineers through its six-month program.
Health care and pharmaceuticals
What’s happening: Many hospitals use electronic medical records, an area that may benefit from AI for organization and analysis, said Hatim Rahman, an assistant professor at Northwestern University’s Kellogg School of Management who studies AI’s impact on work. And drug development can involve analyzing hundreds of millions of data points, another area where AI could help.
AI advances in the field: Johnson & Johnson sped up the trials of its coronavirus vaccine by using AI to identify hot spots including where variants emerged, said Jim Swanson, executive vice president and chief information officer. It also can help narrow the focus on molecules and identify targets for drug discovery or accelerate image analytics to determine drug effects. And AI supports the manufacturing process for a personalized blood cancer treatment that modifies patients’ own cells.
Swanson said AI also helps guide physicians through procedures like surgeries with augmented reality. As the physician works, it provides guidance on the best next steps. It also helps with reporting adverse events related to drugs by scanning the latest medical literature and flagging reports that need to be reviewed or accelerate image analytics to determine drug effects.
The University of Kansas Health System recently rolled out a generative AI app to more than 140 hospitals. The app, from health-tech company Abridge, records audio of a patient interview, transcribes it, then summarizes important elements to automatically fill out a patient’s medical chart.
“The joy of medicine is helping people get to better health, not the clerical activity,” said Gregory Ator, the health system’s chief medical informatics officer and surgeon. This “just streamlines documentation.”
Generative AI can introduce errors, though, which could be problematic for health-care providers. Abridge highlights parts where reliability of the transcript may be decreased so people can review it, CEO and cardiologist Shiv Rao said.
Some health-care professionals are using AI for cancer screenings, medical imaging and predictions to better detect problems. Google is working with partners such as the Mayo Clinic to validate AI that could automate part of the planning process in radiation treatments for cancer, help mobile ultrasound devices detect early stages of breast cancer or provide vital maternity data without a sonographer and power tuberculosis screenings. But it will probably take years before the technology is ready for professional use, Google said.
If relied on too heavily, AI errors in medical processes could have life-altering consequences.
That’s why Tammy Mahaney, a Bay Area nurse and sonographer, said she checks readings provided by AI-enabled systems to ensure they match what she’s seeing. But she said the tools help her care for more patients.
Story continues below advertisement Advertisement Story continues below advertisement Advertisement
During a missionary trip to an underserved community in the Galápagos Islands, Mahaney used Butterfly IQ+, an AI-enabled tool that helps perform an ultrasound, interpret it and automatically provide measurements and images on a mobile device such as an iPhone. With the ultrasound, Mahaney determined that a woman in her mid-40s was pregnant and not suffering from a tumor, as she had been told. Still, Mahaney said AI is just a tool.
“You always want to be cautious about diagnosis,” she said of AI. “The limitations are you don’t get the human interaction and instinct.”
How jobs might change: Rao said AI isn’t too far from being able to aid health professionals with decision-making.
“There will be a space for AI to be a thought partner,” he said, adding that the tech could help find the differentiator between two conditions.
In the future, AI could be tied to more devices and wearables for health, Swanson said. Johnson & Johnson aims to digitally upskill 10,000 additional employees this year so they can use the tech to forecast sales or improve operations. And it’s exploring how to use and combine data without bias.
Retail
What’s happening: One way big retailers use AI is to track the market price of items, which changes based on factors including supply and logistics, to ensure products are competitively priced. AI can help adjust prices of thousands of products across a store, said Ananda Chakravarty, vice president of research at market intelligence firm International Data Corporation. AI can also help forecast exactly when to drop prices to increase profits.
Retailers are also using AI to schedule workers based on a store’s need, automatically charge people for items with computer vision and recommend products to customers online, Chakravarty added.
AI advances in the field: Sam’s Club, which often serves as tech pilot for Walmart, debuted autonomous floor scrubbers late last year, which in addition to cleaning floors, use computer vision to scan shelves for missing items, low inventory or mislabeled products. The information gets sent back to an ecosystem that could change workers’ priority list. For example, they may need to unload and stock water next if scrubbers determine shelves are empty.
The retailer also uses AI in its virtual voice assistant called Ask Sam, which workers can use to quickly find prices, locate items or help customers. It hopes AI will soon help determine things such as how many croissants workers should bake and automatically alert them when the doughnut count is low, for example.
“We’re moving to where AI is going to be embedded in a lot of things so we can increase associate productivity and reduce friction for members,” Pete Rowe, Sam’s vice president of tech, said.
Story continues below advertisement Advertisement Story continues below advertisement Advertisement
Looking ahead, retailers might use computer vision to automatically identify whether a customer is old enough to buy alcohol, Chakravarty said, adding that the tech is in early stages of adoption. Generative AI may also soon write product descriptions for thousands of products, said Christian Beckner, the National Retail Federation’s vice president of retail technology and cybersecurity. And AI could crawl social media to automatically design clothes or products based on trends, allowing retailers to get new items to market quickly.
But AI-enabled systems aren’t always well received by everyone. When Walmart rolled out robot cleaners a few years ago, some store associates complained about malfunctions and the time they tended to training the robots (Walmart said the bots appeared to have been well-received). And facial recognition systems have historically suffered flaws, often misidentifying people of color, which could lead to security unfairly targeting Brown and Black people.
How jobs might change: Workers’ jobs are likely to be dictated by what machines deem most important or risk losing money or efficiency for the store. Workers also will probably need to adjust to working with data and tech more frequently, Chakravarty said.
“You don’t have to be an expert, but you need to know how to interpret the data,” he said.
But more AI could mean more risks.
“The key concern would be … the risk of algorithmic discrimination or adverse consequences in how you treat different types of customers,” Beckner said. “There definitely needs to be a level of caution.”
Writing and marketing
What’s happening: One of buzziest forms of AI — generative AI — can produce digital images, conversational text, code and summaries of lengthy documents from a simple prompt. While it’s still in its early days, it has big implications for jobs that include writing, coding or promoting products.
AI advances in the field: Last year, software development platform GitHub debuted GitHub Copilot, a tool that uses OpenAI models to write code based on a user’s prompt. Copilot can suggest methods, unit tests, boilerplate code and complex algorithms, GitHub said.
Some writers are using generative AI tools like ChatGPT to co-write and illustrate books to sell on Amazon. And one legislator used it to help draft a law aimed at regulating AI. Companies such as Microsoft and Google are integrating generative AI tools so office workers can do tasks like write emails or create presentations faster within the apps.
Jonathan Nelson, senior digital marketing manager of growth for the American Marketing Association, said marketers are experimenting with ChatGPT to write articles, including optimizing them for search engines, though they’re not yet publishing those items.
“You have AI write a 1,000-word article, and then go through and edit it to make it sound human again,” he said. “It’s a framework for articles.”
Story continues below advertisement Advertisement Story continues below advertisement Advertisement
Jeff MacDonald, social strategy director at ad agency Mekanism, said he uses generative AI to brainstorm images for illustrators and designers. He also uses it to scrape TikTok comments and analyze reactions, ideas, and similarities and differences between brands.
But he often uses other tools to double check AI-generated items, as it can make things up or get things wrong, and he avoids using them in finished products. Some AI companies are being sued for scraping copyrighted materials.
“If [AI companies] lose these lawsuits … there’s no saying they can’t go after a brand that used copyrighted imagery,” he said.
How jobs might change: Generative AI tools could help workers become more productive, especially with content creation, said den Hamer of Gartner. That might mean using AI for a first draft and social media posts to solve simple problems or provide summaries of complex topics.
Nelson said though much is still experimental, marketers have a sense that they’ll soon work with AI if they aren’t already — even if it’s just to help determine the success of a campaign. But he said it will be important for the industry to keep human creativity front and center.
“If everyone relies too much on one or two AI [tools], and it operates the same way, do you end up with rampant sameness where nothing stands out?” he said.
| 2023-03-20T00:00:00 |
https://www.washingtonpost.com/technology/interactive/2023/ai-jobs-workplace/
|
[
{
"date": "2023/03/20",
"position": 15,
"query": "artificial intelligence employment"
},
{
"date": "2023/03/20",
"position": 23,
"query": "AI labor market trends"
},
{
"date": "2023/03/20",
"position": 19,
"query": "machine learning workforce"
},
{
"date": "2023/03/20",
"position": 33,
"query": "artificial intelligence workers"
}
] |
|
Automating Discrimination: AI Hiring Practices and Gender ...
|
Automating Discrimination: AI Hiring Practices and Gender Inequality
|
https://cardozolawreview.com
|
[
"Lori Andrews",
"Hannah Bucher",
"J.D.",
"Professor Of Law",
"Chicago-Kent College Of Law",
"Director",
"Institute For Science",
"Law",
"Technology",
"Illinois Institute Of Technology"
] |
As the company grew, needing to hire tens of thousands of employees, management asked its engineers to create an AI algorithm to identify the best potential ...
|
“I think people underestimate the impact algorithms and recommendation engines have on jobs,” Derek Kan, Vice President of Product Management at Monster says. “The way you present yourself is most likely read by thousands of machines and servers first, before it even gets to a human eye.”
Introduction
Amazon is a world leader in the use of artificial intelligence (AI) to address a range of business issues, from predicting consumer purchases to reducing its corporate carbon footprint. As the company grew, needing to hire tens of thousands of employees, management asked its engineers to create an AI algorithm to identify the best potential employees based on their resumes alone.
After 500 attempts, the engineers collectively threw up their hands. Instead of creating a useful automated hiring technology, they had created the perfect tool to discriminate against women. The algorithm rejected applicants who used the term “women” anywhere—such as “Captain, Women’s Soccer Team” or “National Women’s Chess Champion.” It rejected applicants who went to all-women’s colleges. Not only did the program reject potentially qualified women before they even reached the interview stage, but some candidates the algorithm identified for jobs were not even qualified.
How could the world leader in AI so miss the mark? The answer is an abiding fact of AI—it learns to replicate the biases of the data used to create it. Because the Amazon engineers developed the algorithm based on resumes submitted to Amazon, which were predominantly male, the AI responded by assuming male candidates were preferred. This fiasco led Amazon to give up on creating such a hiring tool. However, many other companies are marketing or employing AI-based hiring tools. In a Harris Poll conducted for CareerBuilder, 55% of Human Resource managers said they would be using AI by 2022. The COVID-19 pandemic escalated the demand for AI-based hiring technologies, further entrenching them into normal HR procedures.
Despite the potential for gender discrimination, independent developers and companies sell AI hiring tools to businesses without evidence that those technologies actually identify qualified candidates. At least 407 companies within the Fortune 500 use some combination of three such technologies—resume scanning, one-way video interviews, and the use of video games—to screen applicants.
The developers marketing these technologies claim that the algorithms can decrease costs, save time, and identify the best applicants in the hiring process. The technologies are even touted as a way to avoid racial and gender discrimination and protect employers from being sued under employment discrimination laws because the decisions are made by a computer rather than a human.
Automation, however, is not necessarily a woman’s friend. On the internet, female job seekers are directed to lower-paying jobs more often than male job seekers. Researchers from Carnegie Mellon created hundreds of fake male and female internet job seekers. The fake job applicants from both groups visited employment webpages. The study found that male job seekers received overwhelmingly more ads for high-paying jobs than equally qualified female job seekers. Ads that read “$200k+ Jobs—Execs Only” and “Find Next $200k+ Job” were displayed almost six times more often for men than for women.
The design of the technologies at issue in this Article similarly create a situation that favors male candidates. If the technologies are developed using data from the existing employees (such as their resumes, their speech patterns in one-way video interviews, or the way they play video games), the algorithm will privilege male traits if the existing employees are predominantly male. The risk of gender discrimination is real due to the male-skewed workforce in many major companies. In 2018, men accounted for 81% of Microsoft’s technical workforce, 79% of Google’s, 78% of Facebook’s, and 77% of Apple’s.
This Article makes a unique contribution to the literature by combining a deep understanding of AI hiring technologies with an original series of proposals of how they should be addressed by law. The topic is of crucial importance due to the extensive use of these technologies and their powerful potential for discrimination. This Article addresses three AI-based hiring tools that rank and even reject applicants before they get to the interview stage—resume scanning, one-way video interviews, and the use of video games to screen applicants. It analyzes how the use of seemingly neutral AI in recruiting may discriminate against women and on what legal grounds a woman who is not hired might bring a legal claim challenging the use of these technologies. Part I summarizes the AI-based hiring technologies and analyzes the ways in which they might disadvantage women. Part II provides the overall framework for gender discrimination cases involving employment under Title VII of the Civil Rights Act. Part III applies the legal principles and precedents of Title VII law to the use of AI in hiring assessments, and Part IV proposes policy changes to ensure fairness in hiring in an era of algorithms.
I. Artificial Intelligence and Machine Learning in Hiring Decisions
Hiring software uses artificial intelligence and machine learning to create algorithms to predict which job applicants will be successful in the job. The term “artificial intelligence” refers to all computation efforts to code a machine to make decisions as though it were a human. “Machine learning” is a subset of artificial intelligence in which “the automated model-building process determines which input variables (or features) are most useful and how to combine them to best predict a behavior or outcome based on the latest data available.” In the hiring context, the algorithms look for correlations between various traits that applicants have and the traits of people who, by some measure, have succeeded in the job (such as the top managers in a company). What distinguishes machine learning from human-coded algorithms is that the computer, rather than a person, constantly modifies the algorithms to identify the “important” patterns. According to a joint Accenture and Harvard Business School study, 90% of Fortune 500 businesses use automated technology in hiring to “initially filter or rank potential middle-skills . . . and high-skills . . . candidates.”
Advocates of the use of algorithms in hiring claim that AI reduces the time and cost of finding employees. But they often underestimate the complexity of testing their predictions and validating the results. When discussing the benefits of machine learning in the context of hiring, a team of economists analogized the process to a tool used during brain surgery. During a typical brain surgery to remove a tumor, doctors would generally over-remove brain tissue to ensure that all cancerous tissue is excised. A company developed an algorithm that, in conjunction with a medical imaging device, could analyze in real time the tissue the doctor was assessing during brain surgery. The algorithm predicted with around 90% accuracy whether the brain tissue under the wand was cancerous.
In designing medical studies involving machine learning and cancer, researchers analyze thousands of tissue samples. They follow up by testing the tissue to determine if it is cancerous or not. The employment situation is much different. Algorithms are being developed using data from a limited number of existing employees (for a particular one-way video algorithm, it is 50 employees) . In medical situations, researchers can easily measure false positives and false negatives by testing the tissue. But how do we determine whether the women who were rejected would have done better than the men who were hired?
The hiring context creates a challenge in both defining success and determining what contributes to it. It is surprisingly difficult to determine job success. We do not have a metric for what makes a good employee. Are the people in the top positions in the company or the highest-salaried people necessarily the smartest, most productive, most creative, and best leaders? And what traits actually ensure job success, as opposed to those traits that the supposedly “top” employees share, that are unrelated to doing the job well?
Ascertaining what makes a good employee is a challenge for artificial intelligence hiring technology. Peter Cappelli notes in the Harvard Business Review that researchers have been trying to determine what constitutes a good hire since World War I : “So the idea of bringing in exploratory techniques like machine learning to analyze HR data in an attempt to come up with some big insight we didn’t already know is pretty close to zero.”
Because the data used to train hiring algorithms consists of the kind of traits and qualities possessed by an existing pool of employees, the program will produce results to mirror and favor those inputs. For example, in medical school admissions, an algorithm trained on historic data incorporated the previous human decision biases : the algorithm selected against women and those who were not native English speakers. If a hiring algorithm is modeled on an existing workforce without gender diversity, the results will also lack gender diversity. Any model trained to assess potential candidates will do little other than “faithfully attempt to reproduce past decisions” and, in doing so, “reflect the very sorts of human biases they are intended to replace.”
Because an algorithm ultimately selects which criteria to include, the algorithm itself can consider both illogical and discriminatory variables in its decision-making process. The algorithm may focus on traits of top employees that have nothing to do with actual ability to do their job. For example, the artificial intelligence program created by the company Gild to find potential employees out in the wild processed a massive quantity of data and then advised clients that a good potential employee is someone who visits a certain Japanese manga site.
In another instance, when one of his clients was about to employ a resume scanning program, attorney Mark Girouard inquired into the variables that the algorithm was prioritizing in applicants’ CVs. The algorithm identified two factors as indicative of successful job performance: first, that the candidate’s name was Jared, and second, that the applicant played high school lacrosse. Girouard noted that with such systems, “your results are only as good as your training data.” He said, “[t]here was probably a hugely statistically significant correlation between those two data points [(being named Jared and having played lacrosse)] and performance, but you’d be hard pressed to argue that those were actually important to performance.”
As the Jared example shows, correlation is not causation. If Tony changed his name to Jared, he would not then have more skills. Moreover, creating algorithms by retrospectively assessing a workforce may doom the corporation to stagnation because the few employees who are visionaries with the ability to move the corporation forward would likely have traits that are underrepresented in the data set.
Although AI proponents often tout that their technologies combat discrimination, there are multiple ways in which gender discrimination may inadvertently crop up. Using data from preexisting top performers can lead to “hindsight bias” because the algorithms will presume that (1) the characteristics the algorithm identified led to success, rather than merely being correlated with it; and (2) the characteristics that led to success in the past will necessarily lead to success in the future. Hindsight bias can operate to the disadvantage of groups of individuals who have historically been excluded from the workplace, including women. Given that possibility, what legal recourse is available for women who are not hired because of bias in the algorithm?
II. The Law of Employment Discrimination Under Title VII
Title VII of the Civil Rights Act prohibits a broad range of discriminatory conduct based on an individual’s sex, including an employer refusing to hire an applicant, discharging an employee, refusing to promote an employee, or demoting an employee. The two main theories of liability under Title VII are disparate treatment and disparate impact.
In 1978, the Equal Employment Opportunity Commission (EEOC) released the Uniform Guidelines on Employee Selection Procedures (Uniform Guidelines) under 29 C.F.R. § 1607. Based on court decisions, previous agency guidance, and the policies underlying Title VII, the Uniform Guidelines were designed to help both public and private employers comply with federal employment law. The Uniform Guidelines provide guidance about what types of employer conduct are permissible in assessing job applicants.
These Guidelines provide that before using a selection tool for hiring, an employer should perform a job analysis to determine which measures of work behaviors or performance are relevant to the job or group of jobs in question. Then, the employer must assess whether there is “empirical data demonstrating that the selection procedure is predictive of or significantly correlated with important elements of job performance.” Although the Uniform Guidelines can shepherd employers through the tangle of federal law, the Supreme Court has explained that the “Guidelines are not administrative [‘]regulations’ promulgated pursuant to formal procedures established by the Congress.” Instead, they are an “administrative interpretation” of Title VII by an administrative agency. Nevertheless, the Supreme Court has consistently held that the Uniform Guidelines are “entitled to great deference.”
In 2016, the EEOC held a meeting to educate itself on the use of algorithms in hiring. The Commission received testimony about the benefits of AI in recruitment and its risks. However, the EEOC has yet to articulate any general guidance regarding the effect of algorithms and machine learning on federal employment law. Consequently, a woman who is discriminated against in hiring must turn to the existing legal approaches by demonstrating that the use of an AI hiring technique caused disparate treatment or a disparate impact due to her gender.
A. Disparate Treatment
Disparate treatment is the most blatant form of discrimination because the employer’s conduct is intentional. Liability under the theory of disparate treatment requires a plaintiff to establish that her employer acted with a discriminatory intent or motive. A plaintiff can establish this in one of two ways. First, the plaintiff can present evidence of an employer’s explicit discriminatory statement, such as, “I would hire you, but I am not going to because you are [a female].” And second, the plaintiff can use indirect or circumstantial evidence of the employer’s conduct. An employer can even be liable for disparate treatment if the employer has a mixed motive, such as a legitimate reason for the decision in addition to the discriminatory one.
In the U.S. Supreme Court case Price Waterhouse v. Hopkins, a woman who was passed over for partnership successfully argued intentional sex discrimination. The firm admitted that the employee was qualified and stated that she would have been promoted but for her interpersonal problems. By interpersonal problems, the firm meant that she was “aggressive” or “unduly harsh.” However, there was also evidence that the firm refused to offer her the partnership because the partners felt that she needed to wear more makeup; speak, walk, and talk more femininely; and be less aggressive. Other statements conveyed that the plaintiff was “macho” and that she should “take ‘a course at charm school.’”
In this mixed motives case, the Court had to decide whether the interpersonal skills rationale was a legitimate nondiscriminatory basis for denying her the partnership or whether it was merely a pretext to disguise sex discrimination. The Court held that when a plaintiff can demonstrate that gender or gender stereotyping “played a motivating part in an employment decision,” the burden shifts to the defendant, who may avoid liability “only by proving by a preponderance of the evidence that it would have made the same decision even if it had not taken the plaintiff’s gender into account.” Expanding on the Court’s holding in her concurring opinion, Justice O’Connor explained that the employer’s statements constituted “direct evidence that decisionmakers placed substantial negative reliance on an illegitimate criterion in reaching their decision.” The case was reversed and remanded for further proceedings and ultimately decided in the employee’s favor.
The second way to establish disparate treatment is by using indirect or circumstantial evidence. Circumstantial evidence can be used to show that the employer’s proffered reason is a pretext “unworthy of credence” —for example, that the “employer’s explanation was contrary to the facts, insufficient to justify the action or not truly the employer’s motivation.” The plaintiff can also offer evidence of “suspicious timing, ambiguous statements oral or written, behavior toward or comments directed at other employees in the protected group, and other bits and pieces from which an inference of discriminatory intent might be drawn.” Evidence showing that the employer hired a less qualified applicant over the plaintiff in question, though not per se proof of pretext, may be evidence that the employer’s reasoning was a pretext for discrimination. This burden of persuading the court of the existence of pretext does not follow a rigid test, and “it is important to avoid formalism in its application, lest one lose the forest for the trees. Pretext is a commonsense inquiry: did the employer fire [or, as here, refuse to hire] the employee for the stated reason or not?”
As opposed to being denied a job or promotion because they are too macho, some women are rejected as being not macho enough. In a disparate treatment case centering on pretext, Eldred v. Consolidated Freightways Corp. of Delaware, an assistant linehaul supervisor, Judith Eldred, was denied a promotion purportedly because she lacked aggression. John Bubriski was promoted over Eldred “because he was an enthusiastic and ‘aggressive’ employee, had worked previously as a supervisor in Dock, and had leadership experience as an officer in the Army Reserves.” Eldred, however, “was substantially more qualified for th[e] promotion”—she had superior evaluations, she was in her prior position longer than Bubriski, and Bubriski was often late to work and had “spotty” evaluations. In fact, the only positive evaluation in evidence that related to Bubriski’s performance in the assistant position came after his promotion and appeared to be “an after-the-fact justification.”
Consolidated Freightways stated that it denied Eldred the promotion “because she lacked ‘aggressiveness’ and was too ‘soft’ with the drivers” —justifications that were linked to gender stereotypes. The federal district court found that even if these characterizations about Eldred were true—which the court said was highly doubtful—they never affected Eldred’s job performance. Ultimately, the court found that Eldred was more qualified than Bubriski for the promotion, and the proffered reasons for the refusal to promote Eldred were pretexts for gender-based discrimination. The court went as far as to say that “[t]he unavoidable conclusion is not that plaintiff was passed over for the promotion because she was not aggressive; it was because she was not male.”
An employer’s knowledge that a hiring practice discriminates against women, paired with evidence that shows the employer’s continued use of that same hiring practice, may also support an overall inference of intentional discrimination. Along those lines, in EEOC v. Joe’s Stone Crab, Inc., a restaurant—Joe’s Stone Crab (Joe’s)—sought to provide its customers with an “Old World” dining ambiance. In doing so, Joe’s management gave silent approval to the notion that male servers were preferable to female servers. The Eleventh Circuit Court of Appeals held that, by emulating “Old World traditions” of male servers, Joe’s intentionally excluded women.
B. Disparate Impact
The theory of disparate impact can be used when an employer’s seemingly neutral policy or practice operates to the disadvantage of women. The employer then has a chance to show that its selection criteria are related to job performance and serve the employer’s legitimate business needs. The plaintiff can overcome such a showing by proving that alternative selection criteria would serve the employer’s legitimate business needs, but “without a similar discriminatory effect.”
The Fourth Circuit Court of Appeals decision in United States v. Chesapeake & Ohio Railway Co. provides a helpful articulation of the employer’s burden of proof: “The test of business necessity . . . ‘is not merely whether there exists a business purpose for adhering to a challenged practice. The test is whether there exists an overriding legitimate business purpose such that the practice is necessary to the safe and efficient operation of the business.’”
In disparate impact cases, plaintiffs most often establish their prima facie case of disparate impact by statistical comparison. The Supreme Court acknowledges that statistics can be an important source of proof in employment discrimination cases because, assuming an employer is engaged in nondiscriminatory hiring practices, the workforce should be “more or less representative” of the larger community in which it operates.
The plaintiff is not required to show a disproportionate impact based on a comparative analysis of the actual applicants because courts recognize that “[t]he application process might itself not adequately reflect the actual potential applicant pool, since otherwise qualified people might be discouraged from applying because of a self-recognized inability to meet the very standards challenged as being discriminatory.”
One statistical benchmark for assessing whether a selection procedure results in a disparate impact is the “four-fifths rule” enumerated in the EEOC’s Uniform Guidelines on Employee Selection Procedures. The Uniform Guidelines explain that “[a] selection rate for any race, sex, or ethnic group which is less than four-fifths (4/5) (or eighty percent) of the rate for the group with the highest rate will generally not be regarded by the Federal enforcement agencies as evidence of adverse impact.”
The Supreme Court in Griggs v. Duke Power Co., a racial discrimination case, provides the framework and the theory for discriminatory impact cases. Prior to the passage of the Civil Rights Act, Duke Power Company prohibited African Americans from working in any department other than the janitorial department. The employees in that department were the lowest paid at the plant—even the highest paid employee in the janitorial department was paid less than the lowest paid employee in other departments. After the Act’s passage, Duke had to abolish the rule that African American employees were permitted only to work as janitors, but the company developed two new employment requirements for the other departments: (1) a high school degree and (2) a passing grade on standardized general intelligence tests.
In holding that Duke’s employment requirements violated Title VII, the Court explained that the scope of the Act reached “the consequences of employment practices, not simply the motivation.” Under the Act, any employment criteria, while “fair in form,” cannot be maintained if “they operate to ‘freeze’ the status quo of prior discriminatory employment practices.” Even when there is no evidence of prior discriminatory practices, and even if Duke enacted their diploma and testing requirements in good faith, under Title VII, “good intent or absence of discriminatory intent does not redeem employment procedures or testing mechanisms that operate as ‘built-in headwinds’ for minority groups and are unrelated to measuring job capability.”
In a case involving sex discrimination in choosing apprentice boilermakers, Bailey v. Southeastern Area Joint Apprenticeship Committee, those “built-in headwinds” resulted from points being awarded to applicants for criteria that were less likely to have been experienced by women—such as an extra five points for service in the military and an extra ten points for time spent in vocational school. As a result, 2,227 of 7,287 male applicants were accepted into the apprentice program, while only 2 of 94 female applicants were accepted. The female plaintiffs who brought the lawsuit were rejected from the apprenticeship program even though they actually had experience as boilermakers.
The court acknowledged that the apprenticeship committee “undoubtedly developed its screening mechanism in good faith,” albeit “informally and unprofessionally” because they did not perform any validation study of the selection, screening, or ranking procedures they used in their hiring process. The court opined that the screening questions were likely “developed in blissful ignorance of [their] possible impact on women as a protected class under Title VII.” The court recognized that there may be some “tangential relevance” between military service, shop classes, vocational training, and performance on the job, in that those activities are “conceivably indicative of [the applicant’s] general ability to work in a group,” but on the whole, the defendant failed to meet its burden of showing a legitimate business necessity for these questions, nor were the questions a “reasonable measure of job performance.” Finally, the court determined that there were likely less restrictive alternatives to questions about prior military service, vocational training, and shop classes.
Neither discriminatory intent nor previous discriminatory practice are prerequisites for a showing of disparate impact, thereby fashioning Title VII as a defense against more subtle forms of discrimination. Even where there is no conscious effort on the part of the employer to discriminate against a protected class, if its hiring policies or practices cause a disparate impact, the employer cannot escape scrutiny under Title VII.
Previous disparate impact cases challenging pre-employment testing have often involved tests for civil service positions such as police officers, firefighters, and corrections officers. Employers posit that these positions require a minimum level of physical or mental skill, and they rely on pre-employment tests to determine whether an applicant meets their desired standard. But such tests have been routinely challenged for having a disparate impact on a protected class like women or minority candidates. Pre-employment tests for civil service positions therefore provide a useful frame of reference for the kinds of challenges that might be brought against an employer who uses AI hiring technologies that seek to measure skills which the employer believes are necessary for success in the position. A key aspect of this jurisprudence is that even reasonable-seeming testing criteria (such as strength or math ability) will be struck down if it disproportionately disadvantages women, unless it is necessary for the “safe and efficient” performance of the job.
In Berkman v. City of New York, a case involving a physical exam, a twenty—nine-year-old woman was the lead plaintiff in a class action against the New York City Fire Department. She had passed the written exam but failed the physical exam, resulting in her disqualification as an applicant. The physical test had a passage rate of 46% for men and 0% for women. The court determined that the test did not meet the EEOC’s validation metrics for pre-employment testing. The Berkman court concluded that “it [was] possible” that the tests contained “isolated references to work behaviors bearing superficial resemblance” to actual job performance, but, on the whole, the test did not “represent appropriate abilities” that would predict an applicant’s success on the job.
Similarly, in Fickling v. New York State Department of Civil Service, plaintiffs brought suit under Title VII alleging that they were unlawfully terminated for failing an examination given as part of their job as Welfare Eligibility Examiners. The court assessed whether the content of the test was related to the content of the job and whether the scoring system “usefully selects” those applicants who are best suited to perform the job. The court determined that the test failed to comply with EEOC test validation metrics under the Uniform Guidelines because, among other things, 38% of the questions on the exam required arithmetic, even though the ability to do arithmetic was found to be “unimportant” to job performance based on an earlier analysis of the knowledge, skills, and ability of the ideal candidate.
In United States v. Massachusetts, the United States sought to enjoin the Commonwealth of Massachusetts and the Massachusetts Department of Corrections from using the Caritas Physical Abilities Test to select entry-level correctional officers, arguing that the test had a disparate impact on women applicants. While the court understood that, as a matter of common sense and safety, factors like an individual’s speed, strength, and ability could be relevant to determining whether someone is suited to the job of a corrections officer, the court nevertheless determined that Massachusetts failed to show that the test was consistent with business necessity and necessary for “effective, efficient, or safe job performance.”
III. Application of Title VII to AI Technologies in Hiring
A. Resume Scanning
1. The Technological Underpinnings of Resume-Scanning, Its Current Uses, and Its Gendered Impacts
Employers use artificial intelligence technologies to rate job applicants’ resumes. Resume scanning has been used by entities such as JCDecaux, University of Pennsylvania, MoneyCorp, Monster, Nissan, PharmEasy, Wal-Mart, General Electric, Starbucks, McDonald’s, Hyatt, UNICEF, and Chick-fil-A. Employers claim that AI technologies are necessary to deal with the torrent of resumes they receive for any given job. Proctor & Gamble, for example, received 1,000,000 applications for 2,000 jobs. The average number of resumes per job is actually a more manageable number, about 250 resumes per job posting.
One approach to resume scanning is for the developers to decide in advance which words on the resume should lead to a job applicant either being rejected or moved to the next stage. Kathryn Dill of The Wall Street Journal reported on hospitals scanning nurses’ resumes to find those who had listed “computer programming” when hospitals needed nurses who could enter their patient data into the computer. Yet nursing candidates might emphasize care skills on their resumes and not think to add computer skills that they actually possess. Other examples include a power company scanning for customer service experience when hiring power line repair employees and a store’s algorithm only selecting for “retail clerks” if they have “‘floor-buffing’ experience.”
Resume scanning technology can alternatively use artificial intelligence and machine learning to analyze the resumes and rank the candidates. Resume scanning companies claim their software analyzes and can select for traits such as attention to detail, leadership skills, and other qualities that “stand[] out.” To identify the characteristics thought to predict success, employers use resumes submitted by their current roster of top employees as the model for the dataset. The resulting hindsight bias may operate to the disadvantage of groups of individuals historically excluded from the workplace, including women. For example, if most managers in a company are men, and many happened to have been varsity football players, a resume scanning algorithm will give priority to resumes that also include “varsity football” credentials. Since very few women play varsity football, the algorithm will give priority to male candidates—even when playing the sport has no bearing on job performance. This is the process that led to the algorithm identifying the name Jared and having played high school lacrosse as the keys to success.
Resume scans can also discriminate against women due to differences in language that men and women have been socialized to use. Women are more likely to use “we” when describing a project, while men are more likely to say “I” when talking about achievements, so an algorithm trained mostly on men will be biased to choose candidates with “I” language on their resume. Men are more likely to use active verbs like “executed”; in choosing resumes with male-gendered verbs, such as “executed” or “captured,” the Amazon algorithm disadvantaged women.
The application of resume scanning programs that privilege maleness are reminiscent of the situation of Simone de Beauvoir and Jean-Paul Sartre, who both studied philosophy at the Sorbonne. They both sat for the agrégation, a civil service exam where the higher-ranked candidate got his or her pick of professorial jobs. They were neck and neck to be declared the top candidate. But the honor went to Sartre. Why? He received points for attending a prestigious high school. Since the school was for boys only, there is no way de Beauvoir could have matched him under that faulty “algorithm.”
Discrimination can also result from the lack of context in resume scanning. A large and unexplained gap on a person’s resume is often a red flag for a prospective employer and will result in automatic rejection by the algorithm. If a human were reading an applicant’s resume, context clues (i.e., a more suburban address, a more distant graduation year, volunteer experience at a local elementary school) surrounding a large gap between professional experiences on a woman’s resume could indicate a break taken to raise children. To a resume scanning algorithm, none of this context is considered—the program merely red flags and downgrades a resume with a large work experience time gap, and the resume may never be seen by a human recruiter.
A 2021 joint study conducted by professors at Harvard Business School and professionals from Accenture found that around 27,000,000 people have been stopped by resume scanning from finding full-time employment. The study did not provide a gender breakdown of those who were, as they described it, “missing from the workforce.” The study notes that 88% of the employers said “that qualified high-skills candidates are vetted out of the process because they do not match the exact criteria established by the job description. That number rose to 94% in the case of middle-skills workers.”
The researchers were critical of resume scanning algorithms because they can reject qualified candidates. They reject resumes with significant gaps in work experience, which can “eliminate huge swaths of the population such as veterans, working mothers, immigrants, caregivers, military spouses and people who have some college coursework but never finished their degree.”
2. The Potential Role of Existing Law in Response to Gender Discrimination in Resume Scanning
a. Disparate Treatment
What recourse does a woman have if she is rejected for a job by a resume scanning algorithm? She might be able to show disparate treatment if the algorithm downgrades an applicant based on sexist criteria, such as the use of “women” on the resume (such as “Captain, Women’s Lacrosse Team”) as in the Amazon algorithm example.
It could also be argued that an employer is engaged in disparate treatment if the process by which the technology is created is known to be biased in favor of men. Training a model on a dataset that overrepresented men would invariably lead to devaluing female candidates and thus is akin to intentional bias. In tech companies, for example, the existing representation of women is less than the four-fifths ratio suggested by the Uniform Guidelines. According to Google’s 2022 Diversity Annual Report, women made up 30.6% of the company’s tech hires in the United States, while men accounted for 69.4% of the company’s new recruits. Since tech companies can be expected to know that algorithms reflect the dataset on which they are trained, use of such an algorithm could be viewed as intentional discrimination based on sex.
Similarly, an intent to discriminate could be established if the employer has actual knowledge of the discriminatory effect of the algorithm through its own data of the gender breakdown of the people the algorithm ranks highly or through publication of a study about it. This would be similar to the studies done by ProPublica, which revealed that criminal sentencing algorithms discriminate against Black people. If a resume-scanning algorithm disfavors female applicants, the employer should realize the process is discriminating based on a protected characteristic. As one set of commentators opined, “it is not difficult to imagine courts taking a res ipsa loquitur attitude” in such circumstances.
b. Disparate Impact
A woman could alternatively bring a disparate impact claim if resume scanning leads to a significant difference in the hiring of women versus men. Griggs v. Duke Power Co. noted that any employment criteria, while “fair in form,” cannot be maintained if “they operate to ‘freeze’ the status quo of prior discriminatory employment practices.” The use of the resumes of existing employees to serve as the benchmark for the resume-scanning algorithm is problematic in that it privileges men over women. The algorithm results in hindsight bias because it has the tendency to discount groups of individuals historically excluded from the workplace, including women.
A female plaintiff might be able to show disparate impact if the algorithm scans for criteria that are much more likely to apply to men than women, such as playing football or military service. (Recall that in Bailey v. Southeastern Area Joint Apprenticeship Committee, the employer’s use of previous military service or participation in shop classes was held to be discriminatory.) If the algorithm scans for missing time periods in the resume (such as a year off between jobs, which may be more common to women who tend to take time off after childbirth), that, too, might be seen as discriminatory.
The burden would then be on the employer to show that the resume scanning technique was identifying job-related traits. In Griggs, the Court rejected the company’s argument that it should be allowed to use standardized intelligence tests in spite of the disparate impact they caused. The Court explained that an employer must demonstrate that any hiring metric must bear a “manifest relationship to the employment in question” and a “demonstrable relationship to successful performance of the jobs for which it [is] used.”
Think of the situation in which women are disproportionately rejected because men tend to use more active verbs and are more likely to use “I” to claim credit instead of “we.” Is it really likely that those speech styles are tied to better performance on the job—or do they demonstrate that the person is more likely to be arrogant and take credit for another person’s work? Given the lack of objective studies of the ability of resume scanning to predict future job performance—and the sexist nature of algorithms like the Amazon one that was developed using a mostly male workforce—it will be difficult for employers to make a showing that the traits were job-related.
For some traits, the employer might have a better chance of clearing the job-related hurdle. For example, being on the football team might show leadership abilities or team skills. Or a time gap might indicate that someone is not devoted to their career. Then, it would be up to the woman to come up with an alternative to the challenged metric. For example, the woman could argue that she has alternative leadership or team skills, such as participation in other sports. And she could argue that rather than using a gap on her resume after childbirth to suggest a lack of devotion to a career, the potential employer could check references to see how well she performed in her previous jobs.
B. One-Way Video Interviews
1. The Technological Underpinnings of One-Way Video Interviews, Their Current Uses, and Their Gendered Impacts
One-way video interviews differ from standard interviews because they happen without a human interviewer. The job applicant logs in online and records herself or himself responding to prompts in the absence of a human representative of the employer. As with resume scanning algorithms, one-way video interviews are marketed as a more efficient way for employers to evaluate large numbers of candidates and to remove bias and subjectivity from the hiring process.
One-way interviewing purportedly uses AI to analyze whether an applicant is creative, strategic, disciplined, driven, friendly, outgoing, assertive, persuasive, stress tolerant, and optimistic. This technology has been used for positions including customer operations clerks, warehouse workers, fast food crew members, retail supervisors, and by entities such as Six Flags, Facebook, Chick-fil-A, CA.gov, and McDonald’s.
After the interviews are recorded, an algorithm can analyze the video components, the audio components, or a written transcript of the interview. One-way interviewing AI can assess how the applicant’s face moved when responding to each question to determine, for example, how excited the applicant seemed about a certain task or how they would deal with an angry customer. For one company’s algorithm, these facial analyses counted for 29% of the applicant’s score. The Chief Technology Officer of the company told Business Insider about its video interview analysis. She explained that the artificial intelligence algorithm analyzed different features important for different jobs : if a job required client work, the algorithm weighted certain characteristics it read differently: “[T]hings like eye contact, enthusiasm . . . . Do they smile or are they down cast? Are they looking away from the camera?”
When an employer decides to use a one-way video interview, the developer can create a tailored algorithm by recording existing employees and choosing employees whose traits match those of the current successful employees. HireVue asked employers to use the one-way video interviews on all existing employees, “from high to low achievers,” and then used their scores to create a “benchmark of success.” After new applicants sat for their assessments, HireVue would generate a “report card,” which showed how well the applicant’s score matched up with the existing high-performing workers in the job for which they applied.
Hilton International used HireVue’s one-way video interviewing for “thousands of applicants for reservation-booking, revenue management and call center positions.” Although job recruiters at companies like Hilton have access to recordings of all the applicants, they generally will let the algorithm filter out the lower ranked candidates to save time. According to Sarah Smart, Hilton’s Vice-President of Global Recruitment, “[i]t’s rare for a recruiter to need to go out of [the top- ranked] range.”
The risk of creating ideal candidate profiles based on the characteristics of existing employees is that the AI will discount the candidates who look, speak, express, dress, or present themselves differently from the current employees for reasons that have nothing to do with their qualifications for the job. If the technology is trained on a mostly male sample, the algorithm can erroneously presume that male traits, such as being tall, wearing a tie, or having a deep voice, are correlated with success on the job. Speech patterns, whether assessed via audio or transcripts, are also gendered. Comparing speech patterns of a mostly male workforce to that of female applicants can work to the disadvantage of female applicants (as it did with Amazon’s failed resume scanning attempts, which privileged the use of words more commonly used by men).
A person’s linguistic style (i.e., their “characteristic speaking pattern”), will come through even when the content is transcribed into text. Linguistic style involves features such as “directness or indirectness, pacing and pausing, word choice, and the use of such elements as jokes, figures of speech, stories, questions, and apologies.” Essentially, “linguistic style is a set of culturally learned signals by which we not only communicate what we mean but also interpret others’ meaning and evaluate one another as people.” And, because different linguistic styles reflect different cultural norms, the patterns often differ for men and women. For example, girls and boys are socialized to communicate differently from a young age. Deborah Tannen, a professor of linguistics at Georgetown University, dubbed the way women learn to communicate as “rapport-talk” and the way men learn to communicate as “report-talk.” Girls tend to learn and engage in conversational styles that focus on building relationships with their peers, speaking modestly, and downplaying their own achievements, whereas boys engage in conversational styles that focus on status, self-promotion, and one-upmanship.
Even small differences in communication styles, like the choice of which pronouns a person uses, can affect who gets credit for an idea in the workplace, or even who gets a job. Professor Tannen found that men say “I” in situations where women say “we.” These linguistic cues were so ingrained that she even recorded instances of women saying “we” when referring to the work they performed alone.
Given the difference in communication styles between men and women, it is possible that a female applicant who applies for a position will be rejected because she makes “we” statements that highlight team- and relationship-building. Linguistic style differences were part of the reason that gender discrimination occurred in Amazon’s attempt to create a resume scanning algorithm. Trained on a dataset of mostly males, the algorithms learned to favor candidates who described themselves using verbs more commonly found on male engineers’ resumes, such as “executed” and “captured.” The use of one-way video interviews thus raises serious questions of discrimination based on an applicant’s gender, race, and age, leading critics to call it “a license to discriminate.”
Nor will the one-way video interview necessarily identify competent potential employees because the technology looks for commonalities between existing employees without in-depth assessments of their performance and skills. While the AI systems may be able to tell the difference between a smile and a frown, they are less able to interpret the intent behind those physical expressions. A neuroscientist who studies emotion described the system as “worryingly imprecise in understanding what those movements actually mean and woefully unprepared for the vast cultural and social distinctions in how people show emotion or personality.” Even a former provider of video analysis in hiring, HireVue, has stepped away from analyzing the video images themselves after finding that “visual analysis has far less correlation to job performance than other elements of [their] algorithmic assessment.”
2. The Potential Role of Existing Law in Response to Gender Discrimination in One-Way Video Interviews
a. Disparate Treatment
One-way video interviews present some of the same barriers to the hiring of women as does resume scanning, leading to similar potentials for disparate treatment claims. If the AI was trained on existing employees who are mainly men, it may erroneously assume that all sorts of male traits are prerequisites for performing well in the job—such as having shorter hair, a louder voice, a particular type of clothes, the use of “I” instead of “we,” or the use of more active verbs. Women who would have excelled in the actual job might never even get an in-person interview because they have been downgraded by the algorithm on frivolous grounds that have to do with maleness, not ability.
A disparate treatment claim would be appropriate when gender-based questions are posed in the video interview, such as asking women about how many children they have, if they plan to have children, if they are married, or about their salary history. As the EEOC makes clear,
Questions about an applicant’s sex . . . , marital status, medical history of pregnancy, future child bearing plans, number and/or ages of children or dependents, provisions for child care, abortions, birth control, ability to reproduce, and name or address of spouse or children are generally viewed as not job-related and problematic under Title VII.
Similarly problematic issues might arise if an example is given in the question, such as asking whether the applicant participated in leadership programs like the Eagle Scouts or the Reserve Officers’ Training Corps (ROTC). Only 22% of ROTC cadets in the Class of 2020 were women, and most female job applicants never had an opportunity to participate in the Boy Scouts of America, since the organization only graduated their first class of female Eagle Scouts in 2021.
Even when an employer does not ask gender-based questions, it is possible that AI can be harnessed to capture physical responses that carry an explicit connection to gender. For example, studies have shown that an estimated 60–70% of women experience shortness of breath during pregnancy. This symptom is linked to a variety of factors, including the development and movement of the fetus and the associated compression of a woman’s diaphragm. If an employer uses facial analysis, or even tracks and transcribes an applicant’s speaking patterns during a one-way video interview, the results may show that the applicant is pregnant based on the pauses or pacing to accommodate extra breaths. And, if the employer uses these findings to decide whether the applicant gets the job, it could likely be seen as an explicit and impermissible classification or differentiation based on gender and childbearing capacity.
If the AI awards a greater number of points to candidates who resemble or speak like men, this would seem analogous to the sexist treatment of Judith Eldred who was criticized as not being aggressive enough to be promoted—a justification that was found by the court to be impermissibly linked to a gender stereotype. And if an employer continues to use the algorithm after it disproportionately favors men, the employer could be found liable for disparate treatment, akin to what happened when an employer continued to use a discriminatory practice in EEOC v. Joe’s Stone Crab, Inc.
b. Disparate Impact
Employers can be liable under the theory of disparate impact when a seemingly neutral policy or practice disadvantages individuals based on their protected class. Video interview analysis might, for example, downgrade female candidates because they use a different style of language than male candidates. As with resume scanning, women may be less likely to use aggressive words like “executed.” If the algorithm favored responses of the applicants who used those words or those who used “I” statements, the applicant could demonstrate that the process disadvantaged female applicants. As shown in the distinction between “report-talk” and “rapport-talk,” women tend to be more generous about giving credit to others, but that does not mean they are worse employees. And, to the extent that one of the justifications for hiring men was that they participated in team sports and would be better team players, women who speak in “we” statements may actually be better suited to contribute to team projects by allocating both responsibility and credit to others.
Joy Buolamwini, a researcher with the MIT Media Lab, has analyzed the risk of training AI with the inputs from an employer’s existing workforce—a risk magnified when using AI that performs voice and facial recognition. As she pointedly asks, “how do we know a qualified candidate whose verbal and nonverbal cues tied to age, gender, sexual orientation or race depart from those of the high performers used to train the algorithm will not be scored lower than a similar candidate who more closely resembles the in-group?” Thus, if the verbal cues and facial expressions of a largely homogenous workforce are used to train the patterns identified by an AI platform measuring enthusiasm for the job, the risk remains that people who do not use the same expressions or verbal cues will be discounted by an algorithm that is trained to search for similarities.
In a variety of cases challenging the hiring tests administered to fire department applicants and police department applicants, women were able to successfully bring disparate impact claims when the selection criteria (such as written tests and strength requirements) disproportionately led to the exclusion of female candidates and could not be shown to be job-related. A woman may be able to succeed with a disparate impact challenge to one-way video interviews because, like the questions used for police officers in Harless v. Duck, they lack a reasonable “degree of correctness” because they were developed using biased training data (i.e., the substantive responses and speaking patterns of men) and there has not been a relationship shown to success on the job. Even where enthusiasm and linguistic analyses claim to be facially neutral selection methods, much like the facially neutral and “blissful[ly] ignoran[t]” design of the boilermaker apprenticeship application in Bailey, “good intent or absence of discriminatory intent” will not suffice as a defense in the face of a disparate impact.
In the case of one-way video interviews, the hype that companies have used to market the technology to employers may come back to haunt them when employers are challenged to show empirically that the technology identifies traits that are actually job-related. If, as one company claimed, the AI can assess 15,000 data points that have to do with appearance, speech, eye contact, facial expressions, and more, it would take a study of tens of thousands or even hundreds of thousands of people to statistically correlate that number of traits with job performance. No studies of that magnitude have been performed. Employers cannot prove that the one-way video interviews have been validated empirically.
C. The Use of Video Games for Pre-Employment Testing
1. The Technological Underpinnings of Video Games in Pre-Employment Testing, Their Current Uses, and Their Gendered Impacts
Developers are marketing video games and companies are employing video games, for use in lieu of traditional hiring tests, to determine a job applicant’s traits and abilities. The developers claim that employers can “replac[e] archaic resumes with behavioral data” and by “captur[ing] thousands of behavioral data points,” their game assessments “build[] a profile of what makes a person and job unique.” Companies also claim they save about $3,000 per applicant if they can reject someone before the interview stage.
General success in video gaming might be viewed by the employer as useful for certain jobs. It might measure the small motor skills needed by a surgeon or a drone pilot. But pre-employment video game screening has been used for positions that are not linked to gaming skills, including investment bankers, entry-level engineers, and project managers, and by companies such as JP Morgan, PwC, Daimler Trucks North America, Royal Bank of Canada, and Kraft Heinz. The video games are created by companies such as Knack and pymetrics to assess applicants’ traits. These video game assessments purportedly collect “thousands of behavioral data points” to analyze thousands of traits at one time, including attention, assertiveness, decision making, effort, emotion, fairness, focus, generosity, learning, and risk tolerance.
Video game assessment companies ask current employees of an organization to play the game with the goal of ranking applicants in terms of the skills currently valued by that employer. The goal is to use machine learning on the video games’ data “to evaluate the cognitive and behavioral characteristics that differentiate a role’s high-performing incumbents to make predictions about job seekers applying to that role.”
When an applicant plays a game, data is collected every millisecond to provide a list of qualities exhibited by the player. This data includes how long a player hesitates to make a decision, where on the screen a player touches, and the moves the player makes. The games vary —one involves shooting water balloons at fast-approaching fire emojis, while another asks the applicant to select which side of the screen shows a larger or smaller proportion of colored dots.
The company Knack offers three primary games—Meta Maze, Dashi Dash (also known as Wasabi Waiter), and Bomba Blitz. Meta Maze has the player arrange shapes from Point A to Point B. Dashi Dash has the player serve food to avatars representing people based on the avatar’s facial expressions. Bomba Blitz has a player save flowers by throwing water balloons at fireballs coming from a volcano. Knack’s founder claims that these games can assess “how you deal with stress, how you collaborate with people, [and] how much you listen.” The company also offers to analyze its data for specific sets of traits. For example, a Knack assessment for “High Potential Leadership Talent” claims to assess the following skills based on game play: self-discipline, solution thinking, relationship building, composure, reading people, critical thinking, striving, and agile leadership. After an applicant completes the series of games, the data collected is analyzed by the developer’s proprietary algorithms, and a profile of the applicant is created. This profile is then used by the company to determine whom to hire.
Game play technology—even if the results are shown to employers without the name or gender of the player listed—does not guarantee a gender-blind process. Men and women play games differently and value different aspects of game play. Any gender differences in game play may reduce a woman’s chance of having her traits match those of current model employees, leading to her being rejected without an interview.
Like the Amazon algorithms, not only can the use of video games discriminate against women, but it might not even lead to the hiring of the best employees. Correlation does not mean causation in terms of previous success, creating a disconnect between what the video game measures and what is important for a job. It is not immediately apparent how an applicant’s game play might affect the way a system’s algorithm scores the applicant. For instance, when we asked law students and their friends to play the Knack games, people who had no useful skills or interest in certain areas were nonetheless told they would make a good investment banker or doctor.
The games are often simplistic and seemingly unrelated to the actual job task, such as the use of Wasabi Waiter (now called Dashi Dash), a video game where the player is a waiter, to analyze how good a surgeon someone will be. In that game, perhaps the player’s ability to ascertain risk is analyzed based on whether a player focuses on serving restaurant customer emojis at risk of becoming dissatisfied, or cuts his or her losses by ignoring the emoji with the lowest level of satisfaction. But there is no empirical basis for believing that those actions assess emotional intelligence or other personality traits and predict job performance in an array of jobs from surgeon to investment banker to McDonald’s worker.
Employers use video games to assess applicants without proof that these technologies provide an adequate assessment of an individual’s capabilities and value. No truly independent research exists to judge the validity of these games because researchers studying the efficacy of the approach had conflicts of interest because they either owned stock in Knack, received fees from Knack to do the research, or, in the case of pymetrics, were asked to perform an analysis by the company and paid $104,465 to do so. Even these studies are deficient because they did not follow up to determine how people chosen by the algorithm actually performed in the job.
2. The Potential Role of Existing Law in Response to Gender Discrimination in Video Games in Pre-Employment Testing
a. Disparate Treatment
Under Title VII, employers are permitted to use pre-employment tests to screen candidates and to assist in making hiring decisions. In the past, employers have used such tests to measure a candidate’s cognitive abilities, physical abilities, personality, or other desired characteristics. However, as the Court explained in Griggs v. Duke Power Co., pre-employment tests, while “obviously . . . useful,” must be evaluated in light of the employment testing procedures developed by the EEOC. The Uniform Guidelines describe the standards such tests should meet. First, there needs to be an assessment of what characteristics are related to success on the job and how to test for those characteristics. Then, there must be a determination that there is “empirical data demonstrating that the selection procedure is predictive of or significantly correlated with important elements of job performance.”
Even if a video game does not ask for information about the sex of the player, a certain style of play may be more associated with being a woman and thus allow the AI (and the employer) to distinguish between women and men. Women typically score higher than men on such tests in the following areas: “agreeableness, openness, extraversion, and warmth.” “[I]f an employer were to manipulate the requirements of the job or otherwise unfairly categorize female applicants based on their [personality test] scores,” then it would be engaging in a disparate treatment violation.
Under the EEOC v. Joe’s Stone Crab, Inc. precedent, a disparate treatment claim might also be brought in a situation where an employer with knowledge that the video game discriminated against women continued to use the game. Ultimately, employers are “unlikely to escape disparate treatment liability if they deploy algorithms that make facially discriminatory classifications.”
b. Disparate Impact
If job applicants are required to play a video game, a disparate impact claim could be brought if significantly fewer women are selected to be interviewed or hired after playing the game, either according to the
four-fifths rule enumerated in the Uniform Guidelines or a standard deviation analysis. A disparate impact claim against the use of video games in pre-employment testing does not require proof of intentional discrimination. Statistical bias can be present in an algorithm due to the way that certain variables can be omitted or downgraded. Or, the algorithms may even be “built using biased, error-ridden, or unrepresentative data” which could also lead to statistical bias. As Professor Pauline T. Kim notes, “data miners implicitly assume that the dataset used to train the model is complete enough and accurate enough to identify meaningful patterns among applicants or employees.” But by using data from a male-skewed workforce, the algorithm will likely privilege male traits.
A disparate impact analysis of the video gaming algorithms in hiring will likely rely on precedents about testing for mental and physical abilities. Video game AI analyzes data about the applicants’ video game-playing style such as the order in which tasks are undertaken, where a person clicks on the page, and how the person reads the emotions of an avatar. If these analyses lead to significantly more men than women being hired, it is unlikely that an employer could prove these “were necessary for effective, efficient, or safe job performance.” While success at video games might be related to the skills needed to be a drone pilot, it would be hard to prove it is related to other jobs, such as being a store manager. An audit performed by one of the enterprises that markets video games for hiring conceded that there is no independent research to suggest that the company’s tests actually measure the skills correlated with job performance. Even if an employer could prove a relationship between a video game involving water balloons and a particular job, such as being a manager, the plaintiffs could still prevail by identifying an alternative screening practice that does not result in a disparate impact and is as effective in meeting the employer’s business needs.
D. Revising the Algorithm
If a developer realizes that its AI hiring algorithm is disfavoring women because it was trained on a mainly male workforce or because women behave differently in the eyes of the algorithm, the developer or an employer using the algorithm might attempt to “correct” the bias after the fact. For example, if women use “we” and men use “I” on a resume or in a one-way video interview, additional points could be added, after the fact, to individuals who use “we.” But tweaking the results after the fact to favor women can itself run afoul of Title VII.
Various entities have attempted to undo the gender bias in their hiring and recruitment algorithms. LinkedIn’s algorithms recommended different jobs based on a person’s gender (even when gender was not specified on a resume) because the algorithm analyzed the behavior of each applicant. Women, LinkedIn found, were less likely to apply for jobs that required work experience beyond their qualifications than men. Because of this gender difference, the job recommendations tended to disadvantage women. LinkedIn added a correction, explaining that “before referring the matches curated by the original [i.e. the one that can discern gender through behavior] engine, the recommendation system includes a representative distribution of users across gender.” Using an alternative approach, ZipRecruiter attempted to correct for gender bias in the algorithm on its platform by eliminating or changing words on a resume, such as waitress, that are associated with women.
These after-the-fact attempts to balance gender are analogous to the situation in Ricci v. DeStefano, where the city of New Haven, Connecticut decided not to certify the results of an examination administered for promotions within the City’s fire department because the test disadvantaged minority candidates. The examination results showed that white candidates outperformed minority candidates, and, concerned about the possibility of a disparate impact lawsuit, the City threw out the results of the examination.
Subsequently, white and Hispanic firefighters—who likely would have been promoted based on their test performance—sued the City. The plaintiffs alleged that the City’s refusal to certify the test results constituted disparate treatment discrimination in violation of Title VII. The Supreme Court held that despite the City’s “well intentioned” and “benevolent” objective, “the City made its employment decision because of race” which amounted to disparate treatment. Similarly, a well-intentioned effort to correct for an inherent gender bias in a hiring algorithm might also be vulnerable to a challenge from men alleging disparate treatment under Title VII.
IV. Policy Approaches to Combatting AI Gender Discrimination in Employment
When information collected in the hiring process poses risks of discrimination or privacy risks, or when any type of technology creates a potential risk to individuals or groups, there are three possible legislative approaches to regulating the practice or the technology. The employer could be required to disclose information about the practice or technology, could be prohibited from discriminating based on the information gleaned through that practice or technology, or could be banned from using that practice or technology. All three approaches are present in current employment law and in lawmakers’ attempts to regulate the use of technology in the employment sphere.
A. A Disclosure Policy Approach
A person who submits a resume or who undergoes a one-way video interview may have no idea that these items will be screened by AI rather than by a human. As a result, if a woman is not offered a job after applying, she may think the chosen candidate had better credentials and not think to inquire about whether she was a victim of biased AI. Under a policy of disclosure, an employer is permitted to use a technology or collect certain information but must disclose to candidates what technology the employer is using.
In the first AI interviewing legislation in the nation, Illinois in 2019 enacted the Artificial Intelligence Video Interview Act. The Act requires an employer to obtain the applicant’s consent before conducting AI analysis of a video interview. Additionally, any employer using AI in that situation must “[p]rovide each applicant with information before the interview explaining how the artificial intelligence works and what general types of characteristics it uses to evaluate applicants,” as well as maintain the confidentiality of any information shared by the applicant, and agree to destroy all copies of the interview within thirty days of the applicant requesting such action.
A disclosure approach to the other hiring technologies described in this Article would similarly require advance disclosure of and require that consent be sought for a hiring process that uses AI assistance and machine learning. By disclosing how the process works, job applicants will become aware that the technology is developed through machine learning with mostly male employees. This could lead to pressure on employers not to use these biased tools.
B. An Anti-Discrimination Policy Approach
Disclosure to job applicants about a practice or technology may be of limited use unless the legislation also prohibits using the information collected in a discriminatory way. Disclosure alone means little if the only option for the applicant on learning that AI is being used is to seek a different job. At the very least, the disclosure approach should be coupled with a ban on the use of the information collected by the AI in a discriminatory way.
Prohibitions on discrimination are at the heart of Title VII, which prohibits employers from “fail[ing] or refus[ing] to hire . . . any individual . . . because of such individual’s race, color, religion, sex, or national origin.” EEOC guidelines and opinions drill down into what behaviors are prohibited. For example, the EEOC has issued agency guidance explaining that an applicant’s salary history, by itself, cannot “justify a compensation disparity” between men and women—an important provision to attempt to stop the practice of underpaying women relative to men. “Women job applicants, especially women of color, are likely to have lower prior salaries than their male counterparts.” “In 2020, women earned 84% of what men earned, according to a Pew Research Center analysis of median hourly earnings of both full- and part-time workers.” And because of the pervasiveness of the gender pay gap, “employers who rely on salary history to select job applicants and to set new hires’ pay will tend to perpetuate gender- and race-based disparities in their workforce.”
In an effort to mitigate the perpetuation of this gender disparity, the EEOC has issued agency guidance explaining that an applicant’s salary history, by itself, cannot “justify a compensation disparity” between men and women. Rather, “permitting prior salary alone as a justification for a compensation disparity ‘would swallow up the rule and inequality in [compensation] among genders would be perpetuated.’”
An anti-discrimination approach to AI-assisted hiring technologies would allow their use only if the employer could prove in advance that technologies would not create any built-in headwinds for women by institutionalizing male norms (for example, of speech, education, looks, or experiences).
C. Banning a Practice or Technology
In some cases, however, nothing short of a ban may work to achieve gender parity. This is especially true in the case of algorithms created through machine learning, where an employer may not even realize the machine has modified the algorithm to include discriminatory variables. Bans are not uncommon in employment law. Bans on certain hiring practices or hiring-related technologies are used to avoid discrimination, to protect privacy, and to avoid the use of technologies that do not function properly.
Employers are banned, for example, from using lie detectors tests in hiring. The reasons for the ban are similar to the reasons we might consider banning certain uses of AI in hiring. Lie detector tests are prohibited because they do not adequately predict a potential employee’s future behavior on the job. In fact, the Senate Committee on Labor and Human Resources found that “many employers and polygraph examiners abuse and manipulate the [polygraph] examination process, and frequently use inaccurate or unfounded results to justify employment decisions which otherwise would be suspect.” Approximately 400,000 “honest workers” had been inaccurately labeled as deceptive by polygraphs and thus faced adverse employment consequences.
Employment laws also commonly ban the collection of certain information or the use of a particular technology to collect certain information. The logic behind such laws is that a ban on discriminatory uses of such information is not sufficient because it is difficult for a person denied a job to prove she was not chosen (or was offered a lower salary) based on that information or for some other reason. An employer might indeed be discriminating but the job applicant may have no way of knowing it or proving it if the employer is allowed to collect the information in the first place. As opposed to the federal guideline telling employers not to discriminate based on a woman’s past salary, many state laws prohibit the employer from collecting that information at all. The City of Philadelphia, after learning about the gender wage gap between men and women in the city, issued an ordinance that makes it unlawful for an employer “[t]o inquire about a prospective employee’s wage history, require disclosure of wage history, or condition employment or consideration for an interview or employment on disclosure of wage history, or retaliate against a prospective employee for failing to comply with any wage history inquiry.” In 2020, the Third Circuit determined that the ordinance did not violate employers’ First Amendment right to free speech. Twenty-three other states and municipalities similarly enacted bans on employers asking people for past salary history information.
There are other prominent bans on employers obtaining certain information because it might facilitate discrimination or invade privacy. Some states ban employers from asking for job applicants’ social media passwords to get at private information about the employee. And courts have prohibited the use of certain screening tests once in common use (such as the Minnesota Multiphasic Personality Inventory (MMPI)) because they can generate information about a person’s health condition in violation of the federal Americans with Disabilities Act. Sometimes, the bans focus on technologies that elicit certain information that can lead to employment discrimination. The federal Genetic Information Nondiscrimination Act of 2008, for example, prohibits employers from requiring job applicants to undergo predictive genetic tests that indicate that they have a predisposition to later develop a genetic disease.
D. Developing a Policy Response to AI-Assisted Hiring Technologies
AI-assisted hiring raises many of the problems that have led to bans in the past. Like the use of polygraphs, there is no proof that AI-assisted hiring correctly measures the traits that make a good employee. Like the MMPI, one-way video interviewing and video games can identify medical and psychiatric conditions.
Because AI hiring technologies discriminate and may not even identify qualified applicants, there is a sufficient rationale for a ban on their use. In a complaint filed with the Federal Trade Commission (FTC), the Electronic Privacy Information Center (EPIC) provided the policy rationale for a ban. EPIC argued that HireVue, a one-way video interview platform, “lack[ed] a ‘reasonable basis’ to support the claims” that HireVue’s “video-based algorithmic assessments ‘provide[] excellent insight into attributes like social intelligence (interpersonal skills), communication skills, personality traits, and overall job aptitude.’” Specifically, EPIC argued that the use of such technology was “unfair” and “deceptive” within the meaning of the FTC Act, and, moreover, that the use of AI can result in gender, racial, and neurological bias. As an unfair trade practice, EPIC noted, the tool “causes or is likely to cause substantial injury to consumers which is not reasonably avoidable by consumers themselves and not outweighed by countervailing benefits to consumers or to competition.” Before the FTC could act on EPIC’s complaint, however, HireVue issued a statement that it would discontinue the use of facial analysis in its screening technology.
Given the limits of AI-assisted hiring technologies, a ban is an appropriate approach and would avoid the need to challenge the practices one by one in front of the FTC. Even short of a total ban, it would be useful to limit the situations in which AI-assisted hiring practices were permissible. If a ban cannot be achieved, we should adopt guidelines to help ensure appropriate gender representation—such as not being able to create or refine the algorithm on current employees if the representation of women among leaders of the company does not meet a four-fifths standard. This would serve to prohibit the use of AI-assisted hiring in many well-known tech companies and Fortune 500 corporations that are led primarily by men.
We could also require that, for any AI-assisted hiring, the algorithm be shown as valid in advance for the type of job at issue before it is applied. Along those lines, Congress or state legislatures could codify, with stiff penalties, the Uniform Guidelines approach that before using a selection tool for hiring, an employer should perform a job analysis to determine which measures of work behaviors or performance are relevant to the job or group of jobs in question. Then the employer must assess whether there is “empirical data demonstrating that the selection procedure is predictive of or significantly correlated with important elements of job performance.”
Conclusion
The quest for fairness in hiring practices is not just about preventing discrimination. Gender diversity is also a driver of innovation and a stronger economy. A host of studies show that diverse teams make better decisions. Men working with other men tend to agree with each other. Adding women to the groups makes men prepare better and anticipate alternative arguments. As a result, mixed groups create more innovative solutions. Gender diversity can also help the bottom line. When business school professors assessed the companies that make up the Standard & Poor’s 1500, they found that having female representation in top management correlated to a $42 million increase in firm value.
The use of resume scanning, one-way video interviews, and video games to screen applicants stifles diversity and creates for female applicants the sort of “headwinds” which have been viewed by the U.S. Supreme Court as impermissible under Title VII of the Equal Employment Opportunities Act. The developer of a social bookmarking site called Pinboard, Maciej Cegłowski, referred to the phenomenon more bluntly, “call[ing] machine learning ‘money laundering for bias.’ . . . ‘[A] clean, mathematical apparatus that gives the status quo the aura of logical inevitability.’”
As with Title VII itself, our policy recommendations are not designed to give women an unfair advantage. They are instead an attempt to level the playing field so that women are not discriminated against by AI in ways that perpetuate existing bias. In that sense, we are asking no more than Ruth Bader Ginsburg asked of the Supreme Court at oral argument in Frontiero v. Richardson when she quoted the words of 19th century abolitionist and feminist Sarah Grimké: “I ask no favor for my sex. All I ask of our brethren is that they take their feet off our necks.” And their biased AI out of our job prospects.
| 2023-03-20T00:00:00 |
https://cardozolawreview.com/automating-discrimination-ai-hiring-practices-and-gender-inequality/
|
[
{
"date": "2023/03/20",
"position": 74,
"query": "artificial intelligence employment"
},
{
"date": "2023/03/20",
"position": 49,
"query": "job automation statistics"
},
{
"date": "2023/03/20",
"position": 40,
"query": "AI job creation vs elimination"
},
{
"date": "2023/03/20",
"position": 74,
"query": "AI skills gap"
},
{
"date": "2023/03/20",
"position": 74,
"query": "AI labor market trends"
},
{
"date": "2023/03/20",
"position": 95,
"query": "government AI workforce policy"
},
{
"date": "2023/03/20",
"position": 51,
"query": "machine learning workforce"
},
{
"date": "2023/03/20",
"position": 99,
"query": "artificial intelligence workers"
},
{
"date": "2023/03/20",
"position": 11,
"query": "artificial intelligence hiring"
}
] |
|
The Job Outlook for CNC Lath & Mill Machinists
|
The Job Outlook for CNC Lath & Mill Machinists: What You Need to Know
|
https://totalaviationstaffing.com
|
[] |
... Labor Statistics projecting a 3% job growth rate from 2020 to 2030. The ... labor to automated systems, creating new job opportunities for CNC machinists.
|
The Job Outlook for CNC Lath & Mill Machinists: What You Need to Know
If you’re interested in pursuing a career in the manufacturing industry as a CNC lath and mill machinist, now might be the perfect time to do so. The job outlook for CNC machinists is optimistic, thanks to the increasing demand for precision parts and the rise of automation.
However, it’s crucial to have a comprehensive understanding of the industry’s qualifications, skills, and opportunities to succeed in this career path. We’ll delve into the job outlook for CNC lath and mill machinists and the necessary steps to become one.
What Does the Job Outlook Look Like?
Computer Numerical Control (CNC) lathes and mills are used to cut and shape a variety of materials, from metals to plastics, to create a wide range of products. As manufacturing and engineering continue to evolve and grow, so does the demand for skilled CNC machinists who can operate and program these complex machines.
Job Outlook for CNC Lath & Mill Machinists:
The job outlook for CNC lath and mill machinists is positive, with the U.S. Bureau of Labor Statistics projecting a 3% job growth rate from 2020 to 2030. The demand for skilled CNC machinists is driven by the growing need for precision manufacturing across various industries, including aerospace, automotive, medical devices, and electronics. The use of CNC machines has increased productivity and accuracy in manufacturing, leading to a shift from manual labor to automated systems, creating new job opportunities for CNC machinists. The adoption of advanced technologies, such as robotics and artificial intelligence, has created a need for machinists who can work with these systems, expanding the scope of CNC machining jobs. The outlook for CNC machinists is also influenced by the location of the job, with states like California, Texas, and Michigan having the highest employment rates for machinists.
This job is promising, with new opportunities emerging as technology continues to advance.
If you have an interest in precision manufacturing and enjoy working with complex machinery, a career as a CNC machinist could be a great fit. With the right training and skills, you can become a sought-after professional in a growing industry.
What Does the Job Outlook Look Like?
Explore Opportunities in the Field
If you’re interested in pursuing a career as a CNC machinist, there are plenty of opportunities available in this growing field. CNC lathes and mills are used in a wide range of industries, and demand for skilled machinists is only increasing as technology advances.
Other opportunities available in the field:
CNC machinists can work in a variety of industries, including aerospace, automotive, medical devices, electronics, and more.
There are several different types of jobs available in the field, including CNC programmer, machinist, operator, and technician.
Depending on the job, you may work with different types of machines, such as CNC lathes, mills, routers, or grinders.
CNC machinists can also specialize in certain areas, such as programming or setup, or focus on a particular material, such as metal or plastic.
Advancements in technology are creating new opportunities in the field, such as working with 3D printing or robotics systems.
With the right training and skills, CNC machinists can advance in their careers, becoming team leaders or supervisors, or even starting their own businesses.
As you can see, there are many opportunities available in the field of CNC machining, with a variety of jobs and industries to choose from. Whether you’re interested in aerospace, medical devices, or another field, there is likely a need for skilled CNC machinists. With the right training and experience, you can build a rewarding career in this growing industry.
Explore Opportunities in the Field
What Qualifications Do You Need to Become a CNC Lath & Mill Machinist?
CNC lath and mill machinists are skilled workers who operate computer numerical control (CNC) machines to produce precision parts for various industries such as aviation, automotive, and manufacturing. As the demand for these parts continues to grow, so does the need for qualified CNC machinists.
Here are some of the qualifications that employers may look for:
High school diploma or equivalent
Vocational or technical training in machining, CNC programming, or related fields
Knowledge of CNC machine operation, programming, and maintenance
Proficiency in reading and interpreting blueprints and technical drawings
Understanding of machine shop safety protocols and procedures
Good mechanical aptitude and problem-solving skills
Attention to detail and ability to work with precision measuring instruments such as micrometers and calipers
To become a CNC lath and mill machinist, you need education, training, and experience. A strong foundation in machining principles and adaptability to new technologies can lead to success in this dynamic field. As the demand for precision parts grows, CNC machinists will remain in high demand, making it an excellent career choice for those passionate about machining and manufacturing.
What Qualifications Do You Need to Become a CNC Lath & Mill Machinist?
The Future of CNC Machining: What Lies Ahead for the Industry
The CNC machining industry is rapidly evolving with the introduction of new technologies and automation. As a result, the future of CNC machining is expected to be exciting and full of opportunities.
Some of the trends that are shaping the future of the industry include the adoption of Industry 4.0 technologies, such as the Internet of Things (IoT), robotics, and artificial intelligence. The increasing use of 3D printing and additive manufacturing is also changing the way products are designed and produced.
As CNC machines become more sophisticated, there will be a growing demand for skilled operators who can program, operate and maintain these machines.
The future looks bright for those pursuing a career in CNC machining, as the industry is expected to continue to grow and evolve in the coming years.
The Future of CNC Machining: What Lies Ahead for the Industry
Final Thoughts
As the manufacturing industry continues to evolve and embrace new technologies, CNC Lath and Mill Machinists will play an integral role in creating high-quality, precise parts. By staying up-to-date on the latest trends, technologies, and techniques, CNC machinists can position themselves for success in this dynamic field.
With a solid foundation in machining principles, the ability to adapt to new technologies and processes, and a passion for precision work, CNC lath and mill machinists can enjoy a rewarding and fulfilling career in the manufacturing industry.
Are you ready to take the next step in your career as a CNC lath and mill machinist? With Total Aviation Staffing, you can find the perfect role for you with top MROs, OEMs, airlines, aerospace, and charter companies. We’ll help take your career forward by connecting you with the most sought-after aviation and aerospace companies while providing you with resume assistance, job search support, and more. Contact us today and let’s start building your dream full-time career in the sky!
| 2023-03-20T00:00:00 |
https://totalaviationstaffing.com/the-job-outlook-for-cnc-lath-mill-machinists-what-you-need-to-know/
|
[
{
"date": "2023/03/20",
"position": 61,
"query": "job automation statistics"
}
] |
|
AI will eliminate 'a lot of' current jobs, says ChatGPT creator
|
AI will eliminate 'a lot of' current jobs, says ChatGPT creator
|
https://www.techcircle.in
|
[] |
Sam Altman, co-founder, and CEO of OpenAI, the startup that created ChatGPT and DALL-E, is worried that Artificial Intelligence (AI) based products such as ...
|
Sam Altman, co-founder, and CEO of OpenAI, the startup that created ChatGPT and DALL-E, is worried that Artificial Intelligence (AI) based products such as ChatGPT are going to eliminate a lot of the current jobs.
“Properly done it is going to eliminate a lot of current jobs. That is true,” Altman said in an interview with ABC News.
“Talking about downsides and trying to avoid those while we push in the direction of the upside is important. We will need ways to figure out ways to slow down the technology over time,” added Altman.
Loss of many of the existing job roles due to the growing adoption of AI in core business operations is one of the top concerns of working professionals across industries.
According to a Mckinsey & Company report, more than 100 million workers may have to switch occupations by 2023 due to the faster adoption of automation and AI, which was accelerated by the Covid-19 pandemic.
ChatGPT has emerged as one of the most sought-after AI products to date. Within two months of its launch, ChatGPT is believed to have acquired 100 million monthly active users, making it the fastest-growing application in history ever, according to a UBS study, published in February.
Its underlying technology GPT-3.5 and the upgraded version GPT-4, which was released last week, has generated a lot of interest from enterprises across the world including in India. Several firms are working on building chatbots on it using open APIs.
Despite fears of job losses, ChatGPT and GPT are expected to create a lot of new roles. Many startups are looking to hire prompt engineers to write nuanced natural language prompts and test the effectiveness of generative AI models.
US-based AI startup Anthropic is offering a salary of $175,000 - $335,000 annually to hire prompt engineers.
| 2023-03-20T00:00:00 |
2023/03/20
|
https://www.techcircle.in/2023/03/20/ai-will-eliminate-a-lot-of-current-jobs-says-chatgpt-creator/
|
[
{
"date": "2023/03/20",
"position": 44,
"query": "AI job creation vs elimination"
}
] |
How Can Your LMS Help Bridge the Skills Gap?
|
How Can Your LMS Help Bridge the Skills Gap?
|
https://talentculture.com
|
[
"Kishor Amberkar"
] |
But the good news is that these pressures are causing employers to look within their organizations to bridge this skills gap. As a result, we're seeing ...
|
Sponsored by Learnsoft
The Skills Gap is Growing. So is Pressure on L&D
Demand for skilled employees seems limitless. Modern technology and automation are displacing workers in all industries, even while creating new jobs that need to be filled. Baby Boomers are rapidly retiring, but entry-level people from younger generations haven’t yet developed enough expertise to take on these positions. And competition for skilled professionals in technology, healthcare and other specialties remains fierce.
Throughout the pandemic, HR departments felt pressure to deliver a high-performing workforce. Unfortunately, that pressure isn’t likely to ease any time soon. In fact, by 2030, talent shortages in the U.S. alone are expected to result in $162 billion in unrealized revenue.
If these trends give you heart palpitations, I apologize. But the good news is that these pressures are causing employers to look within their organizations to bridge this skills gap. As a result, we’re seeing increased investment in upskilling and reskilling of current employees. Even so, L&D programs are not as efficient as HR and business leaders want them to be.
In part, this is because organizations are not leveraging available learning tools and resources to their full capacity. If you see this happening in your organization, how can you improve?
Let’s take a closer look at the primary types of skills gaps and how organizations are responding. Then, I’ll explain how a learning management system (LMS) can go beyond simply delivering training content to help your business address critical skills challenges.
3 Kinds of Skills Gaps: What Are They?
“Skills gap” is generally used as a catch-all phrase for whatever is amiss in the employee/employer productivity relationship. But actually, there are three gaps to consider:
1. Skill Gap
Unlike the broader term, this specifically refers to intellectual or functional gaps in a person’s ability to perform a particular job effectively. For example, in healthcare this can be demonstrated by a lack of certification required to provide patient care. Or in construction, skilled laborers may need to develop proficiency with new equipment before they can use it at a job site. This differs from a knowledge gap.
2. Knowledge Gap
When employees do not know relevant information about their job or how their role fits into their department or organization, this is a knowledge gap. It can surface during onboarding – but can persist throughout an employee’s tenure. This is why hiring managers need to understand a new employee’s industry and job-specific knowledge, and then provide resources to bring that individual up-to-par as soon as possible.
3. Performance Gap
To perform well in a role, skills and knowledge are essential. However, motivation and commitment are just as important. This brings us to the performance gap – which is the disparity between an organization’s goals and an individual’s performance. This can be measured by a lack of engagement, low productivity levels, poor quality output, and other relevant metrics. These gaps can be especially detrimental, because they tend to expand over time when organizations lack tools to accurately measure key performance factors.
How Employers Are Addressing Skill Gaps
The most efficient way to accurately measure skills in an organization is with an appropriate skills management tool. For example, almost all large companies (98%, according to Training Magazine), use an LMS to manage and deliver e-learning courses and training programs.
The most-used function of an LMS is the ability to track training completions and course certifications within the learning platform. This solves some of the basic skills problems organizations face. However, the missing piece in many LMS platforms is a comprehensive and intuitive reporting capability.
For years, organizations in many industries tracked individual skills and knowledge through manual processes. In some industries, this is still managed manually.
That’s right. In 2023, organizations continue to struggle with automating and streamlining data management and reporting. Even when training is conducted online through an e-learning platform, the data is not easily transferred between applications.
I’ve worked with organizations where employees complete training online or in-person, and then a data entry specialist spends time manually extracting the completion data and copying it into an excel file. Next, they manually import the information into another HR application. This process is time consuming, inefficient and leaves room for error. But fortunately, there are better ways to manage this data-intensive business process.
An LMS Can Do More Than Deliver Content
1. Leverage Integrations
To truly maximize the benefits of an LMS, you need to integrate it with other enterprise applications and tools. By integrating your LMS with your HR ecosystem, you can streamline and automate your training processes, reduce administrative burdens, and enhance the user experience.
Your organization can track and manage L&D goals across the entire company using a single login system that connects an end user to any application within the LMS system. Users don’t need multiple logins to access the intranet, the compliance training portal, benefits and payroll, professional development courses, and so on. Instead, they’re all housed in one system – and those systems talk to each other so they can verify transferred data.
Here’s the benefit from a skills gap perspective: Because these applications work together within the HR ecosystem, you can easily identify employee reskilling and upskilling needs.
2. Support Employee Career Advancement
Understanding employee competency is essential to optimize the talent available in your workforce. This is why an LMS platform’s reporting function is just as important as its content delivery function. Job turnover is bound to happen, but how can an LMS help you more rapidly fill unexpected job openings?
L&D can quickly turn to a comprehensive reporting dashboard that identifies team members who are compliant and certified to fill a role. Intuitive reporting can make it easy to identify these qualified employees, regardless of their team or location. You can also leverage reporting to pinpoint existing skill deficits and make data-driven employee development decisions.
3. Establish Clear Paths to Success
The most important step in closing any skills gap is offering individuals opportunities to upskill through learning experiences and resources that expand their professional knowledge. Research indicates that employees agree. In fact, according to SHRM, 76% of employees are more inclined to stay at a company where continuous learning is available.
This is the strong suit of a modern LMS. It can help L&D teams work with managers to define skills benchmarks, build assessments that identify skills gaps, and determine how development can close those gaps.
You can outline specific courses employees must complete to move up in rank. Then you can communicate about these career growth opportunities and the path forward.
4. Meet Employees on Their Learning Terms
The keyword here is learning. There are many ways to distribute information. But you need to ensure that employees don’t just “acknowledge” that information. The goal is to absorb it, understand it and retain it.
A lack of learning engagement doesn’t benefit employees, and it can even put your organization at risk. For example, Corporate Compliance Insights found that 49% of survey respondents skipped or did not thoroughly listen to mandated compliance training. Imagine almost half of your workforce admitting they don’t pay attention to required learning! Sadly, this is a reality.
How can you avoid passive learning and drive engagement? Whatever content you create, it’s important to bring training directly to individuals and make sure the experience is as accessible, useful and relevant as possible.
Be sure people have access to personalized training that best suits their needs. In some scenarios, this means face-to-face virtual training. In others, it means microlearning modules people can knock-out in 5 or 10 minutes.
Engaged learners make empowered workers. It is important to remember that people are lifelong learners. Employees need to train, retain, and show competency in their roles. This doesn’t stop when they clock-in for work. A flexible LMS can help employees train at workstation or remotely on a laptop or phone. And it should support personalized learning paths that help tailor learning to individual interests and goals.
Your Organization Has Changed. Has Your LMS?
Addressing the skills gap means prioritizing your employees by making learning accessible, personalized and engaging. Most LMS providers require organizations to enter a multi-year contract – some up to 10 years. That’s a long time to use a platform if it doesn’t meet all your needs.
Is your LMS keeping pace with the needs of your workforce or your business? Consider these criteria of an effective LMS platform:
SaaS-based solution with flexibility to address diverse, changing needs
Integrates seamlessly with your HR ecosystem
A user experience that is easy for learners, instructors and administrators
Functionality that accommodates individual learning schedules and needs
Supports various content types to drive learning engagement
Streamlines upskilling/reskilling/cross-training efforts
Enables self-directed learning paths with recommendations based on job position, requirements, skills, competencies, and performance.
| 2023-03-20T00:00:00 |
2023/03/20
|
https://talentculture.com/blog/how-can-your-lms-help-bridge-the-skills-gap/
|
[
{
"date": "2023/03/20",
"position": 49,
"query": "AI skills gap"
}
] |
A New Model to Leverage the Impact of ChatGPT and AI
|
A New Model to Leverage the Impact of ChatGPT and AI
|
https://kenaninstitute.unc.edu
|
[] |
Skills Gap · American Growth Project · About · Centers and Initiatives · Kenan ... Task Force 5 is responsible for understanding the business model impacts of AI ...
|
Since its release in November, interest in and use of ChatGPT has exploded (100 million users and counting). With its arrival and the appearance of other generative artificial intelligence tools, we now have accessible technologies that can turn text into images, videos or music – and everyone is paying attention. As these tools embed themselves into everyday society, all of us are trying to understand the short- and long-term impact of generative AI. Amid this sea change, business leaders are asking themselves questions such as:
How can I use these tools to improve employee productivity and creativity now?
How can I use these tools to transform my organization long term?
How are my customers using these technologies now and what does that mean for my firm?
How will these technologies change my business model?
What are the ethical and legal uses of these technologies?
A powerful way to organize the questions above is to put them in two categories. The first of these is Applications. Questions of this sort generally seek to uncover how one can use these technologies to better achieve a given mission and goals. The second category is Implications, which take a broader view to look at how these technologies will change the organization, its environment and the world. Put more simply, Applications look at how AI can be used; Implications examine what effect these technologies will have on us.
We can put the two together into what I am calling the Technology Applications and Implications Model (TAIM, pronounced “tame”). Combining the above categories with a division into the short term and long term produces a 2×2 matrix as seen below.[1]
One can then take the questions the company faces and apply them to the model, then proceed to create internal task forces to address them, as seen below.
Organizing Internal Task Forces to Define a Company’s AI Approach
As we can see, the TAIM matrix produces five task forces, each of which would require a different set of skills and participants. Below is one way of defining these task forces in the context of artificial intelligence.
Task Force 1 is focused on increasing employee productivity and creativity. This task force needs to be cross-functional to ensure an understanding of how each function (e.g., human resources, accounting, etc.) might use AI in its domain. It also must be composed of people who are AI-fluent, are using the tools already and are always exploring new ways to apply the technology.[2]
Task Force 2’s mission is understanding customer use of AI. Therefore, this task force must have market-facing employees. For example, it should include sales leaders and technologists who talk to leading-edge customers. It should also consist of personnel who have the skills to take that information and understand the consequences of it by using tools such as the Implication Wheel.
The Implication Wheel puts an item (e.g., generative AI) in the center and then examines the first-, second- and third-order effects of that item on, in this case, the firm. With this new knowledge, the team can then factor in the learnings to bring forward recommendations to the company.
Figure 1: Example of an Implication Wheel
Task Force 3 is focused on ethical and legal use of AI. This task force may need to bring in outside ethicists to work with members who have company and industry domain knowledge to develop its findings. It will also require legal counsel. Both the legal and ethical uses of generative AI are being debated and defined right now – for example, the U.S. Patent Office said images generated from text using generative AI do not qualify for copyright protection – and companies must prepare to align with the changes that will surely come.
Task Force 4 is charged with building an AI transformation strategy. It should be composed of staff members who are involved in digital transformation as well as representatives from each function who are digitally literate and have a deep grasp of industry trends.
Task Force 5 is responsible for understanding the business model impacts of AI (and therefore industry impacts of AI as well). This task force will require people who have both an extensive knowledge of the industry (e.g., marketing personnel) and the firm’s business model (financial staff) as well as organization strategists and open thinkers who possess futurist skills (either sourced internally or externally). As a side note, there may be some overlap of personnel between Task Force 4 and Task Force 5, and the two groups must be in constant contact.
Leadership Required for Successful Implementation
This task force structure must be overseen by an overall meta-task force leadership team. This team would be responsible for ensuring each task force stays on track, providing resources to each task force so it can accomplish its mission and convening regular interlocks of taskforce leaders to ensure alignment and cross-pollination of ideas among the different units. The meta-task force leadership team would comprise senior leaders, the taskforce leaders themselves and staff who have the project management skills to drive progress and results.
Here at UNC, we have proposed to use the TAIM methodology complete with cross-discipline task forces to navigate and leverage AI within our university and our ecosystem. This approach will have to be broad, as we consider the applications and implications of AI not only on our students, faculty and staff but society and humanity as well. Higher education is a field that is grappling with the AI challenge on multiple levels, from how students are allowed to ethically use generative AI to how AI will change the industry to what AI can tell us about being human.
In sum, the TAIM task force approach is an effective and efficient means for an organization to navigate any important technological change and respond proactively to carry out its mission and achieve competitive advantage. Perhaps your company would benefit from piloting it to address the significant change heralded by AI.
[1] This approach can also be used for any new technology, trend or change that one wants to systematically analyze.
[2] For more examples of creative ways people are using ChatGPT, I recommend the work of Ethan Mollick.
| 2023-03-20T00:00:00 |
https://kenaninstitute.unc.edu/commentary/a-new-model-to-leverage-the-impact-of-chatgpt-and-ai/
|
[
{
"date": "2023/03/20",
"position": 95,
"query": "AI skills gap"
}
] |
|
5 Macrotrends Shaping The Future of Work
|
The Future of Work: 5 Macrotrends
|
https://romebusinessschool.com
|
[] |
Automation and AI have already had a significant impact on the workforce. Robots and automated systems are increasingly taking on tasks that were previously ...
|
The future of work is rapidly evolving, and businesses are being forced to adapt to keep up with the changing landscape. In this article we’ll explore the challenges and opportunities that lie ahead.
Managing Remote Teams
The COVID-19 pandemic has accelerated many trends that were already reshaping the workplace, with remote work becoming the new norm and digital transformation becoming increasingly important for businesses. As a result, one key challenge facing businesses is how to effectively manage remote teams, ensuring effective communication and collaboration among team members.
Effective remote team management requires the implementation of solutions that foster communication, collaboration, and engagement among team members. Some of these solutions include investing in video conferencing and project management software, providing regular feedback and performance evaluations, establishing clear expectations and goals, and promoting a culture of transparency and open communication. Additionally, managerial soft skills such as empathy, active listening, and flexibility are crucial for building strong relationships and trust with remote team members. Good practices such as regular check-ins, virtual team building activities, and opportunities for team members to share their ideas and opinions can also help foster a sense of community and belonging among remote team members. By implementing these solutions and practices, managers can ensure that their remote teams remain productive, engaged, and motivated in a virtual work environment.
The Importance of Data-Analytics
Another trend that is shaping the future of work is the increasing importance of data analytics. With businesses becoming more reliant on digital technologies, they are generating vast amounts of data that can be used to drive insights and inform decision-making. Therefore, developing data analytics capabilities is crucial for businesses to stay competitive in the marketplace.
Data analytics can be a powerful tool for businesses, providing insights into customer behavior, market trends, and internal operations. To make good use of data, businesses must invest in technologies and tools that enable them to gather and analyze data effectively. However, it is equally important for businesses to train their employees to interpret and use data effectively. This includes providing training on data analysis tools and techniques, as well as fostering a culture of data-driven decision-making.
The Rise of Artificial Intelligence
The rise of artificial intelligence and automation is also transforming the workplace. While these technologies offer many benefits, they also present challenges, such as the potential for job displacement. To tackle this challenge, businesses must find ways to upskill and reskill their employees to ensure that they remain relevant in a rapidly changing job market.
Automation and AI have already had a significant impact on the workforce. Robots and automated systems are increasingly taking on tasks that were previously performed by humans. This trend is set to continue, and experts predict that up to 40% of jobs could be automated in the next 15 years.
For companies, this presents both challenges and opportunities. On the one hand, automation and AI can help increase efficiency, reduce costs, and improve the quality of goods and services. On the other hand, they can lead to job losses and displace workers.
To prepare for this future, companies must take a proactive approach to workforce planning. This means identifying the skills and competencies that will be required in the future and developing strategies to acquire and develop these skills within the existing workforce. Companies can provide employees with the skills they need to work alongside automated systems and robots, rather than being replaced by them. This might involve upskilling employees in areas such as data analysis, programming, or digital marketing.
Corporate Social Responsibility
Corporate social responsibility is another trend that is becoming increasingly important for businesses. Consumers are demanding that companies act in a socially responsible manner, and this trend is only likely to continue. As a result, businesses must find ways to integrate sustainability into their operations and be transparent about their social and environmental impact.
Rome Business School Research Center analysed this topic in the Fashion industry within the Research New Fashion Trends in Italy and within the Research Sustainability and Corporate Social Responsibility in Italy
Diversity and Inclusion
Finally, the workplace of the future will be more diverse and inclusive. With businesses becoming more global, they must find ways to ensure that their workforce reflects the diversity of the communities they serve. Creating a culture of inclusivity and equity is essential for businesses to attract and retain top talent. At Rome Business School we’ve created the Fair Employability Program, in order to train our students into #BetterManagers4aBetterWorld with modules dedicated to diversity, inclusion, ethics and social responsibility.
The future of work is rapidly evolving, and businesses must be prepared to navigate the changing landscape. By investing in remote work tools, data analytics capabilities, upskilling and reskilling programs, sustainability initiatives, and diversity and inclusion efforts, businesses can stay ahead of the curve and thrive in a rapidly changing world.
| 2023-03-20T00:00:00 |
2023/03/20
|
https://romebusinessschool.com/blog/macrotrends-shaping-future-work/
|
[
{
"date": "2023/03/20",
"position": 84,
"query": "AI labor market trends"
}
] |
Does your company need a policy for AI like ChatGPT?
|
The Corporate Governance Institute
|
https://www.thecorporategovernanceinstitute.com
|
[
"Dan Byrne"
] |
The policy should address how the company will ensure that AI systems do not discriminate against individuals or groups based on protected characteristics such ...
|
Does your company have a policy for ChatGPT use? It probably wasn’t on any firm’s to-do list a year ago. How fast things can change in management and governance.
‘Chat’ as some call it is generative AI that has become a household name. Supporters love it for its cutting-edge ability to generate human-like content and save time on certain menial tasks. Critics are nervous, however. They don’t like the pace of change it brings, nor the potential impact it could have on day-to-day business. Whatever your company thinks about it, one thing is clear: we are rapidly approaching the stage where boards and management need a policy for ChatGPT, and other forms of generative AI. By creating a comprehensive policy around the use of AI in the workplace, a board of directors can help to ensure that the technology is used effectively and ethically and that employees are trained on how to use it safely and responsibly. Read more: What is generative AI?
Stay compliant, stay competitive Build a better future with the Diploma in Corporate Governance. Download brochure View course Stay compliant, stay competitive Build a better future with the Diploma in Corporate Governance. Download brochure View course ×
Quick reminder: what is ChatGPT?
ChatGPT is generative AI that has surged in popularity this year. Its strengths lie in its ability to generate content in response to human questions. Generally, this content is factual, relevant, and delivered in a way that makes it look human. You can read more about it here.
Why has ChatGPT become so popular?
Its uniqueness. Its groundbreaking ability to speak to humans naturally and conversationally sets it apart from competitors.
Its usefulness. ChatGPT has been used to write articles, marketing content, essays (something schools and colleges are rapidly trying to gain control over), and even computer code.
Its newsworthiness. The above points have earned the system front-page publicity worldwide, ensuring more would read about it and try it out.
Is ChatGPT already a big player in business?
It appears so. A new report from Korn Ferry estimates that nearly half of surveyed businesses already use ChatGPT to complete work tasks. 80% said the system was “legitimate and beneficial.” In other words, it has only been in the public realm for months, and ChatGPT has already found a home in a huge proportion of offices. “They’re figuring out ways to make generative AI work for them,” said Esther Colwill, president of Korn Ferry’s Global Technology, Communications, and Professional Services practice.
How are businesses using ChatGPT?
So much. It’s part of the system’s appeal. Examples include: Writing templates for online content.
Customer service correspondence.
Writing code.
Writing sales pitches.
Summarising long reports.
Analyse business trends. Supporters of the system maintain that it doesn’t signal a replacement of traditional workers, but it does give traditional workers a time-saving tool, the likes of which they have never seen before. In other words, it’s opening new doors.
Sounds great, so why the urgent need for a policy for ChatGPT?
Because as good as ChatGPT looks on first viewing, the system also has its share of limitations that could cause problems if left unchecked. Much of these limitations stem from the information bank available to it. That bank does not keep up with any news cycle. The most recent information could be months, if not years, old. This means any ChatCPT-produced content could ignore the most recent relevant events.
The information bank can include biased sources. ChatCPT could misinterpret these as hard facts and present them as such.
The bank may contain sensitive data, which ChatGPT could deem fair game for widespread publishing. If organisations use ChatGPT for published content, they become liable. In addition, the system (like any other tech) can make simple errors that might be challenging to spot. All of this adds up to scenario where something useful could become something harmful if the right controls aren’t put in place.
A ChatGPT policy is a good idea
Limitations are just one factor in why companies should create a policy for chat GPT. The other is the pace of its popularity. Many will undoubtedly feel they need help to keep up and make sound decisions about its use. Policies help correct this imbalance. They allow corporate leaders to decide what ChatGPT is helpful for, and when it should be avoided. They also ensure that a business isn’t shying away from what is undoubtedly a significant new player in the business world but isn’t going in blind either. Like any new corporate movement, strategy is crucial. Make sure you form yours soon. Read more: How to start using ChatGPT
What should a ChatGPT or AI usage policy contain?
| 2023-03-20T00:00:00 |
https://www.thecorporategovernanceinstitute.com/insights/news-analysis/policy-for-chatgpt-may-be-crucial-as-ai-surges-in-popularity/?srsltid=AfmBOoq_3YQMXYryaXVK_MSklGjeBtR_uZQX3hDqzEKF72zAmKTr61Tj
|
[
{
"date": "2023/03/20",
"position": 35,
"query": "government AI workforce policy"
}
] |
|
Innovation 2023 five minutes with… Sana Khareghani, ...
|
Five minutes with Sana Khareghani
|
https://www.globalgovernmentforum.com
|
[
"Mia Hunt"
] |
Getting the use of AI technologies right across the above three dimensions – building blocks, adoption, governance – can help us solve some of the most pressing ...
|
In this sister series to our ‘Five minutes with’ interviews, we share insights from the civil and public service leaders that will be speaking at our free Innovation conference. Taking place in London on 21 March 2023, and available to stream on demand, during the event officials from all over the world will promote and develop new approaches to policymaking and service delivery.
In this interview, Sana Khareghani, former head of the UK Office for AI – who will join the conference session on innovation in artificial intelligence – tells GGF about the importance of sharing AI knowledge between countries to improve public services, taking inspiration from Canada and Estonia, and the values she most admires in people.
Click here to register for Innovation 2023
What are you most interested in discussing at Innovation 2023?
I am most interested to hear about the advancement of other countries in using AI technologies for public services, specifically the building blocks. For example, how are they ensuring access to the right infrastructure from data to compute; encouraging adoption across sectors; and getting the governance right.
Getting the use of AI technologies right across the above three dimensions – building blocks, adoption, governance – can help us solve some of the most pressing challenges of our time, and sharing the learnings with each other means that we can achieve that goal even more quickly.
What is the best piece of advice you’ve been given in your working life?
Listen to people – really hear what they have to say and understand where they are coming from, that will help you address any issues and ensure you meet expectations.
What advice would you give someone starting out in the civil service?
Find long-term civil servants who are exceptional at their job (of which there are many!) and watch them. What do they do? How do they do it? And try to understand why they do it. The civil service is an interesting beast and you have to learn how to get things done, otherwise you run to stand still.
What do you like most about working in the civil service?
The civil service allows you to find solutions to large, complex problems that affect the population. Working in the civil service gives real meaning to the work that you do – you can see its effects in people’s day to day lives.
And what do you dislike about it?
There are many layers in the civil service and there is a formula to getting things done. Sometimes this is more complex than it needs to be for a solution that everyone agrees on and at these times, it can feel frustrating. I guarantee though, there is no better feeling than having the secretary of state or the prime minister endorse and embrace a new policy announcement from your area of work.
Which country are you most inspired by and why?
Estonia has been an inspiration for me with the work they have done with their digital government, the public services they provide to their citizens and the way they have continued to innovate as new technologies emerge. They are a leading light when it comes to thinking around the complexities of using AI technologies and their approach recognises that AI technologies are not magic dust, they are but an ingredient in the solution set.
Can you name one lesson or idea from abroad that has helped you and your colleagues?
The Canadian Institute for Advanced Research (CIFAR) network. How it has been set up and the terms within which professors are allowed to work is revolutionary. Bringing researchers and industry together, allowing professors the flexibility to work in new areas, create and take a stake in commercial products and companies while still allowing them to remain renowned academics should be a lesson for all. Yes, this can be done in other places, but not with the ease with which it is done in Canada.
A cheeky second, is the Canadian expert visa – which includes a working visa for your spouse/partner which is done within just one week – is also something of a marvel!
What attributes do you most value in people?
Drive, ambition, thirst for knowledge and problem solving. I like people who look at a problem, clap their hands and say, ‘Excellent, let’s get stuck in and find the solution!’
If you weren’t a civil servant, what would you be?
I have been a civil servant but I came to it from industry and have now returned to industry. I have been in tech and business all of my career across many different sectors including the civil service. I enjoy the thrill of learning about a new sector whilst relying on my understanding of the fundamentals around technology and business.
More from this series:
Laura Gilbert, 10 Downing Street chief analyst
Ann Dunkin, US Department of Energy CIO
Peter Pogačar, director general of Slovenia’s Ministry of Public Administration
Megan Lee Devlin, UK Central Digital and Data Office chief executive
Martin Ledolter, Austria’s Federal Procurement Agency managing director
Christine Bellamy, director of publishing, GOV.UK, Government Digital Service
Want to write for GGF? We are always looking to hear from public and civil servants on the latest developments in their organisation – please get in touch below or email [email protected]
| 2023-03-20T00:00:00 |
https://www.globalgovernmentforum.com/innovation-2023-five-minutes-with-sana-khareghani-former-head-of-the-uk-office-for-ai/
|
[
{
"date": "2023/03/20",
"position": 76,
"query": "government AI workforce policy"
}
] |
|
The Impact of AI on Connected Safety Solutions
|
The Impact of AI on Connected Safety Solutions
|
https://becklar.com
|
[
"Jon Robertson"
] |
Explore how machine learning and AI technology can dramatically improve ... intelligence to improve personal and workplace health and safety in your industry or ...
|
Staying connected is critical in today’s society. It has never been easier to manage our safety, that of our loved ones, our property or our employees through the development of connected devices and technology. This paper will explore the aggressive evolution and implementation of Internet of Things (IoT) and connected devices, the practical application of Artificial Intelligence (AI) to improve human interactions and significantly improve emergency response times, and how together they can have a profound impact on workplace safety, personal safety, and overall health and wellness.
| 2023-01-20T00:00:00 |
2023/01/20
|
https://becklar.com/artificial-intelligence-connected-safety/
|
[
{
"date": "2023/03/20",
"position": 78,
"query": "machine learning workforce"
}
] |
Retail Applications of Workforce Management Technology
|
Retail Applications of Workforce Management Technology
|
https://www.itsallgoodsinc.com
|
[
"Kerrie Gill",
"Kerrie Creates Web Content In A Number Of Venues. He Specializes In Researching Business",
"Technology Affairs",
"Putting Pen To Paper."
] |
Workforce management software utilizes algorithms and machine learning to correlate these elements. The following are some key features and benefits of using ...
|
M
ost c-store operators and managers do not have access to large HR departments. Therefore, you have to manage your employees on your own. Unfortunately, this can be daunting, especially since you are already faced with daily multitasking on a grand scale.
Time tracking, regulatory compliance, and employee scheduling can all be extremely time-consuming when done by hand. Workforce management (WFM) technology can streamline the process and help you focus on employee retention.
Here, we show you what workforce management is and why it is essential. Then, we will discuss its key elements and how technology can make your workload more manageable. After that, you will get an idea of which workforce management software tools work best for c-store operators.
What Is Workforce Management?
Simply stated, workforce management is the science of optimizing human resources. However, it goes well beyond matching the employee to the task. Instead, it encompasses a wide array of management disciplines to achieve maximum efficiency from every team member.
The research on this is clear:
When managers communicate a clear direction for employees to explore perspectives within the directive expectation, the manager is able to solidify employees that follow based on the need of the directive rather than the authority of the assignment (Krone, Kramer & Sias, 2010). Providing leadership to raise performance is an important factor when managing staff expectations. Although employees may have significant knowledge and expertise in the area of the profession, it is important that the workplace expectation stay linear to the company's primary mission, goals, and objectives. Managers that facilitate directives are dependent on the understanding and expectation of the regular staff to constantly develop efficient processes. This allows the adaptation of the needs of the job to meet goals while maintaining the workplace principles and ethics of the organization. (Williams, Pg. 11).
The overarching goal of workforce management is to mitigate risk and prioritize human capital. It also seeks to create a more productive environment in your store. All tasks are prioritized in order of importance and are checked off at the end of the day.
Benefits of Proper Workforce Management
Optimal labor forecasting
Efficient scheduling
Helps you stay within budget
Safe work environment
High employee retention rates
Consequences of Poor Workforce Management
Inefficient labor force
Increased accident rates
Regulatory non-compliance/lawsuits
Worker shortages
Soaring costs
What Are the Key Elements of Workforce Management?
Workforce management utilizes activities that focus on employee production, scheduling, and retention. The following are the most common elements in that process:
Hiring
Employee scheduling
Time tracking and attendance
Labor forecasting
Government compliance
Data collection
Training
Problems arise when any of these functions break down. Therefore, you need a system in place that can proactively mitigate the risks of having a human workforce.
How Does Technology Help?
Workforce management software utilizes algorithms and machine learning to correlate these elements. The following are some key features and benefits of using this technology for your c-store.
Scheduling
Employee scheduling is much easier by utilizing workforce management tools. It also makes the task of trading shifts between team members a lot simpler. You can even select certain employees for different jobs based on skills, experience, and other criteria.
Time Card Features
You can set up virtual time cards and apply certain rules specific to your c-store. Automatically track days off, sick days, and vacation time with your password-protected user dashboard.
Regulatory Compliance
Broken HR software allows you to keep ahead of government regulations and requirements. In addition, most software packages include regular updates.
Overtime Tracking
An overtime tracking function automatically alerts you if an employee gets close to their hourly limit. This valuable feature helps save you money long-term by limiting unauthorized overtime hours.
Labor Forecasting
By utilizing workforce management tools, staffing shortages become a thing of the past. Customized computer algorithms can tell you in advance whether holidays And special events may become a factor.
Analytics
Generating reports and graphs by hand is the old way of doing things. Now, you can do real-time data on demand and download this vital data instantly.
Mobile Management
Sometimes you are away from the store. However, that is not a problem since most workforce management packages include a mobile capability.
How To Choose the Right WFM Solutions for Your C-store
Here are some things to look for in a workforce management software package. Keep in mind that there can be several differences between each option. However, in most situations, you should look for these features when choosing the best one for your c-store.
At a minimum, your WFM software package should include:
Recruitment management
Onboarding and hiring management
Time management
Payroll and tax management
Performance and engagement management
Basic HR management
In addition, it is best to have these attributes as well:
Full integration across multiple channels and devices
Complete automation
Flexibility
Ease-of-use
Regular updates from the developer
Excellent customer service
Final Thoughts
Workforce management plays a vital role in the way a c-store operates. Your objective is to utilize the technology available to your advantage. Hopefully, we have given you a basic starting point to do that here.
References:
Williams, Michelle J., "Labor Management Principles in the Communication Discipline: Developing a Communication Plan Based on an Organization Analysis" (2017). All Capstone Projects. 340. http://opus.govst.edu/capstones/340
Merrimack College: How Retailers Use Data Science To Predict Your Purchases
MITSloan Management Systems: Workforce Ecosystems
G2: "Best Workforce Management Software for 2022"
| 2023-03-20T00:00:00 |
https://www.itsallgoodsinc.com/insights/retail-applications-of-workforce-management-technology
|
[
{
"date": "2023/03/20",
"position": 79,
"query": "machine learning workforce"
}
] |
|
Why White-Collar Unions Are on the Rise
|
Why White-Collar Unions Are on the Rise
|
https://www.reworked.co
|
[
"Michelle Hawley",
"About The Author"
] |
A trend has emerged in the American workforce: white-collar workers, including those in knowledge-based industries, banding together to form unions.
|
"Union.” What comes to mind when you hear the word?
I think of precarious blue-collar workers, those in the manufacturing and industrial sectors with dangerous and time-sensitive jobs. I think miners, steel workers, underwater welders, truck drivers.
But in recent years, a trend has emerged in the American workforce: white-collar workers, including those in knowledge-based industries, banding together to form unions.
What is driving this change? Why are white-collar workers turning to collective action, and what does the future hold for white-collar unions?
White-Collar Workers Spark New Labor Movement
White-collar unions are not new. Teachers have unions. Hollywood scriptwriters have unions. But organized labor is still not a common sight in white-collar workplaces.
Still, in the past few years, we've seen an upward trend in white-collar unionization efforts.
“Across industries, support for unions is higher than it’s been in 50 years, reflecting growing recognition of the power imbalance between workers and employers,” said Shelly Steward, director of the Future of Work Initiative at the Aspen Institute.
A Success for Organized Labor
At Alphabet, Google's parent company, employees have attempted unionization for years. Four workers, coined the 'Thanksgiving Four,' were fired in 2019 for “speaking out” during a demonstration at the company’s San Francisco location.
In 2021, around 230 Google employees officially announced they were forming a union with the Communications Workers of America (CWA), called the Alphabet Workers Union (AWU).
Organizers claim the union is a response to workplace harassment and unethical business choices, including Alphabet's bid on a Department of Defense project that would have workers develop artificial intelligence (AI) intended for war.
In 2023, two years after the AWU’s official launch, it has more than 1,300 members.
Organizing Efforts Face Backlash
In mid-February, white-collar workers in the autopilot division at a Tesla factory in Buffalo, New York, announced a campaign to form the company’s first union — clashing with anti-union CEO and co-founder Elon Musk.
In 2019 (and upheld again in 2021), the National Labor Relations Board (NLRB) found that Tesla illegally fired a worker involved in labor organization. Musk was also ordered to delete an anti-union tweet (shown below) considered threatening to labor organizers at the company.
One day after the launch of unionization efforts in Buffalo, Tesla fired several employees. Remaining staff received an email on a new policy prohibiting the recording of workplace meetings without all participants’ consent.
Tesla organizers call themselves “Tesla Workers United,” and they’re working alongside Workers United, affiliated with the Service Employees International Union. Unionization efforts, however, have so far been unsuccessful.
Related Article: Companies Are Not Families
Tech Workers Take Action
In the past, tech companies, including Amazon and Apple, have seen unionization efforts from their blue-collar workers, like warehouse employees. Today, white-collar workers in the industry believe unionization should extend beyond blue-collar jobs, and it's easy to see why.
The tech sector is notorious for long hours and competitive atmospheres. Tech workers might have a good paycheck, but they tend to lack work-life balance. And when they are at work, they don’t necessarily feel a sense of belonging — that someone there cares about their wellbeing and has their back.
In today’s digital age, where many organizations have made the switch to remote or hybrid work, tech companies have held tight to the physical workplace, giving employees no flexibility in when and where they work — something people desperately crave.
In fact, a 2022 survey of around 11,000 global knowledge workers found that 95% of them want flexible work hours, and 78% want flexibility in their working location.
“Tech companies are some of the most powerful in today’s economy," said Steward, "and unions provide workers a way to share that power and shape their conditions.”
The Non-Tech White-Collar Worker
Tessa West, professor of psychology at New York University and author of "Jerks at Work: Toxic Coworkers and What to Do About Them," said one place where she has seen the formation and effectiveness of unions among a historically white-collar group is where she works — with graduate students at NYU
“The desire for fair wages is now an argument made in all sectors — not just blue-collar ones,” she said. “Massive pay disparities in white-collar jobs, an awareness of how historical discrimination has given rise to these disparities, coupled with increasing transparency in that discrepancy, probably plays a role.”
Grad student unionization hasn't been localized at NYU, either. Similar efforts have happened at campuses across the country, including the University of California (all 10 campuses), Duke University, the University of Texas, Yale University, Boston University, Johns Hopkins University, the University of Pennsylvania and many more.
Related Article: Don't Deal With Employee Dissent This Way
Behind the Rise of the White-Collar Union
We've all had a bad job or two. Think back to that job. What made you unhappy there?
Pay might come to mind. But maybe the work wasn't challenging, or you didn't feel valued by your boss, or there was never an opportunity for career development. There are many issues beyond wages that workers are concerned about, said Steward.
"While white-collar workers may be relatively well-paid, they may have a wide range of other concerns about the workplace environment or about how the business operates," she said.
Employee Experience in Labor Organizing Efforts
Employee experience is an expression you hear often today — that idea of organizations looking at ways to maximize employee happiness, satisfaction, value, purpose, etc.
In a survey of 500 HR leaders, 92% cited employee experience as a top priority.
That same survey also found that organizations with high employee experience see twice the customer satisfaction, double the innovation and 25% higher profits than companies with an "inadequate" employee experience.
Yet, the rise of labor unions among white-collar workers seems to point to a disconnect between workplace efforts and employee perceptions.
“‘Employee experience’ is a concept that means a lot of different things to a lot of different people,” said West.
It might mean a four-day workweek or a boss who doesn’t micromanage. But these things are fleeting and subject to change, she explained. And what people want to see is substantial change that is codified. “And ‘employee experience’ initiatives are often not that.”
Frustrations Run High for Union Members
West said she has seen firsthand the thoughts and emotions that go on behind the scenes for employees deciding to unionize.
“There's a lot of frustration, and the feeling of promises not being delivered. And forming a union and aligning with union rules can create some awkward dynamics at work,” she said.
When NYU’s graduate students formed a union, she said, they went on strike and could not teach. And they had to tell someone who held power over them that they couldn’t perform certain duties due to the strike.
“I'm saying this to highlight that when things get heated, there's more than just an ‘us vs. them’ mentality in the air,” explained West. “There are power dynamics, interpersonal dynamics at play. And these things can be tricky to navigate. Union rules can create fissures in the interpersonal dynamics at play in the workplace, and people often feel ill-prepared to navigate them.”
Collective Bargaining Without Repercussion
Business leaders need to listen to their workforce, said Steward.
But if employees don’t feel they can speak without repercussion, these attempts to listen won’t succeed. “And then business leaders don’t know what they need to know about employee experience.”
Unions provide balance to that experience. They give workers a voice in decision-making.
“And this rebalancing of power also creates an environment for workers to express their concerns and ideas with some trust that they will be addressed by reducing the likelihood of negative repercussions and building a process for addressing issues raised,” Steward explained.
Related Article: How to Build a Modern, Holistic Employee Listening Strategy
Lack of Identification in White-Collar Jobs
One thing that’s shifted dramatically, said West, is how identified people are with their chosen careers and with the place where they work.
“People used to be highly identified with their organizations, and they were ‘embedded’ at work — their home lives and work lives were a seamless web, and it was hard to pull the two apart.”
People worked in a town where their kids went to school, she explained. They loved their communities, they were friends with co-workers.
“Eroding these things means it's psychologically easier to hop from job to job, which we see a ton of these days. I think the work-from-home/hybrid shift contributed a lot to that.”
Overall, Steward summed it up nicely: “The pandemic catalyzed widespread reflection on what we want from our jobs, and unions provide an avenue to reach those goals.”
The Company Benefit of the White-Collar Labor Movement
Companies typically react negatively to the idea of workers trying to unionize and build their power to make change in a company, said Steward.
“Many things companies believe about unions come from anecdotes of particularly bad experiences in the past rather than an understanding of how unions actually work."
But business leaders should rethink that stance for three reasons.
“First,” Steward explained, “workers organizing to make change rather than simply quitting and walking away is a sign that they care enough about the organization to take personal risks to improve it — that is something business leaders should see as a good thing.”
Second, she added, the process can get problems out in the open, meaning leaders can solve them rather than letting them sit unacknowledged and unaddressed.
“And third, it’s an opportunity for business leaders to demonstrate respect for the people who power their company and build employee engagement. Businesses recognize the value of an engaged workforce and really they should see an organizing campaign as an expression of employee engagement and respond more constructively.”
When unionization is done right, she said, it can improve productivity, employee engagement, employee experience and business performance. It can also boost worker retention and support a healthy labor market.
Related Article: Organizational Gaslighting
White-Collar Workers Unite for a Brighter Future
In five years, when you think of organized labor, blue-collar workers might not be the first thing to come to mind. Instead, you might think of data analysts in tech factories, coders for digital magazines and graduate students at your local university.
Ultimately, the onus is on organizations to accept that change is necessary and listen to what their employees truly want. And they shouldn't oppose that process, said Steward. "Because it is the only way that they can demonstrate respect for their workers and that they authentically want to build a better employee experience."
| 2023-03-20T00:00:00 |
https://www.reworked.co/employee-experience/behind-the-rise-of-white-collar-unions/
|
[
{
"date": "2023/03/20",
"position": 14,
"query": "AI labor union"
}
] |
|
Federal Employee Union Membership is Up 20%
|
Federal Employee Union Membership is Up 20%
|
https://www.govexec.com
|
[
"Erich Wagner"
] |
Since implementation of a number of pro-labor policies a year ago, federal employee unions gained 80000 new dues-paying members.
|
Vice President Kamala Harris’ office last week touted early progress in the Biden administration’s effort to strengthen the federal workforce by improving worker empowerment, boasting a sizeable increase in the number of federal employees who are dues-paying union members.
In February 2022, the White House Task Force on Worker Organizing and Empowerment, chaired by Harris and then-Labor Secretary Marty Walsh, issued its first report, which contained more than 70 recommendations for federal agencies to make it easier for employees in both the federal and private sectors to organize or join a union.
The task force asked federal agencies to foster collaborative relationships with their union partners, involve labor organizations in predecisional policy discussions, and remove barriers from unions trying to increase their membership or organize new bargaining units. The group recommended that the Office of Personnel Management instruct agencies to provide information on whether job openings are represented by unions and encourage agencies to provide unions more opportunities to communicate with new hires.
In a blog post last week, the vice president’s office announced that just a year after agencies began implementing the task force’s recommendations, the initiative is already paying dividends: over the last year, nearly 80,000 federal employees have joined a union, increasing the total number of dues paying union members at federal agencies by 20%. And in the private sector, petitions for union representation increased 53% from fiscal 2021 to fiscal 2022, while overall union membership grew by 273,000 last year.
The task force attributed the boost in union membership to both OPM’s efforts to encourage agencies to improve labor-management relations, including developing an internal agency survey on the issue, and agencies’ efforts to make it easier for union organizers to access federal property, either to contact federal employees or contractors.
“Four task force agencies—the departments of Defense and Interior, the General Services Administration and the Office of Personnel Management—committed to securely and safely make it easier for union representatives to reach potential or current union members to discuss their rights,” the White House wrote. “A fifth agency—the Department of Homeland Security—has since taken action to facilitate access at airports through the work of the Transportation Security Administration. These actions will make it possible for more workers to talk with and hear from union organizers at their workplaces—a significant step in addressing the imbalance of information that exists under current law.”
In the private sector, the Biden administration touted agencies’ work to promote the use of organized labor in federal grants and contracts, including requirements or preferences that encourage applicants for federal spending to have project labor agreements or registered apprenticeships, particularly on projects related to the bipartisan infrastructure law, the Inflation Reduction Act and the CHIPS and Science Act. The blog post specifically cited the Commerce Department’s inclusion of “strong labor standards language” in its grants to expand access to broadband internet, as well as the Energy and Transportation departments’ encouragement of project labor agreements for contractors and grantees.
Additionally, the Labor Department, National Labor Relations Board, Federal Labor Relations Authority and Federal Mediation and Conciliation Service have partnered on a number of educational initiatives.
First, the Worker Organizing Resource and Knowledge Center serves as an online portal with information for employees, companies and agencies to learn about labor issues and how to establish collaborative tools like labor-management partnerships. And the four agencies have collaborated on an initiative to help labor groups and employers negotiate their first union contract.
“The NLRB, FMCS and the FLRA are collaborating on efforts to help parties reach an initial collective bargaining agreement when workers first organize,” the White House wrote. “The agencies have improved the flow of information between their agencies about newly-organized units; expanded outreach to the parties encouraging the use of their agencies’ training and mediation services; and updated and expanded training of agency mediators.”
| 2023-03-20T00:00:00 |
2023/03/20
|
https://www.govexec.com/workforce/2023/03/federal-employee-union-membership-20/384205/
|
[
{
"date": "2023/03/20",
"position": 51,
"query": "AI labor union"
}
] |
The Rise Of Artificial Intelligence In Healthcare
|
The Rise Of Artificial Intelligence In Healthcare
|
https://elearningindustry.com
|
[
"Sofster Undefined",
"Kapil Rawal",
"Maitry Pandya",
"Ken Turner Lion",
"Anubha Goel"
] |
AI can help optimize healthcare operations by improving patient flow, reducing wait times, and optimizing staff schedules. For example, AI can analyze patient ...
|
Artificial Intelligence And Healthcare
Artificial Intelligence (AI) adoption in different sectors is getting quite prominent and healthcare is certainly one of them. In fact, the scope of AI in healthcare is vast and continues to grow rapidly. The use of AI has the potential to help healthcare providers in helping manage patients' data and administrative work seamlessly. Most of the technologies related to AI in healthcare have powerful applications, but the strategies they support can be different depending upon the healthcare provider and organization.
Some experts believe that AI in healthcare can perform as well as humans. Some are still skeptical about a major shift in healthcare towards Artificial Intelligence and its development. Having said that, Artificial Intelligence use has significantly grown over the years, as it has the potential to improve the quality of healthcare, reduce costs, and save lives. However, it is important to ensure that the use of AI in healthcare is ethical and transparent and that the patient’s privacy is protected. So let us look into the swift rise of Artificial Intelligence in healthcare, its scope, and more.
Challenges Faced In Healthcare And Wellness That AI Can Overcome
Artificial Intelligence has the potential to help address many of the challenges faced by the healthcare industry. However, it is important to make sure that AI development and implementation are ethical and responsible, with consideration given to issues like data privacy, algorithm bias, and the potential to impact healthcare professionals and patients. Additionally, it is also important to ensure that AI does not replace the element of human healthcare and that healthcare professionals are trained to work collaboratively with AI tools to deliver the best possible care to their patients.
1. Diagnosis And Treatment
AI can help improve the accuracy and speed of diagnosis by analyzing patient data such as medical history, test results, and imaging scans. This can help healthcare professionals identify healthcare conditions that may have been missed or misdiagnosed, leading to more effective and timely treatment. AI can also assist in treatment selection by analyzing patient data and determining the most appropriate treatment options based on factors such as disease stage, genetic makeup, and comorbidities.
2. Access To Healthcare
Chatbots powered by AI can help expand access to healthcare services, particularly in rural or underdeveloped areas, where there may be a shortage of healthcare professionals or medical facilities. These tools can enable patients to receive medical consultations, monitor chronic conditions, and access mental health services remotely, without the need for a physical medicine facility.
3. Chronic Disease Management
AI can assist in the management of chronic diseases by analyzing patient data such as blood glucose levels, blood pressure readings, and medication adherence, to identify trends and patterns that may indicate a change in a patient’s condition. This information can be used to provide personalized treatment plans and inventions, such as medication adjustments or lifestyle changes, that can help patients better manage their condition and reduce the risk of complications.
4. Workforce Shortages
AI can help alleviate workforce shortages by automating routine tasks such as scheduling appointments, processing medical records, and triaging patients. This can help healthcare professionals focus on more complex tasks such as diagnosis and treatments, and reduce the burden of administrative work to a great extent.
5. Medical Research
AI can help accelerate medical research by analyzing large sets of data from clinical trials and medical studies, identifying patterns and insights that may not be immediately apparent to human researchers. This can help researchers better understand disease mechanisms, develop new treatments, and improve patient outcomes.
6. Patient Engagement
AI-powered apps and wearables can help patients track their health metrics, monitor medication adherence, and receive real-time feedback and recommendations. This can empower patients to take a more active role in their healthcare, and help healthcare professionals to better monitor and manage their patients' conditions.
7. Preventive Healthcare
AI can help predict and prevent health problems before they occur, by analyzing patient data to identify individuals who may be at risk for a certain condition based on their medical history, lifestyle data, and other factors. This can enable healthcare professionals to intervene early, providing personalized interventions and preventive care that can help improve patient outcomes and reduce healthcare costs.
How Can Artificial Intelligence Transform The Healthcare Sector?
Artificial Intelligence has been increasingly adopted in healthcare in recent years, with the potential to revolutionize the way healthcare is delivered. AI can be applied to a wide range of healthcare tasks, including medical imaging analysis, drug development, patient monitoring, and personalized treatment plan.
One of the most promising applications of AI in healthcare is medical imaging analysis. AI algorithms can analyze medical images such as X-rays, CT scans, and MRI scans to identify abnormalities, such as tumors or other diseases. AI can also help to identify patterns in large datasets of medical images, which can lead to more accurate diagnoses and personalized treatment plans. Another area where AI is being increasingly used in drug development. AI can be used to analyze large amounts of data to identify potential drug candidates and predict their efficacy and safety. This can help to accelerate the drug development process and reduce the costs involved.
AI can also be used to monitor patients and predict their health outcomes. For example, AI algorithms can analyze data from wearable devices such as smartwatches to monitor patients' vital signs and detect early signs of health problems. This can help prevent health problems from developing or worsening, and reduce the need for hospitalization.
Scope Of Artificial Intelligence In Medical Science And Healthcare
The scope of AI in medical science and healthcare is vast, and it is expected to have a transformative impact on the industry in the years to come. The scope of healthcare is vast in different related fields and areas of healthcare delivery, from diagnosis and treatment to healthcare operations and telemedicine. As AI continues to advance, it is expected to play an increasingly important role in healthcare delivery and help improve patient outcomes and lower healthcare costs. Here are some of the key areas where AI is expected to have a significant impact:
1. Medical Imaging
AI-powered image analysis can help healthcare professionals detect and diagnose diseases such as cancer, heart diseases, and neurological disorders with higher accuracy and speed than traditional methods. For example, AI can analyze medical images to identify patterns and anomalies that may be difficult for the human eye to detect, enabling earlier and more accurate diagnoses.
2. Electronic Health Records (EHRs)
AI can help healthcare professionals make sense of the vast amounts of data contained within EHRs by identifying patterns and insights that may be missed by human analysis. For example, AI can analyze EHR data to identify patients who may be at risk of developing certain conditions based on their medical history and lifestyle factors, enabling proactive interventions.
3. Personalized Medicine
AI can help healthcare professionals create personalized treatment plans that are tailored to each patient based on their medical history, genetic information, lifestyle data, and other factors. For example, AI can help identify the most effective treatments for a particular patient based on their unique characteristics, reducing the risk of adverse reactions and improving patient outcomes.
4. Chronic Disease Management
AI can help patients manage chronic diseases such as diabetes, heart disease, and asthma, by providing real-time monitoring and personalized treatment plans based on patient data. For example, AI can monitor a patient’s blood glucose levels and provide personalized recommendations for medication adjustments and lifestyle changes based on their individual needs.
5. Drug Development
AI can help speed up the drug development process by identifying potential drug targets, screening drug candidates, and predicting drug efficacy and safety. For example, AI can help identify new drug targets by analyzing large datasets of genetic and medical data and predict the safety and efficacy of new drugs based on clinical trial data.
6. Healthcare Operations
AI can help optimize healthcare operations by improving patient flow, reducing wait times, and optimizing staff schedules. For example, AI can analyze patient data to predict demand for certain services, enabling healthcare facilities to allocate resources more efficiently and reduce wait times.
7. Telemedicine
AI-powered telemedicine tools can help expand access to healthcare services, particularly in rural or underserved areas. For example, AI-powered chatbots can provide basic medical consultations and triage patients based on their symptoms, enabling patients to receive care remotely without the need to travel to a physical medicine facility.
Conclusion
The rise of AI in healthcare has the potential to revolutionize the way healthcare is delivered and improve patient outcomes. AI is currently being used in the healthcare sector for purposes such as medical imaging analysis, electronic health records, personalized treatment plans, drug development, and virtual assistance. Notably, the future of AI in healthcare looks promising. AI has the potential to improve the speed and accuracy of diagnosis, personalize treatment plans, and improve patient outcomes.
As AI continues to improve and become more sophisticated, it may be able to predict health outcomes more accurately and develop more personalized treatment plans for patients. However, there are also concerns about data privacy, security, and ethics that need to be addressed as AI becomes more widespread in healthcare. Overall, the rise in AI in healthcare presents exciting possibilities for improving the quality and accessibility of healthcare for patients.
| 2023-03-20T00:00:00 |
2023/03/20
|
https://elearningindustry.com/the-rise-of-artificial-intelligence-in-healthcare
|
[
{
"date": "2023/03/20",
"position": 8,
"query": "AI healthcare"
}
] |
Artificial intelligence: magic at your fingertips
|
Leverage AI with delaware for better business
|
https://www.delaware.pro
|
[
"Alan Turing"
] |
Artificial intelligence (AI) was first coined by American computer scientist John McCarthy in 1956. Today, it is an umbrella term that encompasses a wide range ...
|
Artificial intelligence: magic at your fingertips
Artificial intelligence (AI) was first coined by American computer scientist John McCarthy in 1956. Today, it is an umbrella term that encompasses a wide range of topics, from machine learning to robotics.
Artificial Intelligence (AI) has been around for more than 60 years, so the concept itself is far from revolutionary. However, what has changed is the ability to execute. With the introduction of graphics processing units (GPUs) and cloud computing – which makes rapid processing of vast amounts of data accessible and affordable – all the pieces are now in place to develop AI that really works and creates tangible value.
At delaware, we are firm believers in the power of AI and the enormous opportunities it will bring. However, we are also convinced that the path towards AI will be an evolution, not a revolution. The systems that exist today will evolve gradually, and their ability to support both people and machines will improve step by step.
| 2023-03-20T00:00:00 |
https://www.delaware.pro/en-in/solutions/artificial-intelligence
|
[
{
"date": "2023/03/20",
"position": 85,
"query": "artificial intelligence business leaders"
}
] |
|
The Benefits of AI: How it Can Help Your Business
|
The 5 Key Benefits of AI That are Revolutionizing Work Culture
|
https://emeritus.org
|
[
"Siddhesh Shinde",
"About The Author",
"Read More About The Author",
"Srijanee Chakraborty",
"Aswin Bhagyanath"
] |
Artificial Intelligence (AI) is a rapidly evolving field with the potential to transform how we live and work. The benefits of AI are certainly profound, ...
|
Artificial Intelligence (AI) is a rapidly evolving field with the potential to transform how we live and work. The benefits of AI are certainly profound, and they are already impacting several sectors like healthcare and transportation, as well as finance and manufacturing. A recent PwC report states that AI could contribute a whopping $15.7 trillion to the global economy by 2030, and nearly 72% of business leads in the U.S. believe that AI will be the advantage of the future. Now’s a good time then to examine the current state of AI and how it could shape our lives in the coming days.
What is Artificial Intelligence?
AI is the term used to describe the creation of computer systems capable of carrying out tasks that normally require human intelligence. These systems examine massive volumes of data, find patterns, and make predictions or choices using algorithms, statistical models, and machine learning approaches. Applications of AI technology include robotics, self-driving cars, picture identification, and natural language processing. AI aims to develop machines that can reason, learn, and understand in ways that are similar to humans in order to handle complicated problems more effectively than conventional computing techniques. These are just a few benefits of AI. Let’s take a closer look at it some more.
ALSO READ: What’s the Fate of AI in 2023? 8 Pathbreaking Technology Trends
What are the Benefits of AI?
Reduction in Human Error
One of the major benefits of AI is that it can significantly reduce human error by automating tasks that require high accuracy and consistency, such as data analysis, quality control, and decision-making. AI algorithms process massive amounts of data, detect patterns, and make accurate predictions without being influenced by emotions, biases, or fatigue. By automating these tasks, AI frees up human workers to focus on more complex and creative tasks that require human skill and judgment. This not only reduces the possibility of errors but also boosts efficiency and productivity. Furthermore, AI systems can continuously learn and adapt based on new data, improving their performance over time and decreasing the likelihood of errors even further.
Takes Risks Instead of Humans
In certain situations where human safety is at stake, AI can take on the risks instead of people. AI-powered drones, for example, can perform tasks such as search and rescue, monitoring hazardous environments, and exploring unknown territories without endangering human lives. Furthermore, AI can analyze data from various sources and identify potential risks or threats that human operators may not see, allowing for timely and effective responses. However, it is critical to ensure that AI systems are designed and programmed with safety and ethics in mind and that human oversight and intervention are in place to avoid unintended consequences or malfunctions.
Available 24×7
AI systems can operate continuously without breaks, holidays, or sleep, making them available 24 hours a day, seven days a week. This means that AI-powered tasks and services can be performed around the clock, increasing accessibility, efficiency, and customer satisfaction. Chatbots powered by AI, for example, can provide instant customer support and assistance without requiring the constant availability of human agents. Similarly, AI-powered monitoring and alert systems can continuously analyze data and detect anomalies or issues that need to be addressed, allowing for quick responses and reduced downtime.
Helps with Repetitive Jobs
AI can help make repetitive tasks easier by automating tedious tasks that require little or no creativity. In manufacturing plants, for example, AI-powered robots can assemble and package products, reducing the need for human workers to perform these tasks. Similarly, AI can automate data entry, invoice processing, and other administrative tasks, freeing up employees to focus on higher-level tasks that require creativity and problem-solving abilities. This improves efficiency and productivity while also lowering the risk of human error and fatigue.
Faster Results
AI can produce faster results by processing massive amounts of data and performing complex tasks at a rate that humans cannot match. AI algorithms, for example, can analyze millions of data points in real-time to identify patterns, trends, and insights that humans would find difficult or time-consuming to detect. Similarly, AI can outperform humans in calculations, simulations, and predictions, allowing for faster decision-making and problem-solving. This is especially useful in industries where timely and accurate information, such as finance, healthcare, and logistics, is critical to success.
Learning AI with Emeritus
AI is a rapidly evolving field that is transforming various industries and improving our lives in numerous ways. The benefits of AI are certainly there for all to see. However, we must consider the ethical and societal implications of AI as well so that the development of AI is a responsible and transparent process. To learn more about AI and machine learning, check out these online machine learning courses and artificial intelligence courses offered by Emeritus in tie-ups with the best universities around the world and advance your knowledge in the field.
Write to us at [email protected]
| 2023-03-20T00:00:00 |
2023/03/20
|
https://emeritus.org/blog/ai-and-ml-benefits-of-ai/
|
[
{
"date": "2023/03/20",
"position": 97,
"query": "artificial intelligence business leaders"
}
] |
Inside 'McHire': How AI reduced McDonald's time-to-hire by ...
|
Inside 'McHire': How AI reduced McDonald's time-to-hire by 65%
|
https://www.hrdconnect.com
|
[
"Oana Iordachescu"
] |
Artificial intelligence (AI) is transforming the world of business at an unprecedented rate. This technology is revolutionizing the future of work, ...
|
Provided by
Categories Case Studies Digital Transformation
Artificial intelligence (AI) is transforming the world of business at an unprecedented rate. This technology is revolutionizing the future of work, from recruitment and employee engagement to learning and development. Indeed, despite the recent spike in popularity largely thanks to the public launch of ChatGPT in November 2022, many organizations and industries are already reshaping their workflows with the help of artificial intelligence.
With a deep history in organizational innovation, it’s no surprise McDonald’s is at the front of the queue. In 2019, it rolled out talent hiring platform from Paradox, re-branded as McHire, to company-owned restaurants, and offered the platform as an optional program to franchisees in the US and Canada. It has since adapted McHire for the UK and Ireland. As many organizations only just begin to consider how generative AI could reshape their full employee lifecycle, McDonald’s restaurants in the UK and Ireland are already reaping the rewards.
Following the introduction of McHire, 1,450 McDonald’s restaurants in the UK and Ireland have seen:
Time-to-hire reduced by nearly 65%
A 20% increase in the number of candidates completing the process compared to the previous system
85.9% of surveyed candidates rated their experience as four out of five or five out of five. (December 2022)
Three to five hours of time saved per recruiter per week
Richard Bainbridge, People Technology Lead at McDonald’s UK&I, is part of a team formed specifically for this type of digital transformation. He takes us through the McHire journey.
Was this article helpful? Yes No
Meet McHire, McDonald’s AI hiring platform
McHire is a talent-hiring platform that leverages conversational AI to create a shorter and more engaging candidate experience. This experience is driven by a recruiting assistant that is live 24 hours a day and reframes the application process for prospective candidates applying for roles in a restaurant. Bainbridge explains a typical journey for an applicant to the fast-food chain.
“We wanted the initial contact to feel engaging and conversational and not like filling out a traditional application form. When a candidate reaches out to register an interest in a vacant role, the AI assistant, Olivia, jumps in. It asks a number of pre-screening questions and gathers information and data on the candidate. They complete an interactive assessment that populates the application form. The system assesses their suitability for the role, and if successful, they are automatically set up for an interview at the restaurant they are applying for.
“It’s a much simpler, quicker, and easier process that’s more enjoyable for the applicant. And that was our thinking throughout, we wanted to ensure we were candidate-centric in our thinking while supporting our restaurant recruiters.”
McHire is a significant change to McDonald’s UK&I prior model for talent acquisition, having introduced its last system a number of years ago. It has seen a significant improvement in hiring times, the number of candidates completing applications and overall candidate satisfaction. Following the roll-out of McHire, average time-to-hire has dropped by nearly 65%. Restaurants have also seen a 20% increase in the number of candidates fully completing the job application process. 85.9% of surveyed candidates rated their experience as four out of five or five out of five. (December 2022)
Navigating the AI vendor selection process
As with any HR vendor selection process, choosing the right partner is paramount. The first stage of this process is diagnosing the challenge the technology, tool, or vendor needs to solve. With over 200 franchisees, creating a clear picture of the organization’s needs was paramount. Alongside digital consultancy and partner Enfuse Group, McDonald’s invested a significant amount of time in creating a cross-functional team across the business, to understand restaurant and recruitment challenges. Bainbridge outlines the huge scope of stakeholders involved in this process.
“To understand the challenges for our restaurants and what they wanted to move away from was essential. We needed to make a system that would work for the entire business, so we pulled together members from all areas of the business, bringing together what we refer to internally as the ‘Three Legs of the Stool’ —franchisees, suppliers, and company employees. The teams included Talent Recruiters, People Managers, franchisees, HR, technology teams, legal teams, finance, cyber security, and data analysts, who all came together to help us define what we needed. It was a complete deep dive into the current process.”
McDonald’s and Enfuse Group used these insights to create a Request for Proposal (RFP), for potential suppliers to put forward their solutions. The RFP yielded eight different solutions that McDonald’s quickly whittled down to four options based on functionality. The cross-functional team met with each vendor to understand their product and how it could be customized to fit McDonald’s.
“There were three parts to this process. Understanding each vendor’s business and future thinking, vendor demonstrations, and an open question and answer session with the cross-functional team. We also held ‘Commercial Clarification Calls’ to ensure that the pricing and costs submitted were aligned to the needs of the business.”
This resulted in a decision paper, and updated Business Case, which outlined the process, scoring, and rationale before being approved. This best practice has since been incorporated into McDonald’s People Technology ways of working for the future.
The roll-out of McHire
The difference between the theoretical application and practical adoption of tools and software is an uncomfortable reality for HR leaders to face. Software gathers dust when organizations select the wrong tool, but also when implementation and training are ineffective. Indeed, another vital component of McDonald’s success with McHire was the adoption of the system.
“You can have the best solution, but if it’s not adopted well in the restaurants, then that causes even more frustration. The business has really embraced the system. They’ve told us it was the step change they needed to support their recruitment needs in what is an extremely challenging and competitive market.”
Alongside support materials from the AI vendor, McDonald’s took a personalized approach to training. Moreover, the creation of a test environment for restaurants was vital in ensuring the adoption of the technology.
Customized training and community development
McDonald’s curated in-person sessions for smaller groups to come together and begin learning how to use this technology. It personalized the sessions, allowing each stakeholder to access training highly relevant to their job role.
“We adapt the materials for different audiences. From a franchisee’s point of view, would they be in the finer detail of the recruitment system? No, and we wouldn’t expect them to be, but we did want to take them on the journey and for them to understand how this technology will impact their business and the benefits it will deliver. The People Managers and recruiters needed the details and finer training, given the role they play using the system day to day.”
“And for change on this scale, we feel there’s nothing better than face-to-face training. It was the first time after Covid-19 we were reverting to in-person, and it worked extremely well. We had great buy-in across the business. People could break out into smaller sessions and share their experiences. We delivered over 30 face-to-face sessions across the UK&I for over 2,000 users and created open forums for people to ask questions and share their experiences. It became a community that helped each other adopt the technology, with the project team on hand to support through webinars, where needed.”
Developing a test environment
These sessions also included access to a testing environment. Ahead of the go-live date, restaurants, and key employees were able to access a secure environment to explore the way McHire works. This was augmented with a pilot scheme of 29 restaurants that initially went live and processed real-life candidates.
“It let people become accustomed to the system and experience the application process. It helped them understand what it was like in practice and gave them a chance to test and feedback, so we had the time to implement any changes ahead of the full deployment.”
This test-based approach has continued since the release of McHire, to constantly iterate and improve the platform. To date, over twenty user-led enhancements have been successfully rolled out since its launch to an estate of 1,450 restaurants.
“Our approach now is to ensure we only implement systems that we can continue to evolve. Through our ongoing engagement with our franchisees and restaurant teams, we work to understand how we can continually improve, sharing our feedback with the supplier, and road mapping enhancements to make sure the technology continues to add the value our restaurant teams need.”
AI enables rather than replaces
Bainbridge also highlights the role the franchisees have played in adopting this technology. With the support of the McDonald’s UK&I People Team, AI has helped franchisees to tackle their hiring bottlenecks, from locating quality candidates to securing the right quantity of applicants in the pipeline.
“AI is an important enabler of achieving what our franchisees and recruiters need. However, AI on its own won’t guarantee the right outcomes. Rather it needs to be implemented holistically across people, processes, policy, and data. Each restaurant has its own specific challenges and needs and McHire was able to support this, whether that be a high-volume restaurant with high recruitment needs or a hard-to-hire location the system has been a great fit for all.”
The introduction of this technology has freed up time for the recruiting team to focus on adding value in other areas. McDonald’s estimates each recruiter has saved on average three to five hours of time per week. Recruiting leaders are reinvesting this time into training and onboarding new joiners to support retention and delivering a great customer experience.
Why McDonald’s created a People Technology team
To support the ever-increasing people technology portfolio and the introduction of McHire and future software implementations, McDonald’s created its first People Technology team in the UK and Ireland. Previously, this work spanned across two different functions, People and Technology.
This team owns the strategic direction of the portfolio while being aligned to business needs, whether that be recruitment, scheduling, HR, or engagement.
Within that work, the team covers various responsibilities, including:
Project Management. This complex change for McDonald’s required working with six third-party systems and myriad deadlines. The PeopleTech team partnering with Enfuse Group ensured that all the dependencies were managed effectively, risks and issues were resolved, and decisions were quickly but carefully made to avoid delays to implementation.
This complex change for McDonald’s required working with six third-party systems and myriad deadlines. The PeopleTech team partnering with Enfuse Group ensured that all the dependencies were managed effectively, risks and issues were resolved, and decisions were quickly but carefully made to avoid delays to implementation. Design & Configuration. A key part of encouraging adoption was creating an intuitive and simple process. All team members came together to ensure that the design was candidate-centric.
A key part of encouraging adoption was creating an intuitive and simple process. All team members came together to ensure that the design was candidate-centric. Change Management . With the prior solution dating back to 2007, this was a huge change for the restaurant teams. The PeopleTech team worked closely with Enfuse Group and their own Change Team to manage the planned change that was delivered against project goals.
. With the prior solution dating back to 2007, this was a huge change for the restaurant teams. The PeopleTech team worked closely with Enfuse Group and their own Change Team to manage the planned change that was delivered against project goals. Transition to business-as-usual. Once live, McDonald’s needed to ensure that the rollout continued to be a success. The People Technology team worked with franchisees and restaurants to resolve any issues and implement any further changes.
“We needed a team of people who proudly have one foot in technology, and another in restaurant operations. The team guides the business and engages with our franchisees on their work and systems that would help them succeed.”
“The team has done a huge amount of work to engage with every user of the system through phone calls, webinars, and surveys to solicit feedback. That includes our franchisees, our recruiters, and our new hires that have come through the system. We’ve even spoken with people that have completed the process and haven’t been hired.”
Where next for AI at McDonald’s?
With McHire rolled out in the US and Canada, and the UK and Ireland, McDonald’s and some of its franchisees are now beginning to scale the use of AI for hiring and HR processes across the globe. The journey to create a customized model for McDonald’s UK and Ireland is informing this process.
“I’m proud to say that McHire is a global solution adopted across many markets. We’re feeding and supporting global teams with our thoughts, experience, and lessons learned. There’s a working group to make sure this knowledge is shared and to ensure the system is right for each market. That includes legalities but also how processes work in each location.”
Moreover, with a proven user case in the UK and Ireland, and a dedicated People Technology team, McDonald’s is now turning its attention to other areas of the employee lifecycle and considering the impact AI may have.
“We are now focused on the next stage of the candidate journey, their onboarding experience. We are exploring how AI can support here, from the welcome meeting through to celebrating their first day and anniversary with us. There is clearly a huge amount of potential to explore here, and I’m looking forward to continuing to unlock it.”
Subscribe to get your daily business insights
| 2023-03-20T00:00:00 |
https://www.hrdconnect.com/casestudy/inside-mchire-how-ai-helped-mcdonalds-drop-time-to-hire-by-65/
|
[
{
"date": "2023/03/20",
"position": 65,
"query": "artificial intelligence hiring"
}
] |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.