There is exciting new work in technology at the intersection of politics, ethics, and artificial intelligence. Jenny Chang-Rodriguez/BI
- A company’s ethics director ensures that AI is used responsibly.
- They define principles for governing the technology, understand the legal landscape, and communicate with stakeholders.
- Those in that role typically earn six-figure annual salaries.
The launch of ChatGPT marked the beginning of a new era in the business world.
The technology behind the chatbot, generative artificial intelligence, could write emails, generate code and create graphics in a matter of minutes. Suddenly, the days of workers poring over their inboxes and painstakingly crafting presentations seemed like a relic of the past.
Companies, drawn by profits and productivity gains, were quick to adopt the technology. According to a May survey from the consulting firm McKinsey & Company, 65% of the more than 1,300 companies surveyed said they now regularly use generative AI, double the number from the previous year.
But the risks of misusing the technology are huge. Generative AI can hallucinate, spread misinformation and reinforce bias against marginalized groups if not managed properly. Since the technology relies on large volumes of sensitive data, the potential for data breaches is also high. And in the worst-case scenario, there is a danger that as the technology becomes more sophisticated, it drifts further out of alignment with human values.
With great power comes great responsibility, and companies making money from generative AI must also make sure they regulate it.
That’s where the ethics director comes in.
A fundamental role in the era of AI
The details of the role vary from company to company, but broadly speaking, ethics officers are responsible for determining the impact a company’s use of AI could have on society at large, according to Var Shankar, director of AI and privacy at Enzai, a software platform for AI governance, risk and compliance. “Beyond your business and your bottom line, how does it impact your customers? How does it impact other people in the world? And then how does it impact the environment?” he said. “Then comes the idea of creating a program that standardizes and scales those questions every time AI is used,” he told Business Insider.
It’s a position that gives policy and philosophy experts, as well as programming geniuses, a foothold in the fast-moving tech industry — and often comes with a hefty six-figure annual salary.
But right now companies aren’t hiring for these roles fast enough, according to Steve Mills, director of AI ethics at Boston Consulting Group. “I think there’s a lot of talk about risks and principles, but very little action to operationalize them within companies,” he said.
A C-suite level responsibility
According to Mills, those successful in the role should have four areas of expertise: technical knowledge of generative AI, experience building and deploying products, knowledge of key laws and regulations around AI, and significant experience hiring and making decisions within an organization.
“Too often I see people putting mid-level managers in charge, and while they may have experience, desire and passion, they typically don’t have the stature to change things within the organization and bring together legal, business and compliance teams,” he said. Every Fortune 500 company using AI at scale should task an executive with overseeing a responsible AI program, he added.
Shankar, a lawyer by training, said the position does not require any specific training. The most important qualification is understanding a company’s data. That means having an idea of the “ethical implications of the data being collected and used, where it comes from, where it was before it came to the organization and what kind of consent is in place for it,” he said.
He pointed to the example of healthcare providers who could unintentionally perpetuate biases if they don’t have a solid understanding of their data. According to a study published in Science, hospitals and health insurance companies that used an algorithm to identify patients who would benefit from “high-risk care management” ended up prioritizing healthier white patients over sicker Black patients. That’s the kind of mistake an ethics officer can help companies avoid.
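The bias in that study is commonly attributed to the algorithm predicting healthcare spending as a proxy for illness. As a minimal sketch of that failure mode, using entirely synthetic data and hypothetical group labels rather than anything from the actual system, consider:

```python
import numpy as np

# Synthetic illustration of proxy bias: two groups with identical illness,
# but one group historically spends less on care. Ranking patients by
# spending (a proxy "risk score") then under-flags the lower-spending group.
# All numbers and thresholds here are hypothetical.

rng = np.random.default_rng(0)
n = 10_000

illness = rng.normal(loc=5.0, scale=1.0, size=n)  # true need, same for everyone
group_b = rng.random(n) < 0.5                     # group facing barriers to care

# Spending tracks illness, but group B spends less at the same illness level.
spending = 1_000 * illness + rng.normal(0, 500, size=n)
spending[group_b] -= 1_500

# Flag the top 10% of proxy scores for "high-risk care management."
flagged = spending >= np.quantile(spending, 0.90)

print(f"Group B share of population:    {group_b.mean():.2f}")
print(f"Group B share of flagged:       {group_b[flagged].mean():.2f}")
print(f"Mean illness, flagged group A:  {illness[flagged & ~group_b].mean():.2f}")
print(f"Mean illness, flagged group B:  {illness[flagged & group_b].mean():.2f}")
```

On this synthetic data, the flagged pool skews heavily toward group A, and the group B patients who do get flagged are sicker on average, mirroring the pattern the study reported: equally sick patients are ranked as lower risk simply because less was spent on their care.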
Collaboration between companies and industries
Those in this position should also be able to communicate confidently with a variety of stakeholders.
Christina Montgomery, vice president and chief privacy and trust officer at IBM and chair of its AI Ethics Board, told BI that her days are typically packed with client meetings and events, in addition to other responsibilities.
“I’ve been spending a lot of time externally, probably more time lately, speaking at events and engaging with policymakers and on external boards because I feel like we have a huge opportunity to influence and determine what the future looks like,” she said.
She sits on external boards such as that of the International Association of Privacy Professionals, which recently launched an Artificial Intelligence Governance Professional certification for people who want to lead in the field of AI ethics. She also collaborates with government leaders and other ethics officers.
“I think it’s absolutely critical that we talk to each other regularly and share best practices, and we do a lot of that across all companies,” she said.
Her goal is to develop a broader understanding of what is happening at a societal level, something she considers key to her role.
“My fear in the current situation is that there is no global interoperability between all these regulations, what is expected and what is right and wrong in terms of what companies will have to comply with,” she said. “We cannot operate in a world like that. That’s why conversations between companies, governments and boards of directors are very important right now.”