Facebook has recently come under intense scrutiny for sharing the data of millions of users without their knowledge. We’ve also learned that Facebook is using AI to predict users’ future behavior and selling that data to advertisers. Not surprisingly, Facebook’s business model and how it handles its users’ data have sparked a long-awaited conversation — and controversy — about data privacy. These revelations will undoubtedly force the company to evolve its data sharing and protection strategy and policies.
More importantly, it’s a call to action: We need a code of ethics.
As the AI revolution continues to accelerate, new technology is being developed to solve key problems faced by consumers, businesses and the world at large. It is the next stage of evolution for countless industries, from security and enterprise to retail and healthcare. I believe that in the near future, almost all new technology will incorporate some form of AI or machine learning, enabling humans to interact with data and devices in ways we can’t yet imagine.
Moving forward, our reliance on AI will deepen, inevitably causing many ethical issues to arise as humans turn over their cars, homes and businesses to algorithms. These issues and their consequences will not discriminate, and the impact will be far-reaching — affecting everyone, from private citizens to small businesses utilizing AI and entrepreneurs developing the latest tech. No one will be left untouched. I am aware of a few existing initiatives focused on more research, best practices and collaboration; however, it’s clear that there’s much more work to be done.
Researchers, entrepreneurs and global organizations must lay the groundwork for a code of AI ethics to guide us through these upcoming breakthroughs and inevitable dilemmas. I should clarify that this won’t be a single code of ethics; each company and industry will have to come up with its own unique guidelines.
For the future of AI to become as responsible as possible, we’ll need to answer some tough ethical questions. I do not have the answers to these questions right now, but my goal is to bring more awareness to this topic, along with simple common sense, and work toward a solution. Here are some of the issues related to AI and automation that keep me up at night.
The ethics of driverless cars
With the invention of the car came the invention of the car accident. Similarly, an AI-augmented car will bring with it ethical and business implications that we must be prepared to face. Researchers and programmers will have to ask themselves what safety and mobility trade-offs are inherent in autonomous vehicles.
Ethical challenges will unfold as algorithms are developed that govern how humans and autonomous vehicles interact. Should these algorithms be transparent? For example, will a car rear-end an abruptly stopped car or swerve and hit a dog on the side of the street? Key decisions will be made in a split second by a fusion processor running AI and connected to a car’s vast array of sensors. Will entrepreneurs and small businesses be kept in the dark while these algorithms dominate the market?
Driverless cars will also transform the way consumers behave. Companies will need to anticipate this behavior and offer solutions to fill those gaps. Now is the time to start predicting how this technology will change consumer needs and what products and services can be created to meet them.
The battle against fake news
As our news media and social platforms become increasingly AI-driven, businesses from startups to global powerhouses must be aware of the technology’s ethical implications and choose wisely when working it into their products.
We’re already seeing AI being used to create and defend against political propaganda and fake news. Meanwhile, dark money has been used for social media ads that can target incredibly specific populations in an attempt to influence public opinion or even political elections. What happens when we can no longer trust our news sources and social media feeds?
AI will continue to give algorithms significant influence over what we see and read in our daily lives. We have to ask ourselves how much trust we can put in the systems that we’re creating and how much power we can give them. I think it’s up to companies like Facebook, Google and Twitter — and future platforms — to put safeguards in place to prevent them from being misused. We need the equivalent of Underwriters Laboratories (UL) for news!
The future of the automated workplace
Companies large and small must begin preparing for the future of work in the age of automation. Automation will replace some labor and enhance other jobs. Many workers will be empowered with these new tools, enabling them to work more quickly and efficiently. However, many companies will have to account for the jobs lost to automation.
Businesses should begin thinking about what labor may soon be automated and how their workforce can be utilized in other areas. A large portion of the workforce will have to be trained for new jobs created by automation in what is becoming commonly referred to as collaborative automation. The challenge will be deciding who should retrain and redistribute employees whose jobs have been automated or augmented: the government, employers or automation companies? In the end, these sectors will need to work together as automation changes the landscape of work.
It’s true that AI is the next stage of tech evolution, and that it’s everywhere. It has become portable, accessible and economical. We have now, finally, reached the AI tipping point. But that point is on a precarious edge, see-sawing somewhere between an AI dreamland and an AI nightmare.
In order to surpass the AI hype and take advantage of its transformative powers, it’s essential that we get AI right, starting with the ethics. As entrepreneurs rush to develop the latest AI tech or use it to solve key business problems, each has a responsibility to consider the ethics of this technology. Researchers, governments and businesses must cooperatively develop ethical guidelines that help to ensure a responsible use of AI to the benefit of all.
From driverless cars to media platforms to the workplace, AI is going to have a significant impact on how we live our lives. But as AI thought leaders and experts, we shouldn’t just deliver the technology — we need to closely monitor it and ask the right questions as the industry evolves.
There has never been a more exciting time to be an entrepreneur during the rise of AI, but there’s a lot of work to be done now and in the future to ensure we’re using the technology responsibly.