Code of Ethics in Technology

I spent a recent Saturday morning talking to a group of grade school kids about artificial intelligence. Many of them had never coded before, let alone heard of AI. During the session, one exercise required them to come up with ideas for how the AI they create would be used in the real world. I was struck by the kids’ genuine interest in creating AI solutions that would help people, rather than divide them. I left that classroom with renewed faith in the future of innovation — especially if industry can extend technology-focused career opportunities to people from different backgrounds and with fresh perspectives.
Shifting from rhetoric to action
As the employee and public response to Google’s Project Maven illustrated, the need for ethical AI is real and immediate: it is essential to ensuring that interactions with technology mitigate risk, help people and improve work. A key challenge for industry is shifting the global conversation away from AI as a threat to human jobs and safety, and toward AI as an ethical complement to human ingenuity. In short, businesses need to be honest about AI’s impact on the global economy while transparently addressing public concerns about the technology.
Today’s digital literacy opportunities mostly exist outside the realm of regular education. Most programs are extracurricular, pursued by kids or early-career employees who are already interested in technology and in a position to pay to learn new skills. These opt-in courses help younger generations develop the computational thinking, problem-solving, analytical and creative skills needed to work with AI and other emerging technologies. That is precisely why they need to be accessible to more people.
Companies also need to invest in proactive retraining of technical and developer workforces to close digital skill gaps, diversify talent pools developing the technology and boost ethical AI literacy. Specifically, business leaders need to empower executives and human resources with the tools, data and space to understand the evolving skill sets needed to work with AI in an ethical way.
Companies should think critically about how to convey to current employees and future workforces the massive potential of working alongside AI. And, most importantly, AI leaders need to call on industry and governments around the world to incorporate ethical practices into staff training throughout the ranks, and to hold them accountable once the commitment is made.
Defining industry’s role in helping people understand AI
In the short term, companies should prioritize establishing working relationships with public sector partners and invest in community school programs that support digital education. After all, industry has a large stake in the successful education of younger generations, many of whom were born into a digitally native world and will enter the workforce over the next decade.
Young people, for their part, are in a unique position to gain new skills from in-person mentorships offered by experts, developers and volunteers who currently work in technology and AI. Teaching diverse cohorts to code and introducing them to AI helps solve some of industry’s immediate talent needs, but society also needs to equip people with universally available data and adaptable skills to prepare for a shared future with AI.
Indeed, traditional office skills, and even software programming skills, will need to evolve for people to coexist successfully and sustainably with AI in the workplace. Companies like Infosys have already committed to retraining millions of workers whose fields lie in the path of automation. LinkedIn launched an internal AI academy for developers, engineers and technical recruits to retool them for an automated future. In general, companies should teach new generations of people pursuing technology careers about ethical AI from day one, and encourage them to bring others into the fold.
My company’s corporate effort to teach younger generations about AI launched at the beginning of 2018. The program’s early work has revealed two key things: young people focus on building positive applications of AI, and they approach learning about ethical AI with an open mind. Industry’s current push to outfit people with digital skills focuses squarely on coding, leaving out the non-coders and creative minds needed to advance technologies that continuously learn and, eventually, self-code, like AI. That is why the program’s curriculum extends beyond the digital skills needed to build AI and centers on “soft skills.”
Outfitting future generations with skills and inclusivity
At their core, AI literacy programs should teach young people traits like empathy that guide how they interact with one another, and how they will work with automated technologies like AI in the near future. However, to truly democratize computer and AI training for people from every background, industry should look for approachable avenues to introduce more people to emerging AI-driven innovations and outfit them with the skills needed to pursue careers in technology. After all, achieving diversity in business, preparing employees for a technology-driven future and instilling ethics into innovation require involvement from as many people as possible. Society stands to benefit immensely from making progress on all three fronts.
This article was inspired by TechCrunch.