While folks are fixated on the journey of Sophia the robot, I’d like to point out that artificial intelligence is a technology, a platform, and a concept shared by government, industry, and academia. AI is not an individual, object, or sentient being. And AI definitely doesn’t have a gender.
The connections and distinctions between AI and robots are more nuanced than they may appear. Some robots do run on AI technology that allows them to operate independently, learn from their surroundings, and interact with people. But many AI platforms, technologies, and innovations have nothing to do with robots, and never will.
The fundamental—and commonly sensationalized—question of whether robots can be human also misses a crucial point. It’s not about whether AI can help robots become human. Robots should not pretend to be human at all. AI can help people solve human problems without assuming a sentient role in society. People building AI can help fellow humans by focusing on problem solving and enhancing productivity.
AI, for its part, is not nearly advanced enough—yet—to claim human-level intelligence, empathy, or many of the other fundamental qualities that make people human. Giving AI a human platform—and over-humanizing the technology, in general—creates more problems than it solves. It also presents the global community with a false sense of what AI actually is, what the technology can do, and why people like me dedicate their lives to building AI platforms.
I believe it’s far more important for technologists to communicate the benefits of AI technology itself than to showcase robots that solve no real problems, perpetuate gender stereotypes, and reveal data-driven biases. The technology community and global society need to work on developing useful, purposeful AI that solves human problems, like complex health care and transportation issues, and business problems, like boosting productivity and filling gaps in technical expertise across disciplines. We need AI that neutralizes biases by taking gender out of the equation completely and by using objective data sources to build, grow, and learn from interactions with human counterparts.
Using AI and robots to sensationalize the human experience and scaremonger society into believing a robot takeover is an inevitable future makes life harder for everyone. For consumers, it prevents people from truly embracing the increasingly personalized benefits AI can offer to their daily lives. For technologists like me who work on AI every day, the practice of demonizing and aggrandizing AI advancement severely impedes actual innovation and technical progress.
Let’s not underestimate the importance of this debate. Talking about the ethics that surround the conversation of AI and machine learning is critical as it will help us make the best use of this emerging technology—ensuring that we don’t miss the real opportunity that AI can bring to all our lives.
So, before we think about making new, outsized claims about robots and AI integrating into society, let’s all take a breath. After all, we should be working tirelessly and together to get the basics of the self-learning technology right. My fellow technologists and I from industry, academia, and the public sector need to develop comprehensive ethical standards that hold up for the long term. And commit to them.
Engineers need to ensure that the AI they create can learn, recognize bias, and avoid repeating past mistakes before it replaces traditionally human-held positions in the workforce and in society at large. Ultimately, society’s responsibility is not to make AI more human-like, but to make AI that significantly improves human lives.
Kriti Sharma is the vice president of artificial intelligence at Sage, a global integrated accounting, payroll and payment systems provider. She is the creator of Pegg, the world’s first virtual assistant managing everything from money to people, with users in 135 countries.