The only positive I see is that Google’s AI speaks much better English than Skynet’s.
Google’s I/O conference always brings forth some amazing concepts, and most years I can’t wait to get my hands on them and try them myself. This year, though, CEO Sundar Pichai shared some Google Assistant updates that left me feeling a bit queasy.
I’ve got to admit that over the last couple of years I’ve become a fairly heavy user of virtual personal assistants, from Google Assistant on my phone to Amazon Alexa throughout my house (and yes, even on my phone). Each new update to the technology adds to the excitement of the devices I’m using these assistants on. I’m in awe of how well they understand me, and I’m very happy with the efficiency I gain from these services. Sometimes, when the assistants get something wrong or can’t come up with a response, I’m reminded that these are learned responses from an imperfect, human-created product. Many times, when these “fails” occur, I’m happily presented with a humorous response that I appreciate.
In reality, I’m happy about this. I can be totally content with a product that has been trained to give responses, because, to be honest, I’m not really ready for a true AI. I’m not ready for HAL to tell me that he just can’t do that for me. Which is why the video of Pichai showing off some new Assistant functionality has me feeling uneasy.
This is not a virtual assistant reading me an RSS feed of the weather forecast. It’s not Alexa telling me a canned joke, or Google Assistant guiding me to my next destination. This is conversational. This feels like it’s thinking! It even says “uhm” and “mm-hmm.” I’m not so sure I’m comfortable with this.
We have seen some huge advancements in AI over the last couple of years. It has gone from something relegated to sci-fi movies to real life. But in sci-fi movies, the AI always takes over. It always wants to kill us. This is real life, though, so there have been no problems here yet, right?
Not exactly. Remember Microsoft’s Twitter chat bot that became an oversexed Nazi in the course of 12 hours? Remember the Russian AI that decided the best course for the future was to kill humans? Or the robot woman that recently celebrated being granted citizenship in Saudi Arabia, but only after her previous violent tendencies were fixed?
I know, it seems very paranoid, and no one should want to hold back technology. After all, these assistants aren’t really mobile. It’s not like we’re going to have Terminators walking around wiping us out next year. Maybe we just need to ensure that Google’s Assistant is never installed in Boston Dynamics’ already scary robot dogs? But it’s not that easy, because physical attacks aren’t the real issue. I don’t think we have anything to worry about on that front from AI in the near future.
The problem I see is that technology companies are way too quick to point to AI as the fix for everything. We know that Google is handing over management of its own datacenters to powerful AI. When Facebook’s Mark Zuckerberg sat in front of the US Congress to answer for the mishandling of people’s personal information, his response wasn’t that he would stop; it was that Facebook needed more AI. We are now pushing to hand our keys and steering wheels over to artificial intelligence so that we don’t have to worry about how many drinks we have with dinner.
There’s no denying AI is the future, and I don’t see any way we could stop that from happening. I just wish we’d slow down a little. Let’s be a bit more cautious about AI where safety is involved. This is one area of technology where it should be more important to get it done right than to get it done first.