
The AI Killer Robots Are Here, According to Lazy Journalists

Sound the Alarm: The AI Killer Robots Are 'Not' Coming


Earlier this week, Vice’s Motherboard blog related a story about an Air Force simulation involving an AI-enabled drone. In a scenario that felt not just indebted to but literally pulled from the pages of classic sci-fi horror storytelling, USAF Chief of AI Test and Operations Col. Tucker “Cinco” Hamilton claimed that the AI drone determined that it would more easily accomplish its mission goals without having to coordinate with a human operator. As a result, the drone circumvented its programming and attempted to kill the human operator. According to a presentation and blog post written by Col. Hamilton for the UK’s Royal Aeronautical Society, even after being told that it would lose points for killing its operator, the AI proposed destroying the entire communication tower linking it with its human counterpart.

It’s clear why this story proved tantalizing from an AI journalism perspective. It has a bit of everything: the threat of violence, an insider’s look at how AI technology is being applied in real-world scenarios, and of course a “doomsday” narrative that feels more than a little indebted to James Cameron’s beloved “Terminator” franchise and its villainous Skynet militarized AI system. There’s just one problem with Col. Hamilton’s story… it’s not actually true.

Updates from both Insider and Vice initially suggested the simulation was not conducted by the Air Force, and later confirmed that no simulation had taken place at all. In fact, Col. Hamilton was just describing a “thought experiment” that originated outside of the US military about potential outcomes of AI drone warfare. In a new statement, Col. Hamilton says, “We've never run that experiment, nor would we need to in order to realize that this is a plausible outcome.” Which kind of sounds a lot like “I made it all up.”

AI doomerism is a new beat for journalists

If you thought that perhaps such a grievous error would give publications a moment’s pause about the breathless tone and pace of their AI coverage… you would be mistaken. A Thursday piece from USA Today leads with President Joe Biden’s comments about AI to graduates of the Air Force Academy in Colorado Springs: “It's not going to be easy decisions, guys. I met in the Oval Office with eight leading scientists in the area of AI. Some are very worried that AI can actually overtake human thinking and planning.”

It’s true that Biden met on May 5 with CEOs from leading AI companies like Google and Microsoft to discuss their technology. But just repeating an 80-year-old layperson’s vague takeaways from a meeting he had one month ago with the most passionately outspoken advocates of said technology might not be the best or most accurate way to encapsulate its challenges and dangers. Even if he is the President. Nonetheless, the headline boldly states: PRESIDENT BIDEN WARNS ARTIFICIAL INTELLIGENCE COULD “OVERTAKE HUMAN THINKING.”

Even tech stories that aren’t actually about AI are getting swept up in the hype, as publications attempt to goose traffic by pushing valuable, highly searched AI-related keywords onto every new webpage that they can.

A Fox News story on Friday describes new artificial skin research from Stanford University: synthetic skin that can recognize when it’s been damaged or injured and trigger a self-healing protocol. But while self-healing synthetic skin is a worthwhile scientific breakthrough all on its own, with obvious beneficial applications in the field of medicine, the Fox report leads with a dire warning that “robots could soon be cloaked in human-like synthetic skin…”

Wait for it…

“Similar to the cyborg assassin of the ‘Terminator’ movie franchise.”

Never mind that the original T-800 design from the first film doesn’t even have self-healing skin. Remember? When Arnold’s face gets damaged, it stays that way for the rest of the film, and you can see his metal skull protruding from underneath.

It’s not difficult to understand why this is happening

AI proves something of a perfect storm for lazy journalism and “fake news.” There’s been a remarkable wave of venture capital and investment dollars flooding into the sector, so a lot of technologists and their backers are now heavily incentivized to promote AI and get people excited about its applications. AI apps and their outputs have repeatedly gone viral on social media and now draw millions of clicks and views each day, making them extraordinarily popular targets for websites and apps that rely on search traffic or trending posts on social media.

For writers, there’s just a lot more activity in the AI space today than, say, crypto or the metaverse or even traditionally reliable clickbait-y topics like streaming TV and gaming. As long as everyday readers and consumers of internet content remain fascinated by AI, and curious about what it can do, it’s unlikely we’ll see an end to the daily crush of breathlessly excited coverage.

This isn’t even a new phenomenon

A Guardian editorial from 2018 already complained about the unreliability of the media’s AI reporting, which Carnegie Mellon computer scientist Zachary Lipton referred to as “sensationalized crap.” Broad interest in topics like machine learning, according to Lipton, had led to a “misinformation epidemic” that was creating unrealistic expectations for the technology and potentially threatening its future progress. A 2019 piece from Scientific American referred to many of the press’ claims about AI’s potential as “greatly overblown.”

Then as now, the tech media has a baseline responsibility to get the details right, even when it’s eagerly collaborating in entrepreneurs’ and investors’ efforts to drive interest in a new innovation or field. As the Vice story in particular makes clear, the mad rush for fresh AI stories and content means that, at least sometimes, due diligence isn’t getting done as thoroughly as it should, and sources and claims aren’t always being properly vetted.

Everyone loves a story about killer robots

A new piece this week in The Atlantic looks specifically at the AI apocalypse claims, which remain entirely in the realm of science fiction, despite how frequently they’re now repeated in mainstream news publications. As University of Washington computational linguist Emily Bender explains, doomsday AI scenarios all rest on the same unspoken assumption: this technology is already extremely powerful, and it’s virtually guaranteed to become even more powerful, so “you’d be a fool not to invest.” Technology strategist Rachel Coldicutt makes a similar point in a Medium post this week. If we assume that AI apps are “unworldly, godlike, and unknowable,” this implies that “the people who created them must be more than gods.”

Rather than a runaway train speeding unavoidably toward the end of human civilization, AI technology is – so far – evolving “incrementally,” as AI Now Institute co-founder Meredith Whittaker points out. It may take over more and more jobs that were formerly filled by humans, and improve at all sorts of everyday tasks over time, but there’s no reason to suspect it will suddenly break free of its bonds and decide to independently kill all humans, or that we’d at some point lose our ability to pull the plug on our AI systems and invent something else instead.

Rather than apocalyptic scenarios, Whittaker and other like-minded writers and commentators fear the more immediate dangers of AI applications that are already here: misinformation, bias, the creation of nonconsensual pornography, labor violations, copyright infringement, and so forth. These real, everyday disadvantages to pushing AI apps into every facet of our lives really could use more attention and coverage from journalists, but they lack the clickiness of stories about armed killer robots.

Even honest reporting about AI embellishes its actual threat

In late May, a fake photo appearing to depict an explosion near the Pentagon in Washington, DC circulated online, almost certainly created in Midjourney or a similar generative AI app. But though the hoax photo was widely shared on social media, and covered by just about every major media outlet… it doesn’t appear to have fooled all that many actual human people.

A Twitter search reveals that most tweets about it were discussing the fact that it’s fake. The Washington Post notes that the image “appears to have done little immediate damage,” and that Twitter suspended the account – which was posting as a Bloomberg-affiliated feed – within about 20 minutes. (The building featured in the image isn’t even the Pentagon.)

A new piece from Harvard Business Review suggests that the roots of the problem might be very deep indeed, stemming from the basic way we define and discuss “artificial intelligence.” HBR argues that in most practical, everyday scenarios, we don’t even need true AI, just advanced forms of machine learning. The term AI, writer Eric Siegel argues, is functionally too vague at this point to be useful, and overpromises about what most of this technology is and how it actually works. As other writers have previously pointed out, widespread and careless use of the term “AI” has also created confusion about Artificial General Intelligence (AGI), the still far-off notion of truly conscious, sentient machines. Siegel suggests the straightforward solution that we stop using “AI” to refer to non-AGI developments, and go back to “machine learning.”

But of course, this has the negative consequence of being a lot less sexy, and therefore less clickable as a link in search engine results or on social media, and less appealing to journalists, their editors, and the subjects about whom they’re writing. As long as reality remains at least somewhat at odds with public perception of and interest in the technology, it’s sadly likely this misleading or distracting coverage will continue. - Lon Harris

Known for his incredible talent and chart-topping hits, Grammy Award-winning singer Miguel will be joining us at SUPERCHARGE LA: Access to Capital & Cocktails, and he promises to elevate the energy and excitement of the event to new heights on Wednesday, June 7th, from 6-10 p.m. at 1212 Santa Monica!

An expert in design, sustainability, and philanthropic entrepreneurship, Miguel will be joined by T3MP0’s CEO, Roger Chabra, to discuss web3, the metaverse, and the work they’re doing to ensure founders from all backgrounds can participate in its growth.

Brought to you by dot.LA, Pledge.LA, and the Annenberg Foundation, SUPERCHARGE LA is on a mission to bridge the gap and provide much-needed access to capital for underrepresented LA founders.

At the event, you'll rub shoulders with esteemed leaders, influential personalities, and renowned partners across LA Tech, including Los Angeles Chargers running back Austin Ekeler and representatives from M13 and Zillow.

🎟️ Don't hesitate! RSVP now at bit.ly/SuperchargeLA to secure your spot at this must-attend event.

We can't wait to welcome you to SUPERCHARGE LA!

On this episode of Office Hours, Apex founder and CEO Ian Cinnamon discusses the importance of investing in space exploration and shares his thoughts on the evolving space ecosystem in Los Angeles.

Global consulting firm McKinsey & Company is launching InLA, an accelerator program for underrepresented founders.

Automotive manufacturing veteran David Apps joined climate tech company CarbonCapture Inc. as vice president of manufacturing.

Measurabl, a San Diego-based data management platform, raised $93 million in its fourth round of funding co-led by Energy Impact Partners and Sway Ventures.

- Warner Bros. Discovery claims that 70% of HBO Max users have already switched over to the new Max platform one week after launch.

- Apple’s AR/VR headset could go into mass production as early as October for a December launch.

- In a non-binding vote, Netflix shareholders rejected proposed pay packages for the company’s top executives.

- Apollo app developer Christian Selig claims Reddit plans to charge him $20 million a year for access to its API.

--

How Are We Doing? We're working to make the newsletter more informative, with deeper analysis and more news about L.A.'s tech and startup scene. Let us know what you think in our survey, or email us!