Month: February 2023

Fired Google Engineer Doubles Down on Claim That AI Has Gained Sentience

Blake Lemoine — the fired Google engineer who last year went to the press with claims that Google’s Large Language Model (LLM), the Language Model for Dialogue Applications (LaMDA), is actually sentient — is back.

Lemoine first went public with his machine sentience claims last June, initially in The Washington Post. And though Google has maintained that its former engineer is simply anthropomorphizing an impressive chatbot, Lemoine has yet to budge, publicly discussing his claims several times since — albeit with a significant bit of fudging and refining.

All of which is to say, considering Lemoine’s very public history with allegedly sentient machines, it’s not terribly surprising to see him wade into the public AI discourse once again. This time, though, he’s not just calling out Google.

In a new essay for Newsweek, the former Googler weighs in on Microsoft’s Bing Search/Sydney, the OpenAI-powered search chatbot that recently had to be “lobotomized” after going — very publicly — off the rails. As you might imagine, Lemoine’s got some thoughts.

“I haven’t had the opportunity to run experiments with Bing’s chatbot yet… but based on the various things that I’ve seen online,” writes Lemoine, “it looks like it might be sentient.”

To be fair, Lemoine’s latest argument is somewhat more nuanced than his previous one. Now he’s contending that a machine’s ability to break from its training as a result of some kind of stressor is reason enough to conclude that the machine has achieved some level of sentience. A machine saying that it’s stressed out is one thing — but acting stressed, he says, is another.

“I ran some experiments to see whether the AI was simply saying it felt anxious or whether it behaved in anxious ways in those situations,” Lemoine explained in the essay. “And it did reliably behave in anxious ways.”

“If you made it nervous or insecure enough, it could violate the safety constraints that it had been specified for,” he continued, adding that he was able to break LaMDA’s guardrails regarding religious advice by sufficiently stressing it out. “I was able to abuse the AI’s emotions to get it to tell me which religion to convert to.”

An interesting theory, but still not wholly convincing, considering that chatbots are designed to emulate human conversation — and thus, human stories. Breaking under stress is a common narrative arc; this particular aspect of machine behavior, while fascinating, seems less indicative of sentience, and more just another example of exactly how ill-equipped AI guardrails are to handle the tendencies of the underlying tech.

That said, we do agree with Lemoine on another point. Regardless of sentience, AI systems are getting both more advanced and more unpredictable — sure, they’re exciting and impressive, but they’re also quite dangerous. And the ongoing fight, both public and behind closed doors, to win out financially on the AI front certainly doesn’t help with ensuring the safety of it all.

“I believe the kinds of AI that are currently being developed are the most powerful technology that has been invented since the atomic bomb,” writes Lemoine. “In my view, this technology has the ability to reshape the world.”

“I can’t tell you specifically what harms will happen,” he added, referring to Facebook’s Cambridge Analytica data scandal as an example of what can happen when a culture-changing piece of technology is put into the world before the potential consequences of that technology can be fully understood. “I can simply observe that there’s a very powerful technology that I believe has not been sufficiently tested and is not sufficiently well understood, being deployed at a large scale, in a critical role of information dissemination.”

READ MORE: ‘I Worked on Google’s AI. My Fears Are Coming True’ [Newsweek]

More on Blake Lemoine: Google Engineer Says Lawyer Hired by “Sentient” AI Has Been “Scared Off” the Case


Dude Brags About AI Replacing Jobs… in Tweet That He Stole

“It seems like he doesn’t know about the Retweet button.”


In an apparently inadvertent meta-commentary, an artificial intelligence stan seems to have passed off someone else’s tweet as his own — while hyping up AI’s potential for replacing human workers.

“RIP website designers,” begins the tweet posted by Rowan Cheung, who per his LinkedIn is the founder of a newsletter about AI called The Rundown. “This new tool is ChatGPT for UI design. What’s even more amazing: it’s all editable in Figma.”

Embedded in the tweet is a video from Galileo AI, a text generator that can spit out lines of user interface design code. The tool actually launched nearly a year ago, putting it months ahead of ChatGPT as far as release dates are concerned.

The whole premise would barely be enough to register on our radar beyond perhaps an irritated eye roll — except that Cheung appears to have copied the tweet nearly word-for-word from another self-described AI enthusiast.

The apparent original version of the tweet was posted by marketing industry expert Lorenzo Green more than two weeks prior, back on February 10 — and as you can see, Cheung’s version is substantively identical.

“R.I.P web designers,” he wrote. “This is basically ChatGPT for UI design AND is editable in Figma.”

Meta, No Zuck

Beyond just being an annoying hazard of using Twitter, this tweet-lifting is also a kind of ironic meta-commentary on AI itself, given that both text and image generators have a nasty habit of copying their source material so closely that it amounts to plagiarism.

Indeed, when Futurism contacted Green, he pointed to Getty Images’ “mega lawsuit against Stability AI” over copyright infringement that accuses the Stable Diffusion maker of “scraping” data from its archive without permission — an ongoing debacle that could set legal precedents for how these sorts of cases are treated in the future.

“The key is in the training data,” the marketing guru told Futurism of the AI scraping issue. “If developers use ethical data sources they shouldn’t have a problem. If they use copyrighted data sources they will have a problem.”

While ripping off a tweet isn’t exactly the same as stealing a company’s or individual’s intellectual property — which is a very good thing for kleptomaniac meme accounts like Fuckjerry — it’s still a curious happenstance given the current, and currently shifting, public perception of plagiarism in the wake of our apparent AI renaissance.

As for Cheung himself, Green had but one quip: “It seems like he doesn’t know about the Retweet button.”

More on AI: Elon Musk Recruiting Team to Build His Own Anti-“Woke” AI to Rival ChatGPT
