<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Psychology of LLMs on Standard error</title><link>https://t-redactyl.io/series/psychology-of-llms/</link><description>Recent content in Psychology of LLMs on Standard error</description><generator>Hugo</generator><language>en-us</language><lastBuildDate>Sat, 27 Jul 2024 00:00:00 +0000</lastBuildDate><atom:link href="https://t-redactyl.io/series/psychology-of-llms/index.xml" rel="self" type="application/rss+xml"/><item><title>Are LLMs really on the path to AGI?</title><link>https://t-redactyl.io/posts/2024-07-27-are-llms-on-path-of-agi/</link><pubDate>Sat, 27 Jul 2024 00:00:00 +0000</pubDate><guid>https://t-redactyl.io/posts/2024-07-27-are-llms-on-path-of-agi/</guid><description>&lt;p&gt;Claims of &lt;a href="https://en.wikipedia.org/wiki/Artificial_general_intelligence"&gt;artificial general intelligence, or AGI&lt;/a&gt;, have sparked some of the hottest and most emotionally charged discussions about large language models. These claims also probably carry the most intellectual weight of all the claims about LLM abilities. For example, in March 2023, Microsoft Research released a paper titled &lt;a href="https://arxiv.org/pdf/2303.12712"&gt;"Sparks of Artificial General Intelligence"&lt;/a&gt;, claiming that a series of experiments on GPT-4 showed the model displaying at least some signs of AGI.
Later that year, researchers &lt;a href="https://en.wikipedia.org/wiki/Blaise_Ag%C3%BCera_y_Arcas"&gt;Blaise Agüera y Arcas&lt;/a&gt; (an AI research fellow at Google) and &lt;a href="https://de.wikipedia.org/wiki/Peter_Norvig"&gt;Peter Norvig&lt;/a&gt; (a Distinguished Education Fellow at the Stanford Institute for Human-Centered AI) &lt;a href="https://www.noemamag.com/artificial-general-intelligence-is-already-here/"&gt;published a piece&lt;/a&gt; stating that LLMs had already crossed the threshold for AGI.&lt;/p&gt;</description></item><item><title>Could LLMs be conscious or sentient?</title><link>https://t-redactyl.io/posts/2024-07-13-could-llms-be-sentient/</link><pubDate>Sat, 13 Jul 2024 00:00:00 +0000</pubDate><guid>https://t-redactyl.io/posts/2024-07-13-could-llms-be-sentient/</guid><description>&lt;p&gt;In June 2022, a story hit international news that &lt;a href="https://www.washingtonpost.com/technology/2022/06/11/google-ai-lamda-blake-lemoine/"&gt;a Google engineer believed that one of their large language models had achieved sentience&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Blake Lemoine was testing Google's conversational LLM &lt;a href="https://blog.google/technology/ai/lamda/"&gt;LaMDA&lt;/a&gt; (the model that went on to power the original &lt;a href="https://blog.google/technology/ai/bard-google-ai-search-updates/"&gt;Bard&lt;/a&gt;) through a series of chats. Over time, as Lemoine completed more and more of these tests, he came to believe that the model was showing signs of sentience - in fact, that it had a soul. Alarmed by the idea of a sentient entity being potentially exploited, Lemoine went to the press to advocate for policies to protect LaMDA and similar models. He was promptly fired for violating Google's privacy policies, as he had released confidential transcripts to the press, and &lt;a href="https://www.theguardian.com/technology/2022/jun/12/google-engineer-ai-bot-sentient-blake-lemoine"&gt;Google released their own statement&lt;/a&gt;, saying:&lt;/p&gt;</description></item><item><title>Can LLMs use language at a human-like level?</title><link>https://t-redactyl.io/posts/2024-06-29-humanity-in-llms-language/</link><pubDate>Sat, 29 Jun 2024 00:00:00 +0000</pubDate><guid>https://t-redactyl.io/posts/2024-06-29-humanity-in-llms-language/</guid><description>&lt;p&gt;Over the past two years, since the initial release of &lt;a href="https://openai.com/chatgpt/"&gt;ChatGPT&lt;/a&gt; in November 2022, we've been in a hype cycle around &lt;a href="https://en.wikipedia.org/wiki/Large_language_model"&gt;large language models&lt;/a&gt;, or LLMs. Due to the very human "feeling" of LLM outputs, many people have become convinced that these models are showing human-level abilities, or are even exceeding humans in certain skills. For example, when I asked ChatGPT-4 to write me a Shakespearean sonnet about the night sky, it came up with something the Bard himself could have written:&lt;/p&gt;</description></item></channel></rss>