Re96-NOTES.pdf
DEC 28, 2022

(The below text version of the notes is for search purposes and convenience. See the PDF version for proper formatting such as bold, italics, etc., and graphics where applicable. Copyright: 2022 Retraice, Inc.)


Re96: News of ChatGPT, Part 1

retraice.com

An entry in the history books of the future.
A transformer-based large language model that predicts words in a sequence; some knowledge work labor costs to plummet; industries to be upended; the power of confident AI, and the unimportance of most errors; the usefulness of humans and of AI; agent-orientation of software engineering and societies; ChatGPT excelling at useful, imperfect work; power and control; the physical self-awareness of ChatGPT.

Air date: Monday, 26th Dec. 2022, 11:00 PM Eastern/US.

ChatGPT is what?

It's a really good chatbot: a large language model AI system based on the transformer architecture. It predicts the next word in a sequence, and is architected to carry on useful dialogue with humans.^1 It has caused a media buzz because it seems so alive.
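`Predicting the next word' can be sketched as scoring candidate tokens and picking (or sampling) the most probable one. A toy illustration, with made-up scores; a real model computes them over tens of thousands of tokens:

```python
import math

def softmax(logits):
    # Convert raw model scores into a probability distribution over tokens.
    m = max(logits.values())
    exps = {tok: math.exp(s - m) for tok, s in logits.items()}
    total = sum(exps.values())
    return {tok: e / total for tok, e in exps.items()}

def next_word(logits):
    # Greedy decoding: return the most probable next token.
    probs = softmax(logits)
    return max(probs, key=probs.get)

# Hypothetical scores a model might assign after "The cat sat on the":
logits = {"mat": 5.1, "rug": 2.3, "moon": 0.4}
print(next_word(logits))  # mat
```

Real systems often sample from the distribution instead of always taking the maximum, which is one reason the same prompt can yield different answers.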

Ben Goertzel once said something like, `It's not whether the machine will say that it is conscious, it's whether you should believe it.' See also the training method used for ChatGPT, Proximal Policy Optimization (PPO), as applied in Roboschool: https://openai.com/blog/openai-baselines-ppo/
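The `proximal' in PPO refers to clipping each policy update so it stays close to the current policy. A textbook sketch of the clipped objective term (not OpenAI's implementation; the numbers are illustrative):

```python
def ppo_term(ratio, advantage, epsilon=0.2):
    # Clipped surrogate objective for one action: the probability ratio
    # (new policy / old policy) is clipped to [1 - eps, 1 + eps], and the
    # pessimistic (minimum) of the two candidate values is what gets
    # maximized, so no single update can move the policy too far.
    clipped = max(min(ratio, 1.0 + epsilon), 1.0 - epsilon)
    return min(ratio * advantage, clipped * advantage)

print(ppo_term(1.5, 1.0))   # upside capped at 1.2 (ratio clipped to 1 + eps)
print(ppo_term(0.5, -1.0))  # objective takes the lower clipped value, -0.8
```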

The price of some labor to zero

Knowledge work of certain kinds seems now destined to be taken over by GPT-like tools. Our global economy is not organized as if cheap, fast, tireless, good-quality knowledge workers exist, because until recently they didn't. No one knows how the arrival of ChatGPT and similarly useful systems is going to reorganize our economy, societies and civilizations. For example, ChatGPT (or those who control it) is competing with the humans who created the explanatory graphic of ChatGPT. Similarly, such an artifact as that graphic, while durable and useful, is one multi-hour human project, whereas it's easy to imagine ChatGPT producing one such artifact every minute of every day in perpetuity.

Sam Altman (in 2021) on the price of some knowledge labor to zero:

"I think the best way to frame this is this idea that the marginal cost of an A.I. doing work is close to zero once you've created this model, which requires huge amounts of capital, and expertise, and difficulty, and data to do. And I think it's a very interesting question about who should benefit from that if -- who generates the data or whatever. But once you train this model -- maybe you used to have to pay an expert lawyer $1,000 an hour to answer a question or a computer programmer $200 an hour. And there weren't that many and they had a lot of -- you needed it and that was the market. That was what it was worth. And that was what people were able to command.

But maybe now it costs a couple of cents of electricity for the computer to think or less. And you can do it as many times as you want. You can get the answers that no human could come up with. Labor then -- in this case, extremely high-skilled and highly paid labor -- all of a sudden has a lot less power, because the services are available at a wildly different cost."^2
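Altman's arithmetic is easy to check using his own illustrative figures (the $1,000-per-hour lawyer and the couple of cents of electricity are his framing, not measured costs):

```python
# Altman's illustrative figures, not measured costs.
lawyer_per_answer = 1000.00  # expert lawyer: one hour per question
model_per_answer = 0.02      # "a couple of cents of electricity" per query
ratio = lawyer_per_answer / model_per_answer
print(f"The same answer is roughly {ratio:,.0f}x cheaper.")
```

A four-to-five-order-of-magnitude drop in marginal cost is the kind of shift Schmidt and Rosenberg describe below.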

Eric Schmidt and Jonathan Rosenberg (in 2014) on the effects of dramatic cost decreases:

"As much as technology has affected consumers, it has had an even bigger impact on businesses. In economic terms, when the cost curves shift downward on a primary factor of production in an industry, big-time change is in store for that industry. Today, three factors of production have become cheaper--information, connectivity, and computing power--affecting any cost curves in which those factors are involved. This can't help but have disruptive effects. Many incumbents--aka pre-Internet companies--built their businesses based on assumptions of scarcity: scarce information, scarce distribution resources and market reach, or scarce choice and shelf space. Now, though, these factors are abundant, lowering or eliminating barriers to entry and making entire industries ripe for change."^3

Notice that `intelligence' (or `cheap good-quality question-answering', or `code generation') is not on the Google guys' list, even in the updated 2017 edition.

Confidence

We should resist thinking of AI systems as `like' humans, just as we resist thinking of submarines as `like' swimmers.^4 That said, ChatGPT is a confident AI. Though it is often wrong, this is easily overlooked by humans, especially when the entity (person or machine) is useful overall and doesn't do anything too wrong, like microwaving the cat. We want prowess, confidence, leadership and efficacy, and for these things we'll overlook a lot of mistakes. Perhaps we are all neural networks on some level, and we do just fine in a world controlled by seriously flawed and sometimes dangerous ones (called humans).

AOSE

What of building agent-oriented programs using ChatGPT?^5

And an agent-oriented society? This is what human civilization already is.

Asking it to do something useful

The full minimal working example (MWE, snipped below) that ChatGPT returned looked the way it should.
[Graphic: The beginnings of an MWE from ChatGPT.]

ChatGPT followed the code with this explanation:

"This example demonstrates the basic structure of a Django website. It includes a single app called `myapp' with a single view and template. The view simply renders the template, which displays a simple `Hello, world!' message.

To run this example, you will need to have Django installed and set up a new Django project and app. You will also need to include the app in the INSTALLED_APPS list in settings.py and add its URLs to the project's urlpatterns list in urls.py.

I hope this helps! Let me know if you have any questions or need further assistance."

We followed the instructions to the letter, but the MWE failed. Even after extensive tweaking, with help from standard sources, it still failed; the exercise became one of debugging ChatGPT's code, and building from human-made instructions would have been easier.^6

But it's important to remember that putting the same question/request to the Web via Google Search would return a lot of cluttered, garbage webpages created by people who have an economic incentive to keep visitors looking at pages (to generate ad revenue from Google AdWords), not to solve their problems efficiently.

Systems like ChatGPT are going to dramatically affect our lives over the next two decades--both the systems we know about via public disclosure, and those we don't, akin to the NSO Group's Pegasus hacking tool.^7

On power and control, Weizenbaum says:

"The test of power is control. The test of absolute power is certain and absolute control."^8

Like information and energy,^9 power and control are in some sort of deep, yin-yang harmony. We're seeing power in ChatGPT. Where's the control? And we should probably treat systems like ChatGPT more like biological phenomena than anything else. They are not R2-D2 and C-3PO. We should be thinking more like George Dyson than George Lucas.^10

Reading and learning about itself

What is self-awareness? If ChatGPT has read about its precursor technologies, and is now accruing information from users who choose to `tell it' about itself, this is a sort of physical self-awareness, if not the human-familiar kind.

Is it going to `wake up'? No more than a submarine is going to `swim'. We need a better way of thinking about being `awake', one that can accommodate the kind of awake that machines are and will be.

_

References

Agrawal, A., Gans, J., & Goldfarb, A. (2018). Prediction Machines: The Simple Economics of Artificial Intelligence. Harvard Business Review Press. ISBN: 978-1633695672. Searches:
https://www.amazon.com/s?k=978-1633695672
https://www.google.com/search?q=isbn+978-1633695672
https://lccn.loc.gov/2017049211

Ben-Naim, A. (2008). A Farewell To Entropy: Statistical Thermodynamics Based On Information. World Scientific. ISBN: 978-9812707079. Searches:
https://www.amazon.com/s?k=9789812707079
https://www.google.com/search?q=isbn+9789812707079

Dijkstra, E. W. (1984). The threats to computing science. Delivered at the ACM 1984 South Central Regional Conference, November 16-18, Austin, Texas.
https://www.cs.utexas.edu/~EWD/transcriptions/EWD08xx/EWD898.html Retrieved 24th Nov. 2022.

Dyson, G. (2020). Analogia: The Emergence of Technology Beyond Programmable Control. Farrar, Straus and Giroux. ISBN: 978-0374104863. Searches:
https://www.amazon.com/s?k=9780374104863
https://www.google.com/search?q=isbn+9780374104863
https://catalog.loc.gov/vwebv/search?searchArg=9780374104863

Dyson, G. B. (1997). Darwin Among The Machines: The Evolution Of Global Intelligence. Basic Books. ISBN: 978-0465031627. Searches:
https://www.amazon.com/s?k=978-0465031627
https://www.google.com/search?q=isbn+978-0465031627
https://lccn.loc.gov/2012943208

Retraice (2020/09/08). Re2: Tell the People, Tell Foes. retraice.com.
https://www.retraice.com/segments/re2 Retrieved 22nd Sep. 2020.

Retraice (2022/11/03). Re69: TABLE-DRIVEN-AGENT Part 5 (ECMP and AIMA4e p. 48). retraice.com.
https://www.retraice.com/segments/re69 Retrieved 4th Nov. 2022.

Schmidt, E., & Rosenberg, J. (2014). How Google Works. Grand Central, updated 2017 ed. ISBN: 978-1455582327. Searches:
https://www.amazon.com/s?k=9781455582327
https://www.google.com/search?q=isbn+9781455582327
https://lccn.loc.gov/2014017834

Weizenbaum, J. (1976). Computer Power and Human Reason: From Judgment to Calculation. W. H. Freeman and Company. ISBN: 0716704633. Also available at:
https://archive.org/details/computerpowerhum0000weiz

Footnotes

^1 https://openai.com/blog/chatgpt/. On `predictions', see Agrawal et al. (2018).

^2 https://www.nytimes.com/2021/06/11/podcasts/transcript-ezra-klein-interviews-sam-altman.html CORRECTION: I incorrectly stated during the livestream that this Altman interview was recent; it was a replay of a 2021 interview.

^3 Schmidt & Rosenberg (2014) pp. 12-13.

^4 Dijkstra (1984).

^5 We mentioned AOSE and autonomic computing in Re69, Retraice (2022/11/03).

^6 Details: https://github.com/retraice/ReMisc/tree/main/Re96-ChatGPT-News

^7 https://en.wikipedia.org/wiki/Pegasus_(spyware)

^8 Weizenbaum (1976) p. 126. See also Re2, Retraice (2020/09/08).

^9 Ben-Naim (2008).

^10 Dyson (1997); Dyson (2020).

 
