JAN 05, 2023

(The below text version of the notes is for search purposes and convenience. See the PDF version for proper formatting such as bold, italics, etc., and graphics where applicable. Copyright: 2023 Retraice, Inc.)


Re106: Elitism, Culling, Coercion, Adversaries, Strategy
(Day 4, AIMA4e Chpt. 4)

retraice.com

Technical terms from loaded words.
Evolutionary algorithms and their techniques; sensorless agent problems and coercing environments; adversaries in online search; conditional plans as strategy in complex environments; Vallee, `Major Murphy', science, the price of information, counterespionage and AI.

Air date: Wednesday, 4th Jan. 2023, 10:00 PM Eastern/US.

Elitism and culling in evolutionary algorithms

"There are endless forms of evolutionary algorithms, varying in the following ways:" ... "The makeup of the next generation. This can be just the newly formed offspring, or it can include a few top-scoring parents from the previous generation (a practice called elitism, which guarantees that overall fitness will never decrease over time). The practice of culling, in which all individuals below a given threshold are discarded, can lead to a speedup (Baum et al., 1995)."^1

Also mentioned during the livestream: Dawkins (2016)

Coercion in sensorless (conformant) problems

"Consider a sensorless version of the (deterministic) vacuum world. Assume that the agent knows the geography of its world, but not its own location or the distribution of dirt. In that case, its initial belief state is {1,2,3,4,5,6,7,8} (see Figure 4.9). Now, if the agent moves Right it will be in one of the states {2,4,6,8}--the agent has gained information without perceiving anything! After [Right,Suck] the agent will always end up in one of the states {4,8}. Finally, after [Right,Suck,Left,Suck] the agent is guaranteed to reach the goal state 7, no matter what the start state. We say that the agent can coerce the world into state 7."^2

Also mentioned: http://aima.cs.berkeley.edu/figures.pdf

Adversaries and dead ends in online search

"Online explorers are vulnerable to dead ends: states from which no goal state is reachable. If the agent doesn't know what each action does, it might execute the `jump into bottomless pit' action, and thus never reach the goal. In general, no algorithm can avoid dead ends in all state spaces. Consider the two dead-end state spaces in Figure 4.20(a). An online search algorithm that has visited states S and A cannot tell if it is in the top state or the bottom one; the two look identical based on what the agent has seen. Therefore, there is no way it could know how to choose the correct action in both state spaces. This is an example of an adversary argument--we can imagine an adversary constructing the state space while the agent explores it and putting the goals and dead ends wherever it chooses, as in Figure 4.20(b)."^3

Also mentioned: http://aima.cs.berkeley.edu/figures.pdf

Strategy in partially observable and nondeterministic environments

"In partially observable and nondeterministic environments, the solution to a problem is no longer a sequence, but rather a conditional plan (sometimes called a contingency plan or a strategy) that specifies what to do depending on what percepts agent [sic] receives while executing the plan."^4

Also mentioned: Freedman (2013)

AI can handle what science can't(?)

Vallee's `Major Murphy' makes the point that science can't handle investigating adversarial intelligent agents (e.g. aliens); only counterespionage (spies) can. He also argues that science has no concept of the `price' of information--Hitler had 95% of the information about the D-Day invasion, but the missing 5% was more valuable than the 95%.^5

Maybe strategic intelligence (espionage, counterespionage and covert action^6) can handle adversaries where science can't; but so, perhaps, can artificial intelligence.

_

References

Dawkins, R. (2016). The Selfish Gene. Oxford, 40th anniv. ed. ISBN: 978-0198788607. Searches:
https://www.amazon.com/s?k=9780198788607
https://www.google.com/search?q=isbn+9780198788607
https://lccn.loc.gov/2016933210

Freedman, L. (2013). Strategy: A History. Oxford University Press. ISBN: 978-0190229238. Searches:
https://www.amazon.com/s?k=9780190229238
https://www.google.com/search?q=isbn+9780190229238
https://lccn.loc.gov/2013011944

Retraice (2020/09/07). Re1: Three Kinds of Intelligence. retraice.com.
https://www.retraice.com/segments/re1 Retrieved 22nd Sep. 2020.

Retraice (2022/10/31). Re36: Notes on Conspiracy. retraice.com.
https://www.retraice.com/segments/re36 Retrieved 4th Nov. 2022.

Retraice (2022/11/17). Re53: Big Questions About Strategic Intelligence. retraice.com.
https://www.retraice.com/segments/re53 Retrieved 18th Nov. 2022.

Retraice (2022/12/12). Re79: Recap of Strategic Intelligence (Re1-Re5). retraice.com.
https://www.retraice.com/segments/re79 Retrieved 13th Dec. 2022.

Russell, S., & Norvig, P. (2020). Artificial Intelligence: A Modern Approach. Pearson, 4th ed. ISBN: 978-0134610993. Searches:
https://www.amazon.com/s?k=978-0134610993
https://www.google.com/search?q=isbn+978-0134610993
https://lccn.loc.gov/2019047498

Vallee, J. (1979). Messengers of Deception: UFO Contacts and Cults. And/Or Press. ISBN: 0915904381. Different edition and searches:
https://archive.org/details/MessengersOfDeceptionUFOContactsAndCultsJacquesValle1979/mode/2up
https://www.amazon.com/s?k=0915904381
https://www.google.com/search?q=isbn+0915904381
https://catalog.loc.gov/vwebv/search?searchArg=0915904381

Footnotes

^1 Russell & Norvig (2020) p. 116.

^2 Russell & Norvig (2020) p. 126.

^3 Russell & Norvig (2020) pp. 135-136.

^4 Russell & Norvig (2020) p. 122.

^5 Vallee (1979) pp. 66 ff. See also, e.g.: Retraice (2020/09/07); Retraice (2022/10/31); Retraice (2022/11/17).

^6 Retraice (2022/12/12).

 
