Re113-NOTES.pdf
JAN 12, 2023

(The below text version of the notes is for search purposes and convenience. See the PDF version for proper formatting such as bold, italics, etc., and graphics where applicable. Copyright: 2023 Retraice, Inc.)


Re113: Uncertainty, Fear and Consent
(Technological Danger, Part 1)

retraice.com

Beliefs, and the feelings they cause, determine what chances we take; but possibilities don't care about our beliefs.
A prediction about safety, security and freedom; decisions about two problems of life and the problem of death; uncertainty, history, genes and survival machines; technology to control the environment of technology; beliefs and feelings; taking chances; prerequisites for action; imagining possibilities; beliefs that do or don't lead to consent; policing, governance and motivations.

Air date: Wednesday, 11th Jan. 2023, 10:00 PM Eastern/US.

Prediction: freedom is going to decrease

The freedom-security-safety tradeoff will continue to shift toward safety and security.

Over the next ten years, 2023-2032, you'll continue to be asked, told, and nudged into giving up freedom in exchange for safety (which is about unintentional danger), in addition to security (which is about intentional danger).^1

(Side note: We have no particular leaning, one way or another, about whether this will be a good or bad thing overall. Frame it one way, and we yearn for freedom; frame it another way, and we crave protection from doom.)

For more on this, consider:
* Wiener (1954);
* Russell (1952);
* Dyson (1997), Dyson (2020);
* Butler (1863);
* Kurzweil (1999);
* Kaczynski & Skrbina (2010);
* Bostrom (2011), Bostrom (2019).

Decisions: two problems of life and the problem of death

First introduced in Re27 (Retraice (2022/10/23)) and integrated in Re31 (Retraice (2022/10/27)).

Two problems of life:

1. To change the world?

2. To change oneself (that part of the world)?

Problem of death:

1. Dead things rarely become alive, whereas alive things regularly become dead. What to do?

Uncertainty

We just don't know much about the future, but we talk and write within the confines of our memories and instincts.

We know the Earth-5k well via written history, and our bodies `know', via genes, the Earth-2bya, about the time that replication and biology started. But the parts of our bodies that know it (genes, mechanisms shared with other animals) are what would reliably survive, not us. Most of our genes can survive in other survival machines, because we share so much DNA with other creatures.^2

But there is hope in controlling the environment to protect ourselves (vital technology), though we also like to enjoy ourselves (other technology). There is also irony in it, to the extent that technology itself is the force from which we may need to be protected.

Beliefs and feelings

* a cure, hope;
* no cure, fear;
* a spaceship, excitement;
* home is the same, longing;
* home is not the same, sadness;
* she loves me, happiness;
* she hates me, misery;
* she picks her nose, disgust.

Chances

Even getting out of bed--or not--is somewhat risky: undoubtedly some human somewhere has died by getting out of bed and falling; but people in hospitals have to get out of bed to avoid skin and motor problems.

We do or don't get out of bed based on instincts and beliefs.

Side note: von Mises' three prerequisites for human action:^3

1. Uneasiness (with the present);

2. An image (of a desirable future);

3. The belief (expectation) that action has the power to yield the image. (Side note: technology in the form of AI is becoming more necessary to achieve desirable futures, because enough humans have been picking low-hanging fruit for enough time that most of the fruit is now high-hanging, where we can't reach without AI.)

Possibilities

* radically good future because of technology
(cure for everything);
* radically bad future because of technology
(synthetic plague);
* radically good future because of humans
(doctors invent cure);
* radically bad future because of humans
(doctors invent synthetic plague).

The important point is to remember the Venn diagram: there is a large space of possibilities, within which a small dot is what any individual human can imagine.

If you believe x, do you consent to y?

* no one has privacy, privacy invasion;
* entity e is not malicious, open interaction with entity e;
* VWH (the vulnerable world hypothesis), global police state.

"VWH: If technological development continues then a set of capabilities will at some point be attained that make the devastation of civilization extremely likely, unless civilization sufficiently exits the semi-anarchic default condition."^4

The "semi-anarchic default condition":

1. limited capacity for preventive policing;

2. limited capacity for global governance;

3. diverse motivations: "There is a wide and recognizably human distribution of motives represented by a large population of actors (at both the individual and state level) - in particular, there are many actors motivated, to a substantial degree, by perceived self-interest (e.g. money, power, status, comfort and convenience) and there are some actors (`the apocalyptic residual') who would act in ways that destroy civilization even at high cost to themselves."^5


References

Bostrom, N. (2011). Information Hazards: A Typology of Potential Harms from Knowledge. Review of Contemporary Philosophy, 10, 44-79. Citations are from Bostrom's website copy:
https://www.nickbostrom.com/information-hazards.pdf Retrieved 9th Sep. 2020.

Bostrom, N. (2019). The Vulnerable World Hypothesis. Global Policy, 10(4), 455-476. Nov. 2019. Citations are from Bostrom's website copy:
https://nickbostrom.com/papers/vulnerable.pdf Retrieved 24th Mar. 2020.

Brockman, J. (Ed.) (2019). Possible Minds: Twenty-Five Ways of Looking at AI. Penguin. ISBN: 978-0525557999. Searches:
https://www.amazon.com/s?k=978-0525557999
https://www.google.com/search?q=isbn+978-0525557999
https://lccn.loc.gov/2018032888

Butler, S. (1863). Darwin among the machines. The Press (Canterbury, New Zealand). Reprinted in Butler et al. (1923).

Butler, S., Jones, H., & Bartholomew, A. (1923). The Shrewsbury Edition of the Works of Samuel Butler Vol. 1. J. Cape. No ISBN.
https://books.google.com/books?id=B-LQAAAAMAAJ Retrieved 27th Oct. 2020.

Dawkins, R. (2016). The Selfish Gene. Oxford, 40th anniv. ed. ISBN: 978-0198788607. Searches:
https://www.amazon.com/s?k=9780198788607
https://www.google.com/search?q=isbn+9780198788607
https://lccn.loc.gov/2016933210

Dyson, G. (2020). Analogia: The Emergence of Technology Beyond Programmable Control. Farrar, Straus and Giroux. ISBN: 978-0374104863. Searches:
https://www.amazon.com/s?k=9780374104863
https://www.google.com/search?q=isbn+9780374104863
https://catalog.loc.gov/vwebv/search?searchArg=9780374104863

Dyson, G. B. (1997). Darwin Among The Machines: The Evolution Of Global Intelligence. Basic Books. ISBN: 978-0465031627. Searches:
https://www.amazon.com/s?k=978-0465031627
https://www.google.com/search?q=isbn+978-0465031627
https://lccn.loc.gov/2012943208

Kaczynski, T. J., & Skrbina, D. (2010). Technological Slavery: The Collected Writings of Theodore J. Kaczynski. Feral House. No ISBN.
https://archive.org/details/TechnologicalSlaveryTheCollectedWritingsOfTheodoreJ.KaczynskiA.k.a.TheUnabomber/page/n91/mode/2up Retrieved 11 Jan. 2023.

Koch, C. G. (2007). The Science of Success. Wiley. ISBN: 978-0470139882. Searches:
https://www.amazon.com/s?k=9780470139882
https://www.google.com/search?q=isbn+9780470139882
https://lccn.loc.gov/2007295977

Kurzweil, R. (1999). The Age of Spiritual Machines: When Computers Exceed Human Intelligence. Penguin Books. ISBN: 0140282025. Searches:
https://www.amazon.com/s?k=0140282025
https://www.google.com/search?q=isbn+0140282025
https://lccn.loc.gov/98038804

Retraice (2022/10/23). Re27: Now That's a World Model - WM4. retraice.com.
https://www.retraice.com/segments/re27 Retrieved 24th Oct. 2022.

Retraice (2022/10/27). Re31: What's Happening That Matters - WM5. retraice.com.
https://www.retraice.com/segments/re31 Retrieved 28th Oct. 2022.

Retraice (2022/11/27). Re63: Seventeen Reasons to Learn AI. retraice.com.
https://www.retraice.com/segments/re63 Retrieved Nov. 2022.

Russell, B. (1952). The Impact Of Science On Society. George Allen and Unwin Ltd. No ISBN.
https://archive.org/details/impactofscienceo0000unse_t0h6 Retrieved 15th Nov. 2022. Searches:
https://www.amazon.com/s?k=The+Impact+Of+Science+On+Society+Bertrand+Russell
https://www.google.com/search?q=The+Impact+Of+Science+On+Society+Bertrand+Russell
https://lccn.loc.gov/52014878

Schneier, B. (2003). Beyond Fear: Thinking Sensibly About Security in an Uncertain World. Copernicus Books. ISBN: 0387026207. Searches:
https://www.amazon.com/s?k=0387026207
https://www.google.com/search?q=isbn+0387026207
https://lccn.loc.gov/2003051488
Similar edition available at:
https://archive.org/details/beyondfearthinki00schn_0

von Mises, L. (1949). Human Action: A Treatise on Economics. Ludwig von Mises Institute, 2010 reprint ed. ISBN: 978-1610161459. Searches:
https://www.amazon.com/s?k=9781610161459
https://www.google.com/search?q=isbn+9781610161459
https://lccn.loc.gov/50002445

Wiener, N. (1954). The Human Use Of Human Beings: Cybernetics and Society. Da Capo, 2nd ed. ISBN: 978-0306803208. This 1954 ed. is missing the `The Voices of Rigidity' chapter of the original 1950 ed. See 1st ed.:
https://archive.org/details/humanuseofhumanb00wien/page/n11/mode/2up. See also Brockman (2019) p. xviii. Searches for the 2nd ed.:
https://www.amazon.com/s?k=9780306803208
https://www.google.com/search?q=isbn+9780306803208
https://lccn.loc.gov/87037102

Footnotes

^1 Schneier (2003) pp. 12, 52.

^2 On creatures as gene (replicator) `survival machines', see Dawkins (2016) pp. 24-25, 30.

^3 von Mises (1949) pp. 13-14. See also Koch (2007) p. 144. See also Retraice (2022/11/27).

^4 Bostrom (2019) p. 457.

^5 Bostrom (2019) pp. 457-458.
