A Human's Guide to Machine Intelligence
Who Should Read It:
Anyone with a vested interest in the future of technology and the world.
Why Should We Read It:
Artificial Intelligence already impacts much of our day-to-day lives, and it is poised to become even more ubiquitous in shaping society.
What Will We Learn:
We will learn the history of the development of this field, its current state, and its societal implications.
Book Reflections
"...an intuitive and non-technical introduction to Machine Learning and Artificial Intelligence"
I am not a "tech guy". The primary use case for my smart phone seems to be no different than my Nokia block phone from high school: texting people. Thanks to Cal Newport, my social media consumption is also minimal. Therefore, it should be no surprise that I have zero knowledge base in Machine Learning. All I can imagine is a malicious mainframe in someone's basement that is poised to subjugate humanity with its army of Terminators. There are plenty of serious nonfiction books, and even more entertaining movies, warning about digital doomsday at the hands of Artificial Intelligence. I was not convinced that those books are the place for a complete novice to start learning about this topic. Instead, I went with a book with a milder title that explains the technology in a more intuitive, non-technical manner: A Human's Guide to Machine Intelligence, by Professor Kartik Hosanagar.
Artificial Intelligence: constantly refining its "if-then" algorithms as it receives new data
The key to understanding this technology is to recognize that humans have lived by a series of "if-then" rules since Hammurabi's Code (the quintessential "if-then" algorithm). All the modern world has done is enable computers to calculate those algorithms for us automatically. The Quicken tax filing program is a great example: if we tell the software we make this amount of money, then it tells us we get this amount of tax refund. Take this concept to the next level, and we get the juggernauts of Amazon.com and Netflix: if we click on this product or movie, then we might also like that product or movie. How does the program know that we might like this product or that movie? These software programs are also instructed to reanalyze their databases whenever there is a new data point (e.g., what did we click on before buying that product?). Therefore, the company's understanding of its customers grows with each consumer click, resulting in higher engagement and profit.
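To make the "if-then" picture concrete, here is a minimal sketch of my own (not from the book) of a recommender that recounts which items get clicked together each time a new session arrives; the item names and click sessions are invented purely for illustration.

```python
from collections import defaultdict

# Toy "if-then" recommender: if a user clicked item A, then suggest the items
# most often clicked alongside A by other shoppers. All data here is made up.
sessions = [
    ["toaster", "kettle"],
    ["toaster", "coffee grinder", "kettle"],
    ["sci-fi movie", "space documentary"],
]

co_clicks = defaultdict(lambda: defaultdict(int))
for session in sessions:
    for a in session:
        for b in session:
            if a != b:
                co_clicks[a][b] += 1  # count how often a and b appear together

def recommend(item, k=2):
    """If the user clicked `item`, then return the k most co-clicked items."""
    ranked = sorted(co_clicks[item].items(), key=lambda pair: -pair[1])
    return [name for name, _ in ranked[:k]]

print(recommend("toaster"))  # -> ['kettle', 'coffee grinder']
```

Every new click appends another session and the counts are rebuilt, which is the "reanalyze on each new data point" loop described above, just at toy scale.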
AlphaGo: the moment humanity's monopoly on beauty and creativity ended
The above examples highlight the first stage of artificial intelligence: supervised learning, where humans train a software program with prespecified data and instruct the program to build upon that data. Perhaps its most spectacular application came with Google's AlphaGo, a program that was fed more than 30 million past moves by expert players of the game Go. AlphaGo was then instructed to play against itself millions of times, "[more than] the number of games played by the entire human race since the creation of Go," as its research team puts it. In March 2016, AlphaGo played Lee Sedol, the reigning Go world champion from Korea, and made its now-famous "move 37." No one had ever seen such a brilliant move in the history of Go. Sedol never recovered and lost the match. AlphaGo's own software engineers do not know how the program learned to produce "move 37." What we do know is that Go players worldwide have proclaimed "move 37" to be beautiful, creative…and utterly human.
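As a rough illustration of what "training on prespecified data" means, here is a toy supervised learner of my own, not AlphaGo's actual deep-learning pipeline: it is handed labeled examples up front and labels new cases by generalizing from them.

```python
import math

# A 1-nearest-neighbour classifier: the simplest possible supervised learner.
# The labeled examples below are invented; real systems like AlphaGo learn
# from millions of examples with far richer models.
training_data = [
    # (hours_practiced_per_week, puzzles_solved) -> skill label
    ((1.0, 5.0), "beginner"),
    ((2.0, 9.0), "beginner"),
    ((8.0, 40.0), "expert"),
    ((9.5, 55.0), "expert"),
]

def predict(features):
    """Label a new example with the label of its closest training example."""
    _, label = min(
        training_data, key=lambda example: math.dist(example[0], features)
    )
    return label

print(predict((7.0, 35.0)))  # -> 'expert'
```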
Humanity's Bill of Rights for the Technological Age
Our ability to build such advanced artificial intelligence systems, and our paradoxical inability to explain how they work, lead to many pressing questions for society. Where do we draw the line for artificial intelligence if it can perform a task with greater efficiency and safety (fly a plane, drive a car, manage retirement funds) than a human? Who gets the final decision-making power: a human, or a machine whose computing power far exceeds that of a human brain? What should happen next if an artificial intelligence algorithm leads to an unexpected drawback, a perverse result, or worse (the book has multiple examples)? Professor Hosanagar emphasizes that for society to benefit the most from machine learning while minimizing its excesses, a new Bill of Rights must be drafted for those who use algorithms and those who are impacted by decisions made by algorithms. Namely, such individuals should have:
A right to a description of the data used to train the algorithms and details as to how that data was collected.
A right to a simple explanation regarding the procedures used by the algorithms.
Some level of control over the way the algorithms work. There should always be a feedback loop between the user and the algorithm.
The responsibility to be aware of the unanticipated consequences of automated decision making.
AlphaGo's boss is AlphaGo Zero. Who is AlphaGo Zero's boss?
Professor Hosanagar's A Human's Guide to Machine Intelligence is a timely read, for we are already well into the next era of machine learning. Researchers have now developed artificial intelligence that explores and draws conclusions with no data sets provided beforehand (called reinforcement learning). For example, whereas the program AlphaGo was trained by studying games from expert human players, AlphaGo Zero was simply taught the rules of Go and told to play millions of games against itself. AlphaGo Zero beat the original AlphaGo with 100 straight victories. Clearly, we are on the cusp of a new age in human history, one where our world will not be solely shaped by human decisions. I highly recommend this book, for it offers a grounded perspective of this future, neither overhyping its brightest promises nor fearmongering about its darkest possibilities.
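For readers who want a feel for what "taught only the rules and told to play itself" can look like, here is a tiny self-play sketch of my own devising; the take-away game, reward scheme, and constants are illustrative assumptions, not AlphaGo Zero's actual method (which pairs deep networks with tree search).

```python
import random
from collections import defaultdict

# Reinforcement learning in miniature: no expert games are provided, only the
# rules of a simple take-away game and a win/lose signal at the end. Two copies
# of the same agent play each other and update a shared value table with a
# Monte Carlo-style rule. Everything here is an illustrative toy.
Q = defaultdict(float)      # Q[(stones_left, take)] = learned value of that move
EPSILON, ALPHA = 0.1, 0.5   # exploration rate and learning rate

def choose(stones):
    moves = [m for m in (1, 2, 3) if m <= stones]
    if random.random() < EPSILON:
        return random.choice(moves)                   # explore
    return max(moves, key=lambda m: Q[(stones, m)])   # exploit what it knows

def self_play_episode(start=21):
    stones, player, history = start, 0, []
    while stones > 0:
        move = choose(stones)
        history.append((player, stones, move))
        stones -= move
        player = 1 - player
    winner = 1 - player  # whoever took the last stone wins
    for who, state, move in history:
        reward = 1.0 if who == winner else -1.0
        Q[(state, move)] += ALPHA * (reward - Q[(state, move)])

for _ in range(20000):
    self_play_episode()

# After self-play the agent tends to discover the classic strategy of leaving
# a multiple of four stones, e.g. taking 1 when 21 stones remain.
print(max((1, 2, 3), key=lambda m: Q[(21, m)]))
```

The point of the toy is the shape of the loop, not the scale: the agent's only teacher is the outcome of games it generated itself.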