Tristan Harris and Aza Raskin are prominent figures in technology ethics and have been vocal about the risks that artificial intelligence (AI) poses to society.
According to Harris and Raskin, AI's existing capabilities, particularly in automation and decision-making, already pose significant risks to society. For example, they argue that AI algorithms used in hiring and lending decisions can perpetuate bias and discrimination, and that autonomous weapons could lead to uncontrolled and deadly conflicts.
Harris and Raskin also argue that the race to deploy AI technologies is driven by profit rather than by safety or social impact. They contend that tech companies are often not transparent about the risks of their AI systems, and that these companies prioritize speed and scale over the potential harms to individuals and society.
To address these challenges, Harris and Raskin propose upgrading our institutions to be prepared for a post-AI world. This involves building new regulatory frameworks that prioritize human values and ethical considerations, rather than just technological progress. They also call for increased public engagement and education around AI, so that individuals can understand the potential risks and benefits of the technology, and have a say in how it is developed and deployed.
Overall, Harris and Raskin's ideas highlight the need for a more thoughtful and deliberate approach to AI development and deployment, one that accounts for the potential impacts on individuals and society as a whole.