
Arvind Narayanan is a professor of computer science at Princeton University and the director of the Center for Information Technology Policy. He was named to TIME's inaugural list of the 100 most influential people in AI. Narayanan led the Princeton Web Transparency and Accountability Project, which uncovered how companies collect and use our personal information. His work was also among the first to show how machine learning reflects cultural stereotypes. He received the Privacy Enhancing Technologies Award for showing how publicly available social media and web information can be cross-referenced to re-identify customers whose data had been "anonymized" by companies. Narayanan also prototyped and helped develop Do Not Track, implemented as an HTTP header field. He is a co-author of the book AI Snake Oil and of a newsletter of the same name, read by 50,000 researchers, policy makers, journalists, and AI enthusiasts.
Bitcoin and Cryptocurrency Technologies
by Arvind Narayanan
Rating: 4.3 ⭐
• 2 recommendations ❤️
Bitcoin and Cryptocurrency Technologies provides a comprehensive introduction to the revolutionary yet often misunderstood new technologies of digital currency. Whether you are a student, software developer, tech entrepreneur, or researcher in computer science, this authoritative and self-contained book tells you everything you need to know about the new global money for the Internet age.

How do Bitcoin and its block chain actually work? How secure are your bitcoins? How anonymous are their users? Can cryptocurrencies be regulated? These are some of the many questions this book answers. It begins by tracing the history and development of Bitcoin and cryptocurrencies, and then gives the conceptual and practical foundations you need to engineer secure software that interacts with the Bitcoin network as well as to integrate ideas from Bitcoin into your own projects. Topics include decentralization, mining, the politics of Bitcoin, altcoins and the cryptocurrency ecosystem, the future of Bitcoin, and more.

• An essential introduction to the new technologies of digital currency
• Covers the history and mechanics of Bitcoin and the block chain, security, decentralization, anonymity, politics and regulation, altcoins, and much more
• Features an accompanying website that includes instructional videos for each chapter, homework problems, programming assignments, and lecture slides
• Also suitable for use with the authors' Coursera online course
• Electronic solutions manual (available only to professors)
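To make the "hash-linked blocks plus proof-of-work mining" idea from the blurb concrete, here is a minimal Python sketch. It is a toy illustration only, not Bitcoin's actual block format, transaction rules, or consensus protocol, and every name and parameter in it (the JSON block layout, the difficulty of 4 hex zeros, the sample transactions) is invented for the example.

```python
# Toy sketch of a hash-linked chain with simplified proof-of-work "mining".
# Not Bitcoin's real data structures; all fields here are illustrative.
import hashlib
import json


def block_hash(block: dict) -> str:
    """Hash a block's contents deterministically (sorted-key JSON, then SHA-256)."""
    encoded = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(encoded).hexdigest()


def mine(prev_hash: str, transactions: list, difficulty: int = 4) -> dict:
    """Search for a nonce so the block's hash starts with `difficulty` zero hex digits."""
    nonce = 0
    while True:
        block = {"prev_hash": prev_hash, "transactions": transactions, "nonce": nonce}
        if block_hash(block).startswith("0" * difficulty):
            return block
        nonce += 1


# Build a two-block chain: each block commits to the hash of its predecessor.
genesis = mine(prev_hash="0" * 64, transactions=["coinbase -> alice: 50"])
block_1 = mine(prev_hash=block_hash(genesis), transactions=["alice -> bob: 10"])
print(block_hash(genesis))
print(block_hash(block_1))
```

Changing any transaction in the first block changes its hash, which in turn invalidates the proof of work of every later block; that chaining property is what the book's security discussion builds on.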
AI Snake Oil
by Arvind Narayanan
Rating: 3.9 ⭐
Confused about AI and worried about what it means for your future and the future of the world? You're not alone. AI is everywhere—and few things are surrounded by so much hype, misinformation, and misunderstanding. In AI Snake Oil, computer scientists Arvind Narayanan and Sayash Kapoor cut through the confusion to give you an essential understanding of how AI works, why it often doesn't, where it might be useful or harmful, and when you should suspect that companies are using AI hype to sell AI snake oil—products that don't work, and probably never will.

While acknowledging the potential of some AI, such as ChatGPT, AI Snake Oil uncovers rampant misleading claims about the capabilities of AI and describes the serious harms AI is already causing in how it's being built, marketed, and used in areas such as education, medicine, hiring, banking, insurance, and criminal justice. The book explains the crucial differences between types of AI, why organizations are falling for AI snake oil, why AI can't fix social media, why AI isn't an existential risk, and why we should be far more worried about what people will do with AI than about anything AI will do on its own. The book also warns of the dangers of a world where AI continues to be controlled by largely unaccountable big tech companies.

By revealing AI's limits and real risks, AI Snake Oil will help you make better decisions about whether and how to use AI at work and home.
by Arvind Narayanan
2009 doctoral dissertation, the University of Texas at Austin.

The Internet has enabled the collection, aggregation, and analysis of personal data on a massive scale. It has also enabled the sharing of collected data in various ways: wholesale outsourcing of data warehousing, partnering with advertisers for targeted advertising, data publishing for exploratory research, etc. This has led to complex privacy questions related to the leakage of sensitive user data and the mass harvesting of information by unscrupulous parties. These questions have information-theoretic, sociological, and legal aspects and are often poorly understood.

There are two fundamental paradigms for how data is released: in the interactive setting, the data collector holds the data while third parties interact with the data collector to compute some function on the database. In the non-interactive setting, the database is somehow "sanitized" and then published. In this thesis, we conduct a thorough theoretical and empirical investigation of privacy issues involved in non-interactive data release. Both settings have been well analyzed in the academic literature, but the simplicity of the non-interactive paradigm has resulted in its being used almost exclusively in actual data releases. We analyze several common applications, including electronic directories, collaborative filtering and recommender systems, and social networks.

Our investigation has two main foci. First, we present frameworks for privacy and anonymity in these different settings within which one might define exactly when a privacy breach has occurred. Second, we use these frameworks to experimentally analyze actual large datasets and quantify privacy issues. The picture that has emerged from this research is a bleak one for non-interactivity. While a surprising level of privacy control is possible in a limited number of applications, the general sense is that protecting privacy in the non-interactive setting is not as easy as intuitively assumed in the absence of rigorous privacy definitions. While some applications can be salvaged either by moving to an interactive setting or by other means, in others a rethinking of the trade-offs between utility and privacy that are currently taken for granted appears to be necessary.
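To illustrate the kind of linkage attack the thesis studies in the non-interactive setting, here is a minimal Python sketch of matching "anonymized" records against auxiliary information an adversary already holds. The records, names, and the naive overlap score are hypothetical inventions for this example, not the thesis's actual algorithms or any real dataset.

```python
# Toy sketch of linkage-based re-identification (hypothetical data throughout):
# match "anonymized" rating profiles to named individuals using auxiliary
# information an attacker gathered elsewhere, e.g. from public web profiles.

anonymized_records = {
    "user_731": {"MovieA": 5, "MovieB": 2, "MovieC": 4},
    "user_992": {"MovieA": 1, "MovieD": 5, "MovieE": 3},
}

auxiliary_knowledge = {
    "alice": {"MovieA": 5, "MovieC": 4},  # partial profile known to the attacker
    "bob": {"MovieD": 5, "MovieE": 3},
}


def overlap_score(known: dict, candidate: dict) -> int:
    """Count items on which the known partial profile and a candidate record agree."""
    return sum(1 for item, rating in known.items() if candidate.get(item) == rating)


for name, profile in auxiliary_knowledge.items():
    best_match = max(
        anonymized_records,
        key=lambda uid: overlap_score(profile, anonymized_records[uid]),
    )
    print(f"{name} most plausibly corresponds to {best_match}")
```

Real attacks of this kind on published recommender-system and social-network data use statistical similarity measures that tolerate noise and partial knowledge, but the underlying point is the one the abstract makes: sparse, high-dimensional personal data tends to be unique enough to link back to individuals.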