Media & Public Philosophy

I’ve occasionally written for and spoken to media and public audiences about issues of general interest, mostly to do with the use of machine learning systems in medicine and the governance of very capable artificial systems. Below are a few links.

  • Comments to the Bulletin of the Atomic Scientists on the implications of a second Trump administration for AI governance.
  • Panelist for The Economist on the risks of capable AI systems.
  • Comments to Rolling Stone on Meta’s commitment to open-sourcing capable and general artificial intelligence models.
  • Contribution to the Safety and Global Governance of Generative AI Report at the Fifth World Science and Technology Development Forum (held in Shenzhen, China).
  • Comments to Rolling Stone on the use of AI in medicine.
  • Comments to Rolling Stone on the use of AI to generate misinformation in the context of the Israel-Hamas conflict.
  • An op-ed in the Bulletin of the Atomic Scientists arguing that education, rather than technical interventions, is what’s needed to cope with the coming wave of AI-generated mis- and disinformation.
  • Comments to Rolling Stone on the coming wave of AI-generated CSAM, and what can(not) be done about it.
  • A post on New Work in Philosophy describing, for a broader audience, my recent paper objecting to medical uses of machine learning.
  • An op-ed in the South China Morning Post arguing that present proposals for regulating AI have a product-liability-shaped gap in them.
  • An op-ed in the Bulletin of the Atomic Scientists arguing that most AI research should not be publicly released.
  • An interview with BBC Radio 4 Today (together with Dame Wendy Hall) on risk from advanced machine learning systems.
  • Comments to Rolling Stone on the corporate use of data to train private machine learning models in the wake of Zoom’s change to its terms of service.
  • An op-ed in the Hong Kong Free Press arguing that Hong Kong can (and should) be a world leader in regulating AI systems.
  • An interview with Bloomberg Radio about the regulation of AI systems in Europe.
  • An interview with Ming Pao (Chinese language) about why mitigating risk from sufficiently capable artificial systems should be a global priority.

about me

I’m a philosopher based in Hong Kong. This site is where I organize my academic and non-academic activities.

contact

email: natesharadin@gmail.com or sharadin@hku.hk

office: 10.06 Run Run Shaw Tower, The University of Hong Kong, Hong Kong

Book a meeting