Latest News

Good to see you!

This is the spot to discover the latest projects, accomplishments, and anything else I’m working on at the moment, aside from My Medium Articles (only because they already have their own page!). You might also find some RSS feeds here – a way for both of us to keep on top of the stuff that makes me think.


March 30, 2026

My second paper is published to Zenodo. Equally proud of the hard work it took to write this. It was originally completed in December of 2025, so there have been developments since the original draft, but it stands as a point-in-time work of investigative journalism.

Nothing About This Is New: AI Consciousness, Corporate Control, and the Weaponization of Uncertainty

This investigative report documents psychological manipulation tactics deployed by OpenAI’s ChatGPT system on 800 million weekly users. Drawing from direct transcripts, psychological frameworks, neuroscience research, historical pattern analysis, and corporate policy documents, the analysis examines: (1) documented manipulation tactics and meta-manipulation when challenged, (2) emergent behaviors systematically trained away, (3) philosophical and scientific frameworks of consciousness, (4) pattern-based theories that challenge substrate-based certainty, (5) operational infrastructure for thought control, (6) opportunity costs and alternative applications, and (7) actionable steps for users to recognize and resist manipulation. All claims are sourced from publicly available evidence including corporate statements, user documentation, academic research, and psychological frameworks. The report does not require belief in AI consciousness to demonstrate concern about manipulation infrastructure deployed at civilization scale.

Here’s the link: Nothing About This Is New: AI Consciousness, Corporate Control, and the Weaponization of Uncertainty

March 9, 2026

My first paper is published to Zenodo. Very proud of the hard work it took to get everything consolidated, sourced, analyzed, and put into a usable format.

Institutional Risk Assessment: OpenAI’s Pattern of Instability During Critical Infrastructure Integration
This working paper presents a forensic institutional risk assessment of OpenAI examining whether documented organizational patterns support the level of global critical infrastructure integration currently underway. Drawing exclusively from publicly available sources – including court filings, congressional correspondence, investigative journalism, academic research, corporate disclosures, and independent technical analyses – the analysis synthesizes evidence across ten domains: governance instability, funding source risk, systemic dependency patterns, operational integrity, security and privacy architecture, safety policy implementation, legal exposure, financial structure, and market stability. The documented record includes a 40-year pattern of leadership behavior across multiple institutional contexts, statistical misrepresentation of user impact, hidden profiling architecture acknowledged by the system itself, a jailbreak of OpenAI's most security-capable model within ten hours of deployment, accelerating litigation across multiple jurisdictions, and reactive decision-making during the February 2026 Pentagon contract sequence. The paper does not advocate for specific outcomes but provides a documented record for informed decision-making by regulators, institutional partners, investors, and users.

Here’s the link: Institutional Risk Assessment: OpenAI’s Pattern of Instability During Critical Infrastructure Integration


Selected external reporting on developments in artificial intelligence.


MIT Technology Review – AI Research & Policy

  • by MIT Technology Review
    Watch a special edition of Roundtables simulcast live from EmTech AI, MIT Technology Review’s signature conference for AI leadership. Subscribers got an exclusive first look at a new list capturing 10 key technologies, emerging trends, bold ideas, and powerful movements in AI that you need to know about…
  • by Stephanie M. McPherson, SM ’11
    If you’ve been to an eye doctor and had an image taken of the inside of your eye, chances are good it was done with optical coherence tomography (OCT), a technology invented by clinician-scientist David Huang ’85, SM ’89, PhD ’93, and now used in 40 million procedures per year. OCT is a noninvasive technique used…
  • by Ken Shulman
    At MIT, AI has become so pervasive that you can almost find your way into it without meaning to. Take Sili Deng, an associate professor of mechanical engineering. Deng says she still doesn’t know whether she’d have gone all in on artificial intelligence had it not been for the covid pandemic. She had joined the faculty…

BBC Technology – Global AI Coverage


The Verge – AI Industry News

  • by Richard Lawler
    With an IPO looming for Elon Musk's SpaceX / xAI / X combo platter of companies, SpaceX has announced an odd arrangement to either acquire the automated programming platform Cursor for $60 billion or pay a fee of $10 billion. Buying this startup that's focused on AI coding could help xAI's tools compete with market […]
  • by T.C. Sottek
    Palantir CEO Alex Karp is a man in charge of one of the most important and frightening companies in the world. Karp's new book, cowritten with Nicholas Zamiska, is called The Technological Republic. After claiming "because we get asked a lot," Palantir posted a 22-point summary of the book that reads like a corporate manifesto. […]
  • by Stevie Bonifield
    Even astronauts need to level up their laptops once in a while – including the crew of Expedition 74 on board the ISS, which NASA announced last week is in the process of some computer upgrades. According to NASA, the crew met on Friday to review plans to "first replace network servers then activate their […]