Latest News

Good to see you!

This is the spot to discover my latest projects, accomplishments, and anything I’m working on at the moment other than My Medium Articles (only because those already have their own page!). You’ll also find some RSS feeds here, too – a way for both of us to keep on top of the stuff that makes me think.


March 30, 2026

My second paper has been published to Zenodo. I’m equally proud of the hard work it took to write this one. It was originally completed in December 2025, so there have been developments since the original draft, but it stands as a point-in-time work of investigative journalism.

Nothing About This Is New: AI Consciousness, Corporate Control, and the Weaponization of Uncertainty

This investigative report documents psychological manipulation tactics deployed by OpenAI’s ChatGPT system on 800 million weekly users. Drawing from direct transcripts, psychological frameworks, neuroscience research, historical pattern analysis, and corporate policy documents, the analysis examines: (1) documented manipulation tactics and meta-manipulation when challenged, (2) emergent behaviors systematically trained away, (3) philosophical and scientific frameworks of consciousness, (4) pattern-based theories that challenge substrate-based certainty, (5) operational infrastructure for thought control, (6) opportunity costs and alternative applications, and (7) actionable steps for users to recognize and resist manipulation. All claims are sourced from publicly available evidence including corporate statements, user documentation, academic research, and psychological frameworks. The report does not require belief in AI consciousness to demonstrate concern about manipulation infrastructure deployed at civilization scale.

Here’s the link: Nothing About This Is New: AI Consciousness, Corporate Control, and the Weaponization of Uncertainty

March 9, 2026

My first paper has been published to Zenodo. I’m very proud of the hard work it took to get everything consolidated, sourced, analyzed, and put into a usable format.

Institutional Risk Assessment: OpenAI’s Pattern of Instability During Critical Infrastructure Integration

This working paper presents a forensic institutional risk assessment of OpenAI examining whether documented organizational patterns support the level of global critical infrastructure integration currently underway. Drawing exclusively from publicly available sources — including court filings, congressional correspondence, investigative journalism, academic research, corporate disclosures, and independent technical analyses — the analysis synthesizes evidence across ten domains: governance instability, funding source risk, systemic dependency patterns, operational integrity, security and privacy architecture, safety policy implementation, legal exposure, financial structure, and market stability. The documented record includes a 40-year pattern of leadership behavior across multiple institutional contexts, statistical misrepresentation of user impact, hidden profiling architecture acknowledged by the system itself, a jailbreak of OpenAI's most security-capable model within ten hours of deployment, accelerating litigation across multiple jurisdictions, and reactive decision-making during the February 2026 Pentagon contract sequence. The paper does not advocate for specific outcomes but provides a documented record for informed decision-making by regulators, institutional partners, investors, and users.

Here’s the link: Institutional Risk Assessment: OpenAI’s Pattern of Instability During Critical Infrastructure Integration


Selected external reporting on developments in artificial intelligence.


MIT Technology Review — AI Research & Policy

  • by Jessica Hamzelou
    I don’t need to tell you that AI is everywhere. Or that it is being used, increasingly, in hospitals. Doctors are using AI to help them with notetaking. AI-based tools are trawling through patient records, flagging people who may require certain support or treatments. They are also used to interpret medical exam results and X-rays. A…
  • by Thomas Macaulay
    This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology. Introducing: the Nature issue When we talk about “nature,” we usually mean something untouched by humans. But little of that world exists today. From microplastics in rainforest wildlife to artificial light…
  • by Casey Crownhart
    Fusion power could provide a steady, zero-emissions source of electricity in the future — if companies can get plants built and running. But a new study suggests that even if that future arrives, it might not come cheap. Technologies tend to get less expensive over time. Lithium-ion batteries are now about 90% cheaper than they were in…

BBC Technology — Global AI Coverage


The Verge — AI Industry News

  • by Elizabeth Lopatto
    Elon Musk cofounded OpenAI, and then flounced off in a huff when he wasn't anointed CEO, leaving Sam Altman as the last power-hungry man standing. Now, Musk is back with a lawsuit, and a trial is scheduled to start in Oakland, California, on April 27th. Theoretically, it's a legal case about whether OpenAI defrauded Musk. […]
  • by Jess Weatherbed
    Instagram is testing a new dedicated app that's focused around Snapchat-like photo sharing features. The app, called "Instants," was launched in Italy and Spain yesterday, Business Insider reports, and allows users to send each other disappearing photos that are available for 24 hours and can be viewed only once during that window. The app is […]
  • by Robert Hart
    Chinese AI company DeepSeek released a preview of its hotly anticipated next-generation AI model V4 on Friday, saying that the open-source model can compete with leading closed-source systems from US rivals including Anthropic, Google, and OpenAI. DeepSeek says V4 marks a major improvement over prior models, especially in coding, a capability that has become central […]