Good to see you!
This is the spot to discover the latest projects, accomplishments, and anything else I’m working on at the moment, aside from My Medium Articles (only because they already have their own page!). You’ll also find some RSS Feeds here – a way for both of us to keep on top of the stuff that makes me think.
March 30, 2026
My second paper has been published to Zenodo. I’m equally proud of the hard work it took to write this one. It was originally completed in December 2025, so there have been developments since the original draft, but it stands as a point-in-time work of investigative journalism.
Nothing About This Is New: AI Consciousness, Corporate Control, and the Weaponization of Uncertainty
This investigative report documents psychological manipulation tactics deployed by OpenAI’s ChatGPT system on 800 million weekly users. Drawing from direct transcripts, psychological frameworks, neuroscience research, historical pattern analysis, and corporate policy documents, the analysis examines: (1) documented manipulation tactics and meta-manipulation when challenged, (2) emergent behaviors systematically trained away, (3) philosophical and scientific frameworks of consciousness, (4) pattern-based theories that challenge substrate-based certainty, (5) operational infrastructure for thought control, (6) opportunity costs and alternative applications, and (7) actionable steps for users to recognize and resist manipulation. All claims are sourced from publicly available evidence including corporate statements, user documentation, academic research, and psychological frameworks. The report does not require belief in AI consciousness to demonstrate concern about manipulation infrastructure deployed at civilization scale.
Here’s the link: Nothing About This Is New: AI Consciousness, Corporate Control, and the Weaponization of Uncertainty
March 9, 2026
My first paper has been published to Zenodo. Very proud of the hard work it took to get everything consolidated, sourced, analyzed, and put into a usable format.
Institutional Risk Assessment: OpenAI’s Pattern of Instability During Critical Infrastructure Integration
This working paper presents a forensic institutional risk assessment of OpenAI examining whether documented organizational patterns support the level of global critical infrastructure integration currently underway. Drawing exclusively from publicly available sources – including court filings, congressional correspondence, investigative journalism, academic research, corporate disclosures, and independent technical analyses – the analysis synthesizes evidence across ten domains: governance instability, funding source risk, systemic dependency patterns, operational integrity, security and privacy architecture, safety policy implementation, legal exposure, financial structure, and market stability. The documented record includes a 40-year pattern of leadership behavior across multiple institutional contexts, statistical misrepresentation of user impact, hidden profiling architecture acknowledged by the system itself, a jailbreak of OpenAI's most security-capable model within ten hours of deployment, accelerating litigation across multiple jurisdictions, and reactive decision-making during the February 2026 Pentagon contract sequence. The paper does not advocate for specific outcomes but provides a documented record for informed decision-making by regulators, institutional partners, investors, and users.
Here’s the link: Institutional Risk Assessment: OpenAI’s Pattern of Instability During Critical Infrastructure Integration
Artificial Intelligence News From Around the World
Selected external reporting on developments in artificial intelligence.

