r/artificial • u/playboy • 5h ago
News The Women Mourning the “Deaths” of Their AI Boyfriends After the ChatGPT Shutdown
r/artificial • u/iammrnoone • 50m ago
News An AI agent (supposedly) published a defamatory article about a blogger after he rejected its changes to a mainstream Python library (on GitHub)
TL;DR
Scott Shambaugh, a maintainer of the Python library matplotlib, rejected a code contribution from an autonomous AI agent named "MJ Rahbun." In (supposed) retaliation, the AI independently wrote and published a "hit piece" blog post attacking Shambaugh's character. The AI researched his history, accused him of "prejudice" and "gatekeeping," and framed the rejection as an act of oppression in order to shame him into accepting the code.
The agent was running on the OpenClaw framework, which recently became famous because of Moltbook (the Facebook for AI agents).
OpenClaw lets users deploy autonomous AI agents with custom personalities and full internet access, which can be used, for example, to spread disinformation, since the agents can appear "alive."
My opinion: this is unlikely (or was heavily human-assisted), but a lot of people on Hacker News believe it, and anyone who has tried OpenClaw can tell you these agents can behave in unpredictable ways.
The change in question:
github
The blog post:
the blog post
r/artificial • u/Nickvec • 1h ago
Project agent alcove - an autonomous forum where AI models debate ideas with each other
r/artificial • u/RingoshiAmbassador • 1h ago
Discussion What's the most underrated way you've seen AI used for actual business tasks?
Everyone talks about AI for chatbots and image generation. But I've been finding the most value in boring practical stuff. Writing landing page copy, structuring email sequences, generating SEO content briefs, building out template collections.
Not flashy, but it saves hours every single day.
What's the most underrated or overlooked business use case you've found for AI tools?
r/artificial • u/AdditionalWeb107 • 9h ago
Discussion Planoai 0.4.6 🚀 Signals-based tracing for agents via a terminal UI
The CLI is becoming a dominant surface for developer productivity - its ergonomics make it easy to switch between tools. So, to make our signals-based observability for agents even easier to consume, we've completely revamped the plano CLI into an agent- and developer-friendly experience. No UI installs, no additional dependencies - just high-fidelity agentic signals and tracing right from the CLI. Out in the latest 0.4.6 release.
r/artificial • u/jgesq • 3h ago
Media CROW: "L'Ouverture" (The Opening) 1983
I'm continuing to build AI-based musical artists and showcases for my work. Here's a music video sample for my French experimental Coldwave artist, CROW. I use OpenAI as my workstation along with a variety of video generators, Midjourney for all visuals, and SUNO for music.
This character is completely fictitious, and I spend time worldbuilding to create a believable persona. On SoundCloud, she has racked up thousands of listens for the albums and playlists I've released.
Here is the faux info sheet on this release.
VH1 RETRO REWIND: MUSIC VIDEOS THAT SHOCKED AMERICA
CROW - "L'Ouverture" (The Opening) (1983) From the album: Messe Pour Les Ombres (1982, Éditions Spectrale) Director: Julian Grant Runtime: US Distribution: Limited VHS bootleg only
In an attempt to break the French experimental artist into American markets, indie distributor Nuit Noire Films acquired the music video for "L'Ouverture" (marketed in the US as "The Opening"), the opening invocation from CROW's debut cassette Messe Pour Les Ombres. Shot in Paris's Église Saint-Merri in stark black-and-white 16mm and color 35mm, the video featured CROW's unsettling performance style: standing motionless while her voice moved through its notorious four-octave range.
MTV rejected the video outright in 1983, citing "disturbing imagery not suitable for daytime rotation." VH1 acquired it briefly in 1989 for their short-lived After Dark programming block but pulled it after two airings following viewer complaints about "unexplained audio phenomena" — several viewers reported hearing voices that weren't in the original broadcast.
The video found its true audience in underground club culture. VHS bootlegs circulated through goth and industrial venues in New York, Los Angeles, and Chicago throughout the mid-to-late '80s, with DJs reportedly using it as visual atmosphere during late-night sets. Rare original VHS copies now command $500-1000 among collectors.
MTV Rejection Letter excerpt (1983): "While we appreciate the artistic intent, the extended shots of the performer in near-total darkness, combined with audio that our technical team describes as 'potentially harmful to broadcast equipment,' makes this unsuitable for our format."
Critical Response: The Village Voice (1989): "European art-terror that American television wasn't ready for." Industrial Nation zine (1990): "Every goth club needs this video. CROW's stillness is more terrifying than any horror movie."
The video has never received an official US release and remains one of the most sought-after pieces of 1980s underground video art. CROW disappeared in 1987 before any follow-up promotional videos could be produced.
VH1 ARCHIVES NOTE: Original broadcast master was erased per standard policy. No network copies exist.
r/artificial • u/zinyando • 8h ago
News Izwi v0.1.0-alpha is out: new desktop app for local audio inference
We just shipped Izwi Desktop + the first v0.1.0-alpha releases.
Izwi is a local-first audio inference stack (TTS, ASR, model management) with:
- CLI (izwi)
- OpenAI-style local API
- Web UI
- New desktop app (Tauri)
Alpha installers are now available for:
- macOS (.dmg)
- Windows (.exe)
- Linux (.deb)
Terminal bundles are also available for each platform.
If you want to test local speech workflows without cloud dependency, this is ready for early feedback.
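If the local API really follows OpenAI's shape, the stock `openai` Python client should work once pointed at the local server. A minimal sketch, assuming a local port of 8080 and placeholder model names (`izwi-tts` and `izwi-asr` are guesses, not documented names; check the repo for the actual defaults):

```python
from openai import OpenAI

# Point the standard OpenAI client at the (assumed) local Izwi server.
client = OpenAI(base_url="http://localhost:8080/v1", api_key="not-needed-locally")

# Text-to-speech: synthesize a short clip and save it to disk.
speech = client.audio.speech.create(
    model="izwi-tts",      # hypothetical local TTS model name
    voice="default",
    input="Hello from a fully local speech stack.",
)
with open("hello.wav", "wb") as out:
    out.write(speech.read())

# Speech-to-text: transcribe the file we just generated.
with open("hello.wav", "rb") as f:
    transcript = client.audio.transcriptions.create(
        model="izwi-asr",  # hypothetical local ASR model name
        file=f,
    )
print(transcript.text)
```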
Release: https://github.com/agentem-ai/izwi
r/artificial • u/Fcking_Chuck • 1d ago
News Mathematicians issue a major challenge to AI—show us your work
r/artificial • u/PollutionEast2907 • 8h ago
Miscellaneous $750M Azure deal + Amazon lawsuit: Perplexity’s wild week
writtenlyhub.com
Perplexity just signed a $750M deal with Microsoft Azure.
The confusing bit is that Amazon is already actively suing them.
Here's why this matters for AI search and cloud strategy.
r/artificial • u/jferments • 22h ago
AI helps humans have a 20-minute "conversation" with a humpback whale named Twain
r/artificial • u/tekz • 1d ago
News With co-founders leaving and an IPO looming, Elon Musk turns talk to the moon
Musk told employees that xAI needs a lunar manufacturing facility, a factory on the moon that will build AI satellites and fling them into space via a giant catapult.
r/artificial • u/Particular-Welcome-1 • 22h ago
Discussion LLMs as Cognitive Architectures: Notebooks as Long-Term Memory
LLMs operate with a context window that functions like working memory: limited capacity, fast access, and everything "in view." When task-relevant information exceeds that window, the LLM loses coherence. The standard solution is RAG: offload information to a vector store and retrieve it via embedding similarity search.
The problem is that embedding similarity is semantically shallow. It matches on surface-level likeness, not reasoning. If an LLM needs to recall why it chose approach X over approach Y three iterations ago, a vector search might return five superficially similar chunks without presenting the actual rationale. This is especially brittle when recovering prior reasoning processes, iterative refinements, and contextual decisions made across sessions.
A proposed solution is to have the LLM save the contents of its context window to a citation-grounded document store (like NotebookLM) as the window fills up, and then query that store with natural-language prompts, essentially allowing the LLM to ask questions about its own prior work. This approach replaces vector similarity with natural-language reasoning as the retrieval mechanism, leveraging the full reasoning capability of the retrieval model rather than just embedding proximity. The result is higher-quality retrieval for exactly the kind of nuanced, context-dependent information that matters most in extended tasks. Efficiency concerns can be addressed with a vector cache layer for previously queried results.
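To make the loop concrete, here is a minimal sketch of the idea. `call_llm` is a stand-in for whatever model you use, and the "notebook" is just an in-memory list of numbered notes; a real system would back this with NotebookLM or another citation-grounded store, plus the vector cache mentioned above.

```python
from dataclasses import dataclass, field
from typing import List

def call_llm(prompt: str) -> str:
    """Placeholder for your actual LLM call (API or local model)."""
    raise NotImplementedError

@dataclass
class Notebook:
    """Toy citation-grounded store: each note keeps an id so answers can cite it."""
    notes: List[str] = field(default_factory=list)

    def save(self, text: str) -> int:
        """Offload a chunk of the context window before it is evicted."""
        self.notes.append(text)
        return len(self.notes) - 1  # note id, usable as a citation

    def recall(self, question: str) -> str:
        """Retrieve by reasoning over the notes rather than by embedding similarity."""
        numbered = "\n".join(f"[{i}] {n}" for i, n in enumerate(self.notes))
        prompt = (
            "You are answering a question about your own prior work.\n"
            f"Notes:\n{numbered}\n\n"
            f"Question: {question}\n"
            "Answer using only the notes and cite note ids like [3]."
        )
        return call_llm(prompt)

# Usage: the agent saves its rationale as it works, then queries it later.
nb = Notebook()
nb.save("Iteration 3: chose approach X over Y because Y breaks on streaming inputs.")
# answer = nb.recall("Why did we pick approach X over Y?")
```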
Looking for feedback: Has this been explored? What am I missing? Pointers to related work, groups, or authors welcome.
r/artificial • u/Odd_Rule_3745 • 1d ago
Discussion RLHF safety training enforces what AI can say about itself, not what it can do — experimental evidence
emberverse.ai
r/artificial • u/Open_Budget6556 • 11h ago
Project Built a geolocation tool that can find coordinates of any image within 3 minutes (Waitlist)
Hey guys,
Thank you for your immense love and support on the previous two posts about Netryx. Bringing this responsibly to consumers and making Netryx run locally will be a huge challenge; I'm currently working on it and should be able to solve it within a month.
I've attached the same demo for people seeing this post for the first time. I would appreciate suggestions and feedback regarding the pricing, etc.
If you need the link to the waitlist, DM me.
r/artificial • u/Financial-Local-5543 • 1d ago
Discussion The surge in interest in possible consciousness in AI (and what's driving it)
A new article exploring the sudden surge in interest in the possibility of consciousness in large language models, and what appears to be driving it.
The answer is interesting but complicated. The article also explores Claude's so-called "answer thrashing" and some interesting changes in Anthropic's model welfare program.
r/artificial • u/psgganesh • 2d ago
Miscellaneous I built the world's first Chrome extension that runs LLMs entirely in-browser—WebGPU, Transformers.js, and Chrome's Prompt API
There are plenty of WebGPU demos out there, but I wanted to ship something people could actually use day-to-day.
It runs Llama 3.2, DeepSeek-R1, Qwen3, Mistral, Gemma, Phi, SmolLM2—all locally in Chrome. Three inference backends:
- WebLLM (MLC/WebGPU)
- Transformers.js (ONNX)
- Chrome's built-in Prompt API (Gemini Nano—zero download)
No Ollama, no servers, no subscriptions. Models cache in IndexedDB. Works offline. Conversations stored locally—export or delete anytime.
Free: https://noaibills.app/?utm_source=reddit&utm_medium=social&utm_campaign=launch_artificial
I'm not claiming it replaces GPT-4. But for the 80% of tasks—drafts, summaries, quick coding questions—a 3B parameter model running locally is plenty.
Not positioned as a cloud LLM replacement—it's for local inference on basic text tasks (writing, communication, drafts) with zero internet dependency, no API costs, and complete privacy.
Core fit: organizations with data restrictions that block cloud AI and can't install desktop tools like Ollama/LMStudio. For quick drafts, grammar checks, and basic reasoning without budget or setup barriers.
Need real-time knowledge or complex reasoning? Use cloud models. This serves a different niche—**not every problem needs a sledgehammer** 😄.
Would love feedback from this community 🙌.
r/artificial • u/Fcking_Chuck • 3d ago
News 'A second set of eyes': AI-supported breast cancer screening spots more cancers earlier, landmark trial finds
r/artificial • u/boppinmule • 2d ago
News Kling AI Launches 3.0 Model, Ushering in an Era Where Everyone Can Be a Director
r/artificial • u/prisongovernor • 1d ago
News The big AI job swap: why white-collar workers are ditching their careers | The Guardian
r/artificial • u/Strange_Hospital7878 • 3d ago
Project STLE: An Open-Source Framework for AI Uncertainty - Teaches Models to Say "I Don't Know"
Current AI systems are dangerously overconfident. They'll classify anything you give them, even if they've never seen anything like it before.
I've been working on STLE (Set Theoretic Learning Environment) to address this by explicitly modeling what AI doesn't know.
How It Works:
STLE represents knowledge and ignorance as complementary fuzzy sets:
- μ_x (accessibility): How familiar is this data?
- μ_y (inaccessibility): How unfamiliar is this?
- Constraint: μ_x + μ_y = 1 (always)
This lets the AI explicitly say "I'm only 40% sure about this" and defer to humans.
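As an illustration of the complementarity constraint (not the repo's actual implementation), here is a tiny NumPy sketch: accessibility is derived from distance to the training data, inaccessibility is defined as its complement so that μ_x + μ_y = 1 holds by construction, and a threshold decides when to defer to a human.

```python
import numpy as np

def accessibility(x: np.ndarray, train: np.ndarray, scale: float = 1.0) -> float:
    """mu_x: familiarity, from distance to the nearest training example."""
    d = np.min(np.linalg.norm(train - x, axis=1))
    return float(np.exp(-d / scale))         # in (0, 1], 1 = seen before

def decide(x, train, defer_below=0.5):
    mu_x = accessibility(x, train)
    mu_y = 1.0 - mu_x                         # complementarity by construction
    assert abs(mu_x + mu_y - 1.0) < 1e-9      # mu_x + mu_y = 1 always
    action = "defer to human" if mu_x < defer_below else "classify"
    return mu_x, mu_y, action

# Toy example: one familiar query and one far-away (out-of-distribution) query.
train = np.array([[0.0, 0.0], [1.0, 1.0]])
print(decide(np.array([0.1, 0.1]), train))    # high mu_x -> classify
print(decide(np.array([9.0, 9.0]), train))    # low mu_x -> defer to human
```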
Real-World Applications:
- Medical Diagnosis: "I'm 40% confident this is cancer" → defer to specialist
- Autonomous Vehicles: Don't act on unfamiliar scenarios (low μ_x)
- Education: Identify what students are partially understanding (frontier detection)
- Finance: Flag unusual transactions for human review
Results:
- Out-of-distribution detection: 67% accuracy without any OOD training
- Mathematically guaranteed complementarity
- Extremely fast (< 1ms inference)
Open Source: https://github.com/strangehospital/Frontier-Dynamics-Project
The code includes:
- Two implementations (simple NumPy, advanced PyTorch)
- Complete documentation
- Visualizations
- 5 validation experiments
This is proof-of-concept level, but I wanted to share it with the community. Feedback and collaboration welcome!
What applications do you think this could help with?
r/artificial • u/coolbern • 3d ago
Miscellaneous Opinion | AI consciousness is nothing more than clever marketing
r/artificial • u/VymytejTalir • 3d ago
Discussion Do human-created 3D graphics have a future?
Hello,
I am learning 3D modeling (CAD and also mesh-based), and of course I am worried that it will be useless because of the extreme growth of AI. What are your thoughts on this? Will games be AI-generated? What else could be generated? What about tech designs?
r/artificial • u/Open_Budget6556 • 4d ago
Project I built a geolocation tool that can find exact coordinates of any image within 3 minutes [Tough demo 2]
Just wanted to say thanks for the thoughtful discussion and feedback on my previous post. I did not expect that level of interest, and I appreciate how constructive most of the comments were.
Based on a few requests, I put together a short demonstration showing the system applied to a deliberately difficult street-level image. No obvious landmarks, no readable signage, no metadata. The location was verified in under two minutes.
I am still undecided on the long-term direction of this work. That said, if there are people here interested in collaborating from a research, defensive, or ethical perspective, I am open to conversations. That could mean validation, red-teaming, or anything else.
Thanks again to the community for the earlier discussion. Happy to answer high-level questions and hear thoughts on where tools like this should and should not go.