AI & ML interests

🤖🤗multi media inputs and outputs to create augmented culture and better outcomes for humans everywhere.❤️🚀

mindchain 
posted an update 9 days ago
Claude Code Self & Continual Learning

Hey everyone! 👋

30 GitHub Stars in 4 Days - Thank You!

I'm really grateful for the positive response to the Claude Reflect System. In just 4 days, 30 developers have shown interest by starring the project. Thank you so much!

What Is Claude Reflect?

Correct once, never again. Claude Reflect helps Claude Code remember your corrections and preferences across sessions. Instead of repeating the same feedback, the system learns and applies it automatically.

Main Features:

🧠 Learning System
- Detects corrections and preferences from conversations
- Stores them permanently in skill files
- Applies learnings in future sessions

🔒 Safety First
- Automatic backups before changes
- YAML validation
- Git version control

⚡ Two Modes
- Manual: Run /reflect when you want
- Auto: Reflects automatically at session end
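
The safety steps above (back up before changing a skill file, then validate) can be sketched in a few lines of stdlib Python. This is an illustrative sketch, not the project's actual code — real validation would use PyYAML's `safe_load`, and the function names here are made up:

```python
import shutil
from pathlib import Path

def backup_then_write(path: Path, new_text: str) -> None:
    """Copy the file to <name>.bak before overwriting it."""
    if path.exists():
        shutil.copy2(path, path.with_suffix(path.suffix + ".bak"))
    path.write_text(new_text, encoding="utf-8")

def looks_like_yaml_mapping(text: str) -> bool:
    """Cheap structural check; real code would use yaml.safe_load."""
    return all(
        line.lstrip().startswith(("-", "#")) or ":" in line
        for line in text.splitlines() if line.strip()
    )

update = "preferences:\n- use pytest instead of unittest\n"
if looks_like_yaml_mapping(update):               # validate first
    backup_then_write(Path("skill.yaml"), update)  # then back up and write
```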

How It Works

If you correct Claude to use pytest instead of unittest, this preference gets saved. Next time, Claude will remember and use pytest automatically. It's that simple.
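
Conceptually the loop is just "detect, persist, re-inject". A toy sketch of the persistence half (the file layout and function names are illustrative, not the project's actual API):

```python
from pathlib import Path

def save_preference(skill_file: Path, topic: str, preference: str) -> None:
    """Append a learned correction as a YAML list item."""
    skill_file.parent.mkdir(parents=True, exist_ok=True)
    with skill_file.open("a", encoding="utf-8") as f:
        f.write(f"- topic: {topic}\n  preference: {preference}\n")

def load_preferences(skill_file: Path) -> str:
    """Read everything back for injection into the next session's context."""
    return skill_file.read_text(encoding="utf-8") if skill_file.exists() else ""

skills = Path("skills/testing.yaml")
save_preference(skills, "testing", "use pytest instead of unittest")
print(load_preferences(skills))  # survives across sessions
```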

Getting Started

1. Clone the repository
2. Install dependencies
3. Activate the skill
4. Try it out!

The python-project-creator example shows how the system learns from your feedback.

Give It a Try

https://github.com/haddock-development/claude-reflect-system

Feel free to check it out, give feedback, or contribute. Every bit of input helps improve the project!

Thank you so much for your support!

---
#ClaudeCode #AI #MachineLearning #ContinualLearning #OpenSource #Developer #Coding #Python #Productivity #DevTools #GitHub #SoftwareDevelopment #Programming #AIAssistant #DeveloperTools #CodeQuality #Tech



Nymbo 
posted an update 13 days ago
Genuine recommendation: You should really use this AutoHotKey macro. Save the file as macros.ahk and run it. Before sending a prompt to your coding agent, press Ctrl + Alt + 1 and paste your prompt to any regular chatbot. Then send the output to the agent. This is the actual, boring, real way to "10x your prompting". Use the other number keys to avoid repeating yourself over and over again. I use this macro prolly 100-200 times per day. AutoHotKey isn't as new or hype as a lot of other workflows, but there's a reason it's still widely used after 17 years. Don't overcomplicate it.

; Requires AutoHotkey v1.1+

; All macros are `Ctrl + Alt + <variable>`

^!1::
    Send, Please help me more clearly articulate what I mean with this message (write the message in a code block):
return

^!2::
    Send, Please make the following changes:
return

^!3::
    Send, It seems you got cut off by the maximum response limit. Please continue by picking up where you left off.
return


In my experience the past few months, Ctrl + Alt + 1 works best with Instruct models (non-thinking). Reasoning causes some models to ramble and miss the point. I've just been using GPT-5.x for this.
mindchain 
posted an update 13 days ago
Scaling Physical AI: SAM 3D, NVIDIA Cosmos, and Unreal Engine!

The "Sim-to-Real" gap is officially history. In early 2026, we are no longer just rendering data; we are simulating reality. By bridging Meta’s SAM 3D, Unreal Engine, and the NVIDIA Cosmos suite, we’ve built an autonomous pipeline for Physical AI that evolves itself.

The 2026 Tech Stack:
SAM 3D: Generates high-fidelity digital twins from 2D photos in seconds.

Unreal Engine + MCP: The AI "Director" orchestrates environments via the Model Context Protocol, providing perfect Ground Truth.

NeMo Data Designer: The orchestration hub on GitHub. Following NVIDIA’s acquisition of Gretel in early 2025, Gretel’s generative privacy and tabular-data technology is now fully integrated here.

NVIDIA Cosmos Transfer: Neural rendering that adds hyper-realism to Unreal Engine outputs.

NVIDIA Cosmos Predict: Predicts physically accurate motion (falling, sliding) without manual animation.

NVIDIA Cosmos Reason: The automated supervisor checking every frame for logical and physical consistency.

The Workflow:
Asset Capture: SAM 3D turns real-world photos into Nanite meshes for Unreal Engine.

Orchestration: NeMo Data Designer (with Gretel-powered integrity) defines the data schema, while AI builds the world in Unreal Engine.

Completion: NVIDIA Cosmos (Transfer & Predict) adds photorealism and physics, while NVIDIA Cosmos Reason guarantees quality.

By combining Gretel’s data heritage with the visual power of Unreal Engine, we generate 100,000 perfect frames per hour. Weights and tools are on Hugging Face. Stop labeling. Start simulating.

#PhysicalAI #SAM3D #NVIDIACosmos #UnrealEngine #NeMo #Gretel #SyntheticData #HuggingFace #Robotics #AI #ComputerVision
mindchain 
posted an update 14 days ago
Skill Reflect: A Concept for Automated AI Skill Mastery

Let’s be real for a second: most of us are using AI all wrong. We send a prompt, get a "meh" answer, and then spend twenty minutes fixing it ourselves. That’s not a workflow; that’s just a digital chore. I wanted to see if I could push Claude further—to see if I could build a system that actually learns and refines itself. That’s how the Claude-Reflect-System (Skill Reflect) was born.

But here’s the thing: this isn’t some polished, final product. It’s a concept. It’s a blueprint. I’ve built the foundation of a recursive reflection loop that forces the AI to step back, look at its work, and act as its own harshest critic. It identifies the "skill delta"—the gap between "okay" and "mastery"—and closes it. This logic isn't just for Claude; you can grab this architecture and drop it right into codex-cli, terminal agents, or whatever stack you're building.

I’m a big believer in the law of causality. Action, reaction. Cause and effect. If you control the cause—the way the AI thinks about its mistakes—you dictate the effect: a perfected skill. This is a playground for builders who are tired of stochastic guessing. I want you to take this. Fork it. Break it. Make it better. This is an open invitation to the community to take this reflection loop and see how far we can push the boundaries of agentic reasoning. Whether you're building Claude Code plugins or just want to automate your self-learning, the code is there for you to smash. Stop accepting the first draft. Let’s build something that actually thinks.

https://github.com/haddock-development/claude-reflect-system

#Skills #ClaudeCode #ClaudeCodeSkills #ClaudeCodePlugins #ClaudeCodeMarketplace #CodexCLI #AI #SelfLearning #Automation #OpenSource #LLM #Reasoning #Causality #Matrix #Concept
mindchain 
posted an update 15 days ago
Neural Traffic Control: Orchestrating Multi-Path Reasoning 🚥
The future of AI isn't just about "better" models—it’s about high-precision orchestration. We are moving from linear processing to Parallel MTP-Reasoning, where we manage neural traffic across stabilized, transparent, and recursive highways.

1️⃣ The Backbone: Stabilized High-Dimensional Routing (arXiv:2512.24880) Using DeepSeek’s mHC (Manifold-Constrained Hyper-Connections), we solve the instability of deep MoE architectures. By projecting weight updates onto the Birkhoff Polytope, we ensure that our "Simpsons-style" expert lanes maintain mathematical identity. This is the hardware-level stability needed to run multiple reasoning paths without collapse.

2️⃣ The Vision: Gemma Scope 2 & Feature Steering You can't steer what you can't see. Gemma Scope 2 provides the "X-ray" for our highways. By using Sparse Autoencoders (SAEs), our Meta-Controller identifies the active features in each expert lane. We don't just route data; we route intent by monitoring feature-drift in real-time.

3️⃣ The Logic: Recursive Open Meta-Agents (arXiv:2512.24601) We integrate the ROMA (Recursive Open Meta-Agent) framework. Instead of a flat response, the model operates in a recursive loop, refining its internal state before any output occurs. This is the "brain" of our [Meta-Controller GitHub Repo], enabling the model to simulate and discard weak logic internally.

4️⃣ The Simulation: Parallel MTP-Reasoning This is where it comes together: Multi-Token Prediction (MTP) meets Parallel Simulation. Our Python-driven controller runs three parallel Gemma 3 instances.

The Process: 3 paths generated simultaneously.

The Filter: A 500-token lookahead window.

The Decision: The Meta-Controller uses SAE-data from Gemma Scope to select the path with the highest logical fidelity.

The Result: A self-correcting, transparent, and multi-threaded reasoning engine. We aren't just scaling parameters; we are scaling architectural precision. 🧠
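
Stripped of the model-specific parts, the generate-filter-decide steps above reduce to a parallel map plus an argmax. In this sketch the generator and scorer are deliberate stand-ins (the post's system uses Gemma 3 instances for generation and Gemma Scope SAE features for scoring):

```python
from concurrent.futures import ThreadPoolExecutor

def generate_path(seed: int, lookahead: int = 500) -> str:
    """Stand-in for one model instance decoding a lookahead window."""
    return f"candidate-{seed} ({lookahead}-token lookahead)"

def fidelity_score(path: str) -> float:
    """Stand-in for the SAE-based logical-consistency score."""
    return len(path) % 7 / 7.0  # deterministic dummy score

# The Process: 3 paths generated simultaneously
with ThreadPoolExecutor(max_workers=3) as pool:
    paths = list(pool.map(generate_path, range(3)))

# The Decision: keep the path with the highest score
best = max(paths, key=fidelity_score)
```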

mindchain 
posted an update 17 days ago
The Architecture of 2026: Beyond the Token Trap 🚀

We are witnessing a tectonic shift in Transformer architecture. It’s no longer just about "predicting the next token"—it’s about executing latent plans on a high-speed data highway.

What happens when we combine DeepSeek’s stability with Google’s strategic intelligence?

1️⃣ The Infrastructure: DeepSeek’s mHC Moving from a single-lane residual stream to a multi-lane highway. Using the Birkhoff Polytope, mHC ensures mathematical stability (Identity Mapping) while routing specialized data through dedicated lanes.

2️⃣ The Intelligence: Google’s Meta-Controller An internal AI unit that lives inside the Transformer. It escapes the "Token Trap" by extracting data to create a latent plan, steering the model via Temporal Abstraction.

The Synergy: In a Topological Transformer, the Meta-Controller finally has the "dedicated lanes" it needs to steer complex reasoning without causing gradient explosions.

We aren't just making models bigger; we are making them architecturally smarter. 🧠

#MachineLearning #DeepSeek #GoogleAI #Transformer #AIArchitecture
Reubencf 
posted an update 17 days ago
Happy New Year 2026
I have planned to build many things this year; most of them will be cheaper or free alternatives to paid products.

I am looking forward to releasing some useful Spaces ✌️ Stay tuned!
Reubencf 
posted an update 22 days ago
As 2025 is ending, I would like to thank everyone for trying out
Reubencf/Nano_Banana_Editor

Looking forward to building and releasing more for the open-source community in the future.

daavoo 
posted an update about 1 month ago
2025: The Year of Agents.
2026: The Year of Local Agents?

Relying on cloud-hosted LLMs is often overkill. While frontier models still lead in complex coding, local models are now more than capable of handling many agentic workflows—with zero latency and total privacy.

To help bridge the gap between local inference and usable agents, I’m releasing agent.cpp: https://github.com/mozilla-ai/agent.cpp

It provides minimal, high-performance building blocks for agents in C++, built directly around the awesome llama.cpp ecosystem.
Stop sending your data to a remote API. Start building and running agents on your own hardware.
Nymbo 
posted an update about 1 month ago
🚨 New tool for the Nymbo/Tools MCP server: The new Agent_Skills tool provides full support for Agent Skills (Claude Skills but open-source).

How it works: The tool exposes the standard discover/info/resources/validate actions. Skills live in /Skills under the same File_System root, and any bundled scripts run through Shell_Command, no new infrastructure required.

Agent_Skills(action="discover")  # List all available skills
Agent_Skills(action="info", skill_name="music-downloader")  # Full SKILL.md
Agent_Skills(action="resources", skill_name="music-downloader")  # Scripts, refs, assets


I've included a music-downloader skill as a working demo; it wraps yt-dlp for YouTube/SoundCloud audio extraction.

Caveat: On HF Spaces, Shell_Command works for most tasks, but some operations (like YouTube downloads) are restricted due to the container environment. For full functionality, run the server locally on your machine.

Try it out ~ https://www.nymbo.net/nymbot
KingNish 
posted an update about 1 month ago
Muon vs MuonClip vs Muon+Adamw

Muon has gone from an experiment to a mainstream optimizer, but does it hold up for fine‑tuning? We ran head‑to‑head tests on Qwen3‑4B (10k+ high‑quality instruction rows) to find out.

Short story: Pure Muon converged fastest at the start, but its gradient‑norm spikes made training unstable. MuonClip (Kimi K2’s clipping) stabilizes long pretraining runs, yet in our small‑scale fine‑tune it underperformed, with lower token accuracy and slower convergence. The winner was the hybrid: Muon for 2D layers + AdamW for 1D layers. It delivered the best balance of stability and final performance and even beat vanilla AdamW.

Takeaway: for small-scale fine-tuning, hybrid = practical and reliable.
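
The hybrid split is simple to express: route 2-D weight matrices to Muon and everything else (biases, norms) to AdamW. A shape-only sketch of the grouping rule — the parameter names are illustrative, and the actual optimizer classes come from your training stack:

```python
# Map each parameter to an optimizer by dimensionality:
# 2-D weight matrices -> Muon, 1-D params (biases, norms) -> AdamW.
params = {
    "attn.qkv.weight": (4096, 4096),
    "attn.qkv.bias": (4096,),
    "ln.weight": (4096,),
    "mlp.up.weight": (11008, 4096),
}

muon_params = [name for name, shape in params.items() if len(shape) == 2]
adamw_params = [name for name, shape in params.items() if len(shape) != 2]
```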

Next Step: scale to larger models/datasets to see if Muon’s spikes become catastrophic or if clipping wins out.

Full Blog Link: https://huggingface.co/blog/KingNish/optimizer-part1
hesamation 
posted an update about 2 months ago
this is big... 50 AI researchers from Bytedance, Alibaba, Tencent, and other labs/universities just published a 300-page paper with surprising lessons about coding models and agents (data, pre and post-training, etc).

key highlights:

> small LLMs can beat proprietary giants
RL (RLVR specifically) gives small open-source models an edge over big models in reasoning. a 14B model trained with RLVR on high-quality verified problems can match the performance of OpenAI's o3.

> models have a hard time learning Python.
mixing programming languages during pre-training is good, but Python behaves differently from statically typed languages. languages with similar syntax (Java and C#, or JavaScript and TypeScript) create high positive synergy. mixing Python heavily into the training of statically typed languages can actually hurt because of Python's dynamic typing.

> not all languages are equal (coding scaling laws)
the amount of data required to specialize a model on a language depends drastically on the language. the paper argues languages like C# and Java are easier to learn (less training data required), while languages like Python and JavaScript are actually trickier to learn, ironically (these are the languages AI is used for most :)

> MoE vs Dense (ability vs stability)
MoE models offer higher capacity, but are much more fragile during SFT than dense models. hyperparams in training have a more drastic effect in MoE models, while dense models are more stable. MoE models also require constant learning rate schedules to avoid routing instability.

> code models are "insecure" by default (duh)
training on public repos makes models learn years of accumulated insecure coding patterns. safety fine-tuning often fails to work much on code. a model might refuse to write a hate speech email but will happily generate a SQL-injection vulnerable function because it "works."

read the full paper:
From Code Foundation Models to Agents and Applications: A Practical Guide to Code Intelligence (2511.18538)
Nymbo 
posted an update about 2 months ago
🚀 I've just shipped a major update to the Nymbo/Tools MCP server: the Agent_Terminal, a single "master tool" that cuts token usage by over 90%!

Anthropic found 98.7% context savings using code execution with MCP, Cloudflare published similar findings. This is my open-source implementation of the same idea.

# The Problem

Traditional MCP exposes every tool definition directly to the model. With 12 tools, that's thousands of tokens consumed *before the conversation even starts*. Each tool call also passes intermediate results through the context window — a 10,000-row spreadsheet? That's all going into context just to sum a column.

# The Solution: One Tool to Rule Them All

Agent_Terminal wraps all 12 tools (Web_Search, Web_Fetch, File_System, Generate_Image, Generate_Speech, Generate_Video, Deep_Research, Memory_Manager, Obsidian_Vault, Shell_Command, Code_Interpreter) into a single Python code execution gateway.

Instead of the model making individual tool calls, it writes Python code that orchestrates the tools directly:

# Search for Bitcoin price
result = Web_Search("current price of bitcoin", max_results=3)
print(result)


Don't know what tools are available? The agent can discover them at runtime:

print(search_tools('image'))  # Find tools by keyword
print(usage('Generate_Image'))  # Get full docs for a specific tool


The individual direct tool calls are all still there, but they can be disabled when using Agent_Terminal. Try it now ~ https://www.nymbo.net/nymbot
Reubencf 
posted an update about 2 months ago
Hey everyone! 👋

I am thrilled to present MCP-1st-Birthday/Reuben_OS, my submission for the Hugging Face MCP 1st Birthday Hackathon (Creative Track).

ReubenOS is a virtual cloud-based operating system designed specifically to act as a backend for Claude Desktop via the Model Context Protocol (MCP). It gives Claude a persistent environment to work in!

✨ Key Features

* 📱 Flutter IDE: Claude can write Flutter code and I can view/execute the files directly in the ReubenOS dashboard.
* 🎵 AI Audio Studio: Integrated with ElevenLabs to generate songs and voiceovers from text prompts within Claude.
* 🔒 Secure File System: A passkey-protected file system (private & public folders) to store code, JSON, and documents.
* 🧠 Gemini Integration: Access Google's Gemini model directly inside the OS.
* 📝 Quiz Engine: Ask Claude to "Create a Python quiz," and it deploys a graded interactive quiz to the web instantly.
lunarflu 
posted an update 2 months ago
💸🤑You don’t need 100 GPUs to train something amazing!

Our Smol Training Playbook teaches you a better path to world-class LLMs, for free!

Check out the #1 trending space on 🤗 :
HuggingFaceTB/smol-training-playbook
Nymbo 
posted an update 3 months ago
I've added an 11th tool to the Nymbo/Tools MCP server: Obsidian_Vault. I'd argue it's far more context-efficient than any other Obsidian MCP I've seen, and it doesn't require any plugins. There are also some big improvements to the Web_Search and Web_Fetch tools.

# Obsidian_Vault Tool

It's basically a read-only version of the File_System tool, but it works so well for navigating Obsidian without unnecessary context. It supports recursive (full-text) search across the entire vault, and supports offset so the agent can "scroll" through a document without re-consuming tokens.

Run the server locally and set the OBSIDIAN_VAULT_ROOT environment variable to your vault's root path. If you don't use Obsidian, this is perfectly usable as simply a read-only filesystem.
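
The offset mechanic is the interesting part: instead of re-reading a whole note, the agent asks for the next window of lines. Roughly like this — an illustrative sketch, not the tool's actual code:

```python
from pathlib import Path

def read_window(note: Path, offset: int = 0, limit: int = 50) -> list[str]:
    """Return `limit` lines starting at `offset`, so the agent can
    "scroll" through a note without re-consuming earlier tokens."""
    return note.read_text(encoding="utf-8").splitlines()[offset : offset + limit]

note = Path("note.md")
note.write_text("\n".join(f"line {i}" for i in range(200)), encoding="utf-8")
page1 = read_window(note, offset=0)   # first 50 lines
page2 = read_window(note, offset=50)  # next page, no overlap
```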

# Web_Search Improvements

The Web_Search tool previously used only DuckDuckGo as its backend search engine, but it now also supports Bing, Brave, Yahoo, and Wikipedia. The default engine is auto, which provides results from all backends in recommended order. Web_Search still doesn't require any API key or auth.

There's also a new date filter to limit results to those created in the past day, week, month, or year. Oh, and uhh, SafeSearch is now off by default :)

# Web_Fetch Improvements

As context-efficient as the Markdown mode is for web browsing, sometimes it does lose important context in the conversion from HTML to Markdown. So I've added a new HTML mode to the Web_Fetch tool that basically executes a cURL request on the URL, returning the full HTML page if necessary.

# A Note on Claude Skills

I've been having fun with the new File_System and Shell_Command tools. Using Claude Skills doesn't currently work in the public HF space because of environment restrictions, but using Skills works perfectly well running locally.

Happy building ~