Oh sorry, it is .app, not .com. My bad, I'll fix it right away.
Sk md saad amin
Reality123b
AI & ML interests
None yet
Recent Activity
new activity
about 18 hours ago
DataMuncher-Labs/UltraMath-Reasoning-Small: Update README.md
replied to their post 1 day ago
liked a dataset 1 day ago
DataMuncher-Labs/UltiMath
Update README.md
2
#2 opened about 2 months ago
by Roman190928
replied to their post 1 day ago
posted an update 1 day ago
Post
236
Alright, so I had previously made two Reddit posts in r/quantum and r/quantum_computing for my QPU, QPU-1, but both of those posts got banned for being "irrelevant" to "academic discussion", so I'm doing it again here in Hugging Face Posts.
I have made a quantum processing unit with a million error-corrected qubits (not a simulator), which you can access here: https://qpu-1.vercel.app
I did try emailing a lot of professors and their students, but NONE responded, so please give me some support.
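For readers unfamiliar with the term, "error-corrected" generally means logical qubits encoded redundantly across many physical qubits. As a loose classical analogy only (this says nothing about how the linked QPU-1 actually works), the simplest error-correcting scheme is a 3-bit repetition code with majority-vote decoding:

```python
# Toy sketch: a classical 3-bit repetition code, the simplest analogy
# for how error correction turns several noisy carriers into one
# reliable logical bit. Illustrative only, not QPU-1's implementation.

def encode(bit):
    """Encode one logical bit as three physical copies."""
    return [bit] * 3

def decode(bits):
    """Majority vote: recovers the logical bit despite one flipped copy."""
    return 1 if sum(bits) >= 2 else 0

# A single bit-flip error on any one copy is corrected:
codeword = encode(1)
codeword[0] ^= 1          # corrupt one physical bit
assert decode(codeword) == 1
```

Real quantum codes (e.g. surface codes) do the quantum analogue of this, at the cost of many physical qubits per logical qubit.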
I have made a million error corrected qubit quantum processing unit (not a simulator) that you can access here: https://qpu-1.vercel.app
I did try emailing a lot of professors and their students but NONE responded so please give me some support.
upvoted a collection 3 days ago
reacted to Ujjwal-Tyagi's post 6 days ago
Post
2794
Public reports allege that Anthropic gobbled up trillions of tokens of copyrighted material and public data to build their castle. Now that they're sitting on top, they're begging for special laws to protect their profits while pulling the ladder up behind them.
But the hypocrisy meter just broke! They are accusing Chinese labs like DeepSeek, Minimax, and Kimi of "huge distillation attacks." The reality is that you can't just loot the entire internet's library, lock the door, and then sue everyone else for reading through the window. Stop trying to gatekeep the tech you didn't own in the first place. Read the complete article on it: https://huggingface.co/blog/Ujjwal-Tyagi/the-dark-underbelly-of-anthropic
upvoted an article 7 days ago
Article
The Dark Underbelly of Anthropic: How the "Responsible" AI Pioneer Built Claude on Pirated Data, Hypocrisy, and Hidden Dangers
•
3
reacted to Tonic's post with 🔥 13 days ago
Post
3189
hello my lovelies,
it is with great pleasure I present to you my working one-click-deploy, 16 GB RAM, completely free Hugging Face Spaces deployment.
repo: Tonic/hugging-claw (use git clone to inspect)
literally the one-click link: Tonic/hugging-claw
you can also run it locally and see for yourself:
docker run -it -p 7860:7860 --platform=linux/amd64 \
-e HF_TOKEN="YOUR_VALUE_HERE" \
-e OPENCLAW_GATEWAY_TRUSTED_PROXIES="YOUR_VALUE_HERE" \
-e OPENCLAW_GATEWAY_PASSWORD="YOUR_VALUE_HERE" \
-e OPENCLAW_CONTROL_UI_ALLOWED_ORIGINS="YOUR_VALUE_HERE" \
registry.hf.space/tonic-hugging-claw:latest
just a few minor details I'll take care of, but I wanted to share here first
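The same four variables could also be supplied via Docker's `--env-file` flag instead of repeated `-e` flags. A sketch, assuming the variable names from the command above (the file name `openclaw.env` is my own choice, and the values are still placeholders):

```shell
# Write the same four variables (placeholder values) to an env file,
# which docker run can consume with --env-file instead of four -e flags.
cat > openclaw.env <<'EOF'
HF_TOKEN=YOUR_VALUE_HERE
OPENCLAW_GATEWAY_TRUSTED_PROXIES=YOUR_VALUE_HERE
OPENCLAW_GATEWAY_PASSWORD=YOUR_VALUE_HERE
OPENCLAW_CONTROL_UI_ALLOWED_ORIGINS=YOUR_VALUE_HERE
EOF
# Then (not run here):
# docker run -it -p 7860:7860 --platform=linux/amd64 \
#   --env-file openclaw.env \
#   registry.hf.space/tonic-hugging-claw:latest
```

This keeps secrets out of shell history and makes the command easier to rerun.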
upvoted an article 15 days ago
Article
How to Build a Benchmark with a Private Test Set on Hugging Face
•
4
prithivMLmods/Qwen3-VL-4B-Instruct-Unredacted-MAX
1
#8065 opened 15 days ago
by Reality123b
I sent you an email. Check it out.
I actually have several. If you have an email or some way to contact you, I would be glad to email them to you.
Yes, I do. You can visit my Hugging Face profile; I have put my GitHub there if that helps.
My email is saadamin9873@gmail.com
Also, if you visit feedthejoe.com, that is my website; it explains everything and breaks it all down. I also have some published papers on there. I forgot about it, so I probably should use it more.
I have seen it, it is really good, ngl. But how did you do the language modeling?
Do you have a paper detailing every concept of megamind? It would be really great if one exists.
reacted to mrs83's post with 🔥 17 days ago
Post
2337
In 2017, my RNNs were babbling. Today, they are hallucinating beautifully.
10 years ago, getting an LSTM to output coherent English was a struggle.
10 years later, after a "cure" based on FineWeb-EDU and a custom synthetic mix for casual conversation, the results are fascinating.
We trained this on ~10B tokens on a single AMD GPU (ROCm). It is not a Transformer: Echo-DSRN (400M) is a novel recurrent architecture inspired by Hymba, RWKV, and xLSTM, designed to challenge the "Attention is All You Need" monopoly on the Edge.
The ambitious goal is to build a small instruct model with RAG and tool usage capabilities ( ethicalabs/Kurtis-EON1)
The Benchmarks (Size: 400M)
For a model this size (trained on <10B tokens), the specialized performance is surprising:
*SciQ*: 73.8% (this rivals billion-parameter models in pure fact retrieval).
*PIQA*: 62.3% (Solid physical intuition for a sub-1B model).
The Reality Check:
HellaSwag (29.3%) and Winogrande (50.2%) show the limits of 400M parameters and 10B tokens of training.
We are hitting the "Reasoning Wall" which confirms we need to scale to (hopefully) unlock deeper common sense. As you can see in the visualization (to be released soon on HF), the FineWeb-EDU bias is strong. The model is convinced it is in a classroom ("In this course, we explore...").
The Instruct Model is not ready yet and we are currently using curriculum learning to test model plasticity.
Source code and weights will not be released yet. This is not a fork or a fine-tune: the base model is built in-house at https://www.ethicalabs.ai/, with novel components that do not exist in current open libraries.
Call for Collaboration: I am looking for Peer Reviewers interested in recurrent/hybrid architectures. If you want to explore what lies beyond Transformers, let's connect!
Training diary: ethicalabs/Kurtis-EON1
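The recurrent state update that posts like this contrast with attention can be sketched in a few lines of plain Python. This is a toy Elman-style cell for illustration only, not Echo-DSRN (whose components are unreleased); the point is that per-token cost does not grow with sequence length:

```python
import math

def rnn_step(x, h, W_xh, W_hh, b):
    """One Elman-style recurrent update: h' = tanh(W_xh @ x + W_hh @ h + b).

    x is the current input vector, h the previous hidden state;
    weights and vectors are plain Python lists for clarity.
    """
    n = len(h)
    new_h = []
    for i in range(n):
        s = b[i]
        s += sum(W_xh[i][j] * x[j] for j in range(len(x)))
        s += sum(W_hh[i][j] * h[j] for j in range(n))
        new_h.append(math.tanh(s))
    return new_h

def run_sequence(xs, h0, W_xh, W_hh, b):
    """Fold a sequence into a fixed-size state: the hidden vector is the
    model's only memory, so each step is O(1) in sequence length,
    unlike attention's O(n) lookback over all previous tokens."""
    h = h0
    for x in xs:
        h = rnn_step(x, h, W_xh, W_hh, b)
    return h
```

Architectures like RWKV and xLSTM (cited as inspirations above) keep this constant-memory recurrence while adding gating and other mechanisms to recover Transformer-like quality.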
404 Not Found when using Qwen models with HuggingFaceInferenceAPI
1
2
#130 opened 2 months ago
by Milkfish033