Jainil Prajapati
@enough-jainil.bsky.social
🌐 IT Student | Sharing personal thoughts & insights on tech 🗳️ | Exploring the intersection of tech and society. #Techie
Reposted by Jainil Prajapati
Kitten TTS: a 25 MB open-source voice model that runs on CPU alone https://algogist.com/kitten-tts-the-25mb-ai-voice-model-thats-about-to-change-everything-runs-on-a-potato/

Kitten TTS is an AI voice model of only about 25 MB and 15M parameters that runs entirely on CPU, upending the assumption that text-to-speech has to be large, expensive, and GPU-bound. This breakthrough comes from KittenML […]
Original post on mistyreverie.org
mistyreverie.org
August 6, 2025 at 3:33 AM
zuck is literally putting servers in tents rn. meta’s ai race is wild
July 16, 2025 at 5:44 AM
So in short, GPT-4.5 is Ishaan Awasthi from Taare Zameen Par, who has EQ but…
February 28, 2025 at 11:53 PM
Is Claude 3.7 Sonnet better than Grok-3?
What's the verdict???
February 25, 2025 at 3:08 AM
🚨 BREAKING: Microsoft steps back from OpenAI's $500B Stargate project, citing overestimated AI demand.

The company has also canceled major data center leases, signaling a potential oversupply in AI infrastructure.

Is this a turning point for the AI boom?
February 24, 2025 at 2:28 PM
I don’t understand why people use thinking mode for normal tasks 🤷🏻‍♂️
February 24, 2025 at 6:10 AM
Grok 3, Elon Musk's own AI, was asked: "Who is the biggest disinformation spreader on Twitter?"

Its response? "Elon Musk."

The irony writes itself.
February 24, 2025 at 12:49 AM
If AI doesn’t excite you, something is terribly wrong with you.
February 23, 2025 at 4:51 PM
Google: 'We're excited to announce our new experimental AI model!'

The AI community: 'That's cute. Anyway, here's a model that predicts your dreams, cooks breakfast, and folds your laundry.'

Google can’t even finish saying 'AI' before someone drops a sci-fi-level invention. 😂
February 23, 2025 at 4:47 PM
Grok-3’s unhinged voice mode sounds like your girlfriend after you forgot your anniversary—passionate, relentless, and absolutely unforgiving. Proceed with caution… or an apology ready. 😅
February 23, 2025 at 11:31 AM
Microsoft's Majorana 1 vs Google's Willow: The quantum computing race heats up!

Microsoft's Majorana 1, powered by a new state of matter, promises scalability to 1M qubits. Meanwhile, Google's Willow boasts 105 qubits.

Dive into the future of computing:
doreturn.in/microsofts-m...
Microsoft's Majorana 1 vs. Google's Willow: Decoding the Quantum Computing Race
Microsoft and Google are competing in the quantum computing race with Majorana 1 and Willow chips. Microsoft aims for long-term stability with topological qubits, while Google demonstrates immediate q...
doreturn.in
February 22, 2025 at 5:55 PM
New AI wifey
January 26, 2025 at 12:45 AM
DeepSeek R1 1.5B thinking running locally on the edge, outperforming GPT-4o and Sonnet 3.5 on MATH. That's the future.
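For anyone wanting to try this locally, here is a minimal Python sketch using Hugging Face transformers; the checkpoint name and prompt are assumptions (the post only says "DeepSeek R1 1.5B"), not a definitive setup.

```python
# Minimal sketch: running a 1.5B DeepSeek R1 distill on CPU with Hugging Face transformers.
# The model id below is an assumption; the post does not name a specific checkpoint.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B"  # assumed checkpoint name
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)  # fits in a few GB of RAM, no GPU required

prompt = "Solve step by step: what is 17 * 24?"
inputs = tokenizer.apply_chat_template(
    [{"role": "user", "content": prompt}],
    add_generation_prompt=True,
    return_tensors="pt",
)
outputs = model.generate(inputs, max_new_tokens=512)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```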
January 26, 2025 at 12:27 AM
DeepSeek R1 is raising the bar for AI thinking.
It’s not just more natural and sophisticated—it’s leaps ahead when it comes to coding too.
January 24, 2025 at 1:03 PM
🚀 Gemini 2.0 Flash Thinking Exp-01-21 is here!

What’s new?
✨ Exp-01-21 variant (free in AI Studio & API)
✨ 1M token context window
✨ Native code execution
✨ Longer outputs & fewer contradictions

Plus: Major boosts in math, science, & multimodal reasoning (AIME, GPQA, MMMU).
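For context, a minimal sketch of calling this variant from Python with the google-generativeai SDK; the exact model name string and the prompt are assumptions inferred from the "Exp-01-21" label in the post, not a definitive recipe.

```python
# Minimal sketch: calling Gemini 2.0 Flash Thinking via the API (free tier in AI Studio).
# The model name string is an assumption based on the "Exp-01-21" variant named above.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # key from AI Studio

model = genai.GenerativeModel("gemini-2.0-flash-thinking-exp-01-21")
response = model.generate_content(
    "How many positive integers n < 1000 are divisible by both 6 and 15? Show your reasoning."
)
print(response.text)
```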
January 22, 2025 at 12:51 AM
DeepSeek R1 is making waves in AI reasoning! 🌊 With multi-stage innovation and transparent decision-making, it’s challenging giants like OpenAI. 🚀

Check out how this model is revolutionizing the game: doreturn.in/deepseek-r1-...

Thoughts on open-source AI taking the lead? 🤔
DeepSeek R1: Revolutionizing AI Reasoning with Multi-Stage Innovation
Discover how DeepSeek R1, a groundbreaking reasoning language model, uses innovative multi-stage training and distillation techniques to excel in reasoning, coding, and mathematics, rivaling OpenAI-o1...
doreturn.in
January 21, 2025 at 2:31 AM
Alright, so apparently Gemini 2.0 is dropping on Jan 23rd?! 👀 Google's stepping up the game with this 'Flash Thinking' update.

I’m hyped but also super curious—will it outshine GPT-4 or just be another 'good but not groundbreaking' release? What do you think? 🚀
January 20, 2025 at 1:49 AM
Are newer, more powerful AI models like GPT-4 and Llama 3.3 actually making AI less human in their responses? My theory: We’re moving in the wrong direction when it comes to natural speech. 🧵
January 19, 2025 at 3:27 AM
If your AI girlfriend is not a LOCALLY running, fine-tuned model, she's a prostitute.
January 12, 2025 at 3:24 AM
Microsoft’s Phi-4 is flipping the script on AI. 🤯 A 14B parameter model that outperforms larger ones in math reasoning and uses fewer resources? Efficiency is the new flex. 💡

Are smaller, smarter models the future of AI? Let’s talk. 👇 #AI #Phi4
https://tinyurl.com/yt59f78e
Phi-4: Microsoft’s Compact AI Redefining Performance and Efficiency
Discover Microsoft’s Phi-4, a groundbreaking 14B-parameter AI model that outperforms larger models in STEM, coding, and reasoning tasks. Learn how innovation in synthetic data and training redefines AI efficiency.
tinyurl.com
January 9, 2025 at 2:53 PM
🚨 Microsoft just dropped Phi-4—a small yet mighty LLM excelling in advanced math reasoning! 🧮

🔹 High-quality results at a compact size
🔹 Open-source under the MIT license

The frontier for efficient AI is here. Ready to explore?

#AI #MathReasoning #Phi4
January 9, 2025 at 12:05 PM
🚀 MiniPerplx just got a major update!

🔍 The search page is now the default homepage.
🤖 Powered by xAI's Grok 2 and Vercel AI SDK.
💡 A free, open-source alternative to Perplexity AI.

Try it out here: https://mplx.run

What do you think of open-source AI tools? 👇 #AI #OpenSource
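MiniPerplx itself is built on the Vercel AI SDK in TypeScript; purely as a rough illustration of the underlying Grok 2 call, here is a Python sketch against xAI's OpenAI-compatible endpoint. The base URL and model name are assumptions, not taken from the MiniPerplx codebase.

```python
# Rough illustration only: this shows a direct Grok 2 chat call via xAI's
# OpenAI-compatible API, not MiniPerplx's actual (Vercel AI SDK) integration.
from openai import OpenAI

client = OpenAI(
    api_key="YOUR_XAI_API_KEY",
    base_url="https://api.x.ai/v1",  # assumed OpenAI-compatible endpoint
)

response = client.chat.completions.create(
    model="grok-2-latest",  # assumed model name
    messages=[
        {"role": "system", "content": "You are a concise search assistant."},
        {"role": "user", "content": "Summarize the latest on small open-source TTS models."},
    ],
)
print(response.choices[0].message.content)
```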
January 6, 2025 at 1:18 PM
Still think AI Agents are just hype? Take a look at this growth curve. 📈

The search interest alone tells a story—2025 is going to be a massive year for AI. Buckle up. 🚀

What’s your prediction for where this is headed? 🤔
January 5, 2025 at 1:23 AM
Microsoft's recent paper just casually dropped parameter sizes for closed LLMs like GPT-4o (~200B), Claude 3.5 Sonnet (~175B), and more. 🤯

Turns out, GPT-4o-mini is ~8B, and o1-preview is ~300B. We're getting glimpses into the 'secret sauce' of these models. 👀
#AI #LLM
January 3, 2025 at 8:49 AM
China's open-source AI models, like DeepSeek V3, are no joke. They’re outperforming Llama 2 and even rivaling GPT-4o in reasoning and coding tasks. 🤯 But here’s the real question: Are these open models just the tip of the iceberg? How much stronger are their proprietary ones? 🤔 #AI #DeepSeek
December 31, 2024 at 9:59 PM