Fahim in tech
@fahimintech.bsky.social
🤖 Software dev by day, AI & crypto enthusiast by night
⭐ I love sharing all the new tech developments with you
📫 DM me for collaboration
AI MARKET WATCH --> https://t.co/WMNyhRhSAO
5/ TL;DR: Midjourney V1 Video is sick for creators: easy motion, good vibes, and a new lane between AI art and animation. But keep it original, or the copyright cops might slide into your DMs 🚨
June 23, 2025 at 12:54 PM
4/ But here’s the drama: people started animating Mickey, Elsa, Wall-E… with guns, swords, and drones, and now Disney + Universal are suing. Midjourney’s in hot water over IP rights again. So maybe don’t animate Shrek robbing a bank. Just saying 😬
June 23, 2025 at 12:54 PM
3/ It’s about 8x more expensive than a regular image, so yeah, watch those GPU hours. It works in Fast mode for Basic users, but Pro/Mega users get Relax mode too (unlimited if you’re patient). Basically, it’s AI animation on a budget ⏳💸
June 23, 2025 at 12:54 PM
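Quick back-of-napkin math on that 8x figure. Only the multiplier comes from the thread; the minutes-per-image number below is my own placeholder for illustration:

    # Rough Fast-mode budget check: a video job burns ~8x the GPU time
    # of an image job (per the thread). Minutes-per-image is assumed.
    IMAGE_JOB_GPU_MIN = 1.0       # assumed: ~1 GPU-minute per image job
    VIDEO_MULTIPLIER = 8          # from the thread: ~8x a regular image

    def fast_hours(images: int, videos: int) -> float:
        """Approximate Fast-mode GPU-hours for a batch of jobs."""
        minutes = images * IMAGE_JOB_GPU_MIN + videos * IMAGE_JOB_GPU_MIN * VIDEO_MULTIPLIER
        return minutes / 60

    print(f"{fast_hours(20, 5):.2f} GPU-hours")   # 20 images + 5 videos ~= 1.00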
2/ You get two motion styles: Low Motion = chill breeze vibes, while High Motion = more dramatic, with the subject and camera moving around like you’re directing a short film. Bonus: you can write your own motion prompts if you wanna get extra cinematic 🎬
June 23, 2025 at 12:54 PM
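If you go the custom route, the move is to describe the motion, not the look (your still image already covers the look). A made-up example of a motion prompt, not an official Midjourney template:

    slow dolly-in on the subject, fog drifting left to right, gentle handheld sway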
5/ TL;DR: MiniMax-M1 isn’t just a model drop, it’s a full open agent stack with top-tier reasoning, a massive 1M-token context window, and no vendor lock-in. if you’re a dev or researcher? this one’s your new playground 🎡
June 22, 2025 at 3:06 PM
4/ and get this: it also ships with an AI agent. it can search the web, execute code, build apps or decks, and work like a real assistant. it’s like if GPT-4 + Copilot had a hacker baby and it decided to go open source for the culture 🧑‍💻✨
June 22, 2025 at 3:06 PM
3/ in benchmarks, M1-80k is crushing it: 86% on AIME math (that’s elite-tier), plus strong tool use, coding, and long-context retention. it’s giving "serious Claude 3 vibes" but open source and ready to run on your own infra 🔓
June 22, 2025 at 3:06 PM
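since the weights are actually open, here’s a minimal sketch of the "run it on your own infra" part using vLLM. the Hugging Face repo id and the GPU count are my assumptions, so check the model card before copying:

    # Minimal vLLM serving sketch for MiniMax-M1 (assumptions flagged inline).
    from vllm import LLM, SamplingParams

    llm = LLM(
        model="MiniMaxAI/MiniMax-M1-80k",  # assumed HF repo id; verify first
        trust_remote_code=True,            # custom architecture likely needs this
        tensor_parallel_size=8,            # shard across 8 GPUs; adjust to your rig
    )

    params = SamplingParams(temperature=0.7, max_tokens=512)
    out = llm.generate(["Prove that sqrt(2) is irrational."], params)
    print(out[0].outputs[0].text)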
2/ the game changer here is “Lightning Attention”, a new trick that slices compute for long docs down to about 25%. basically, it reads a whole book and doesn’t melt your GPU. throw in smart reinforcement learning and boom, it can do math, code, AND multi-turn reasoning in one go 🧠
June 22, 2025 at 3:06 PM
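for the curious, the compute savings come from linear-attention math. a toy illustration of the core trick only, not MiniMax’s actual kernel (real implementations add feature maps, normalization, and fused kernels, and M1 also interleaves regular softmax attention):

    # Linear attention's core idea: compute Q @ (K^T V) instead of (Q K^T) @ V,
    # so cost scales with sequence length n instead of n^2.
    import numpy as np

    n, d = 100_000, 64                    # long sequence, small head dim
    Q = np.random.randn(n, d).astype(np.float32)
    K = np.random.randn(n, d).astype(np.float32)
    V = np.random.randn(n, d).astype(np.float32)

    kv = K.T @ V                          # d x d summary, O(n * d^2)
    out = Q @ kv                          # n x d, still O(n * d^2)
    # naive attention would materialize an n x n matrix first: at n = 100k
    # that's 10^10 entries, which is the part that melts GPUs.
    print(out.shape)                      # (100000, 64)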
If that is the case, then contributors should be compensated for their work.
"Adobe has a license to use contributor work through Adobe Stock. When contributors upload their work to Adobe Stock, they grant Adobe a license to distribute and sublicense that content to users."
"Adobe has a license to use contributor work through Adobe Stock. When contributors upload their work to Adobe Stock, they grant Adobe a license to distribute and sublicense that content to users."
June 21, 2025 at 5:49 PM
5/ Adobe Firefly is officially not mid anymore. it’s cross-platform, super versatile, easy to use, and actually creator-friendly. if you’ve ever felt stuck between “fun” AI tools and “pro” apps, Firefly is finally both ⚡️
June 21, 2025 at 1:43 AM
4/ best part? Firefly is trained only on licensed Adobe Stock and public-domain content, so everything you generate is commercially safe. it even embeds “Content Credentials” to track whether an image was made with AI. no weird copyright headaches 🔐
June 21, 2025 at 1:43 AM
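you can actually inspect those Content Credentials yourself with c2patool, the open-source CLI from the Content Authenticity Initiative. a quick sketch from Python, assuming c2patool is installed and on PATH and that a plain "c2patool <file>" invocation prints the manifest as JSON (my reading of its docs):

    # Peek at an image's C2PA Content Credentials via the c2patool CLI.
    import json
    import subprocess

    def read_credentials(path: str):
        """Return the C2PA manifest dict, or None if the file has none."""
        result = subprocess.run(["c2patool", path], capture_output=True, text=True)
        if result.returncode != 0:
            return None                   # no manifest embedded, or tool error
        return json.loads(result.stdout)

    manifest = read_credentials("firefly_output.png")
    print("provenance found" if manifest else "no credentials embedded")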
3/ the new Firefly Boards are basically an infinite canvas for brain-dumping ideas, remixing content, and collabing with your team. and yep, it’s on mobile: you can literally ideate and edit campaigns from your couch while bingeing anime 📱🍜
June 21, 2025 at 1:43 AM
2/ you can generate high-res art from text, animate images, remix vectors, even generate videos with camera angles and styles. Firefly now lets you pick outputs from third-party models like OpenAI, Pika, Luma, Ideogram, Runway, etc. one app to rule them all 😮‍💨
June 21, 2025 at 1:42 AM
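Firefly is also scriptable through Adobe’s Firefly Services REST API. big caveat: the endpoint path and payload below are my best recollection of Adobe’s docs, so treat this as a shape-of-the-request sketch and verify on developer.adobe.com before building on it:

    # Hedged sketch of a Firefly text-to-image call (endpoint + payload assumed).
    import requests

    API_KEY = "your-client-id"            # placeholder, from the Adobe dev console
    ACCESS_TOKEN = "your-oauth-token"     # placeholder, via Adobe's OAuth flow

    resp = requests.post(
        "https://firefly-api.adobe.io/v3/images/generate",  # assumed endpoint
        headers={
            "x-api-key": API_KEY,
            "Authorization": f"Bearer {ACCESS_TOKEN}",
            "Content-Type": "application/json",
        },
        json={"prompt": "isometric city at dusk, soft film grain"},
        timeout=60,
    )
    resp.raise_for_status()
    print(resp.json())                    # should include links to generated images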
5/ SkyReels is an open-source video beast: cinematic quality, talking portraits, infinite scenes, if you’ve got the gear and some patience. if you’ve ever said “I wish I had Runway but free,” this is your sign to go build something wild 📽️🔥
June 20, 2025 at 1:46 AM
4/ benchmarks? it smokes other open models (VBench score of 82.4), and V2 rivals some closed stuff like Sora in terms of coherence. plus, the “SkyCaptioner” module makes generated content smarter by analyzing scenes for context in real time 🤯
June 20, 2025 at 1:46 AM
3/ you can run it yourself, it’s all on GitHub: fully open source, weights + scripts included. just be warned: V2 wants 40GB+ VRAM and eats GPUs for breakfast. but if you’re a dev, researcher, or hardcore builder… it’s like finding the cheat codes to gen video.
June 20, 2025 at 1:45 AM
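before you clone anything, do a pre-flight VRAM check. small sketch with PyTorch; the 40GB figure is the thread’s rough number, and the SkyworkAI/SkyReels-V2 repo path is my assumption, so confirm it on GitHub:

    # Sanity-check your GPU before pulling tens of GB of video-model weights.
    import torch

    REQUIRED_GB = 40                      # the thread's rough V2 requirement

    if not torch.cuda.is_available():
        raise SystemExit("no CUDA GPU found; SkyReels V2 needs one")

    total_gb = torch.cuda.get_device_properties(0).total_memory / 1024**3
    if total_gb < REQUIRED_GB:
        print(f"only {total_gb:.0f} GB VRAM; V2 will likely OOM, try V1 or offloading")
    else:
        print(f"{total_gb:.0f} GB VRAM, you're good to clone and go")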
2/ the stack is wild:
• V1 does high-quality human video
• A1/A2 animate faces + build scenes
• Audio syncs speech + lip motion
• and V2? it’s the boss level: it lets you make seamless long-form videos with "Diffusion Forcing" (aka no janky transitions 🎬; see the toy sketch below)
June 20, 2025 at 1:45 AM
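the promised toy sketch for that "Diffusion Forcing" bullet: the idea, from the research paper of the same name, is that every frame carries its own noise level instead of the whole clip sharing one, so a sliding window can keep near-clean past frames as context while the newest frames are still noisy. conceptual only, not SkyReels’ actual code:

    # Per-frame noise levels for sliding-window video generation
    # ("Diffusion Forcing" idea, heavily simplified).
    import numpy as np

    def window_noise_schedule(num_frames: int, start: int, window: int) -> np.ndarray:
        """One step's noise levels: clean history, ramping noise in the window."""
        levels = np.zeros(num_frames)         # frames behind the window: clean context
        end = min(start + window, num_frames)
        # inside the window, ramp from near-clean to pure noise
        levels[start:end] = np.linspace(0.2, 1.0, end - start)
        return levels

    for start in range(0, 9, 2):              # slide the window along 12 frames
        print(start, np.round(window_noise_schedule(12, start, 4), 2))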