Jonathan Solichin
@jsolichin.bsky.social
if you think about it, that street sign is augmented reality. 🇮🇩🇺🇸. Currently @snapchat; Previously @Akamai, @UCLA, @GSOC, @ohDMA. My tweets are my own
👓P.s. Did you catch the name? Bifocals: glasses with two perspectives, an alternative to Monocle, get it??
Follow me for more tech inspired ideas, and bad jokes!
January 29, 2025 at 5:27 AM
👶Overall, I think this is a good start. BUT, it makes me wonder how to make exploration even easier. E.g. change criteria and attributes of interest, branch destinations based on exploration, change info fidelity depending on interest. WDYT?
January 29, 2025 at 5:27 AM
🐒Lastly, we can combine these 2 sources to figure out what’s realistic for a daytrip! #GPT can tell us how long we should stay, and #GoogleMaps can tell us how long it takes to get there. A trip to Jigokudani Park would probably take at least 5 hours. So no monkey business!
January 29, 2025 at 5:27 AM
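The feasibility check above is just arithmetic: round-trip travel time from Maps plus the stay duration GPT suggests. A minimal sketch, with illustrative numbers (not real API output):

```python
# Hypothetical daytrip sanity check, as in the Jigokudani Park example.
# The 1.5h travel and 2h stay figures are made up for illustration.

def daytrip_hours(travel_hours_one_way: float, stay_hours: float) -> float:
    """Total commitment = round-trip travel plus the recommended stay."""
    return 2 * travel_hours_one_way + stay_hours

total = daytrip_hours(1.5, 2.0)
print(f"Plan for at least {total:.0f} hours")  # prints "Plan for at least 5 hours"
```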
⏱️GMaps can also give us information that not even the most curated guidebooks can provide, like recency! Here we can see both day/night perspectives & learn unique details like the experience of hummingbirds at this specific park.
January 29, 2025 at 5:27 AM
🥼With #ChatGPT, we can generate a list of things to do, and tailor the list based on specific interests. I love historical fun facts because I believe they enrich experiences beyond "everyone's doing it"--so that's in my specific request.
January 29, 2025 at 5:27 AM
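Tailoring the request is really just templating your interests into the prompt. A toy sketch of that idea; the city, interest list, and wording are all hypothetical:

```python
# Build a tailored itinerary prompt, the way the post describes
# folding "historical fun facts" into the request.

def build_itinerary_prompt(city: str, interests: list[str]) -> str:
    wants = ", ".join(interests)
    return (
        f"List 5 things to do in {city}. "
        f"Tailor the suggestions to these interests: {wants}. "
        "Include a historical fun fact for each."
    )

prompt = build_itinerary_prompt("Nagano", ["historical fun facts", "nature"])
```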
📸With #LLM, it’s trivial to get custom plans based on your interest. E.g. To make my 100th trip to Las Vegas more interesting, I asked AI to recommend weird and unique things to do. However, while this gives good destinations, it doesn’t give context: pictures/reviews/etc.
January 29, 2025 at 5:26 AM
Ah and I can’t believe I forgot to thank @joulesqrd and @maxgoedjen who gave so much great feedback as I was learning the ropes! Hope y’all forgive me 🙇
January 29, 2025 at 5:26 AM
P.s It’s amazing how fast #webdev has gotten. @GoHugoIO makes a static website super ez. More insane is @digitalocean's deploy from github. I’ve been with DO for 6yrs, but this is my 1st time using their Apps service and it took all the headaches out and gave me https in return!
January 29, 2025 at 5:26 AM
I’d ❤️to hear what you think about these budgeting/design ideas! For me: we don't have to automate everything; software can be about collaboration b/w humans & machines. Learn more of my thoughts here https://t.co/9MnGQLshoH and get the free app: https://t.co/NWuYlKl5KF!
January 29, 2025 at 5:26 AM
On design: in today’s world of flat icons, I was racking my brain for ways to make something stand out. Thanks to @Blender for making it easy to build unique designs quickly. Since it uses PBR, we get details such as light/color reflections on surfaces that traditional software wouldn't.
January 29, 2025 at 5:26 AM
Using an #iOS widget, I can keep my budget top of mind and stay accountable. One discovery is that using the target percentage, I can get a quick summary of how much I’ve spent without the stress of seeing $. If you see 250% and your budget is $200, then you know you’ve spent $500.
January 29, 2025 at 5:26 AM
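The percentage trick above is a one-line conversion: the widget shows percent of budget, and you can recover dollars whenever you want. A minimal sketch:

```python
# Recover dollars spent from the widget's target percentage.
# Matches the post's example: 250% of a $200 budget.

def spent_from_percent(percent: float, budget: float) -> float:
    return percent / 100 * budget

spent_from_percent(250, 200)  # returns 500.0
```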
TBH I also never really figured out how to set your budget: what’s the right limit? A while back, @GrahamStephan recommended “The Richest Man in Babylon” (Clason, 1926) & I liked this section of the book in particular. While sometimes clickbaity, Graham does have salient tips.
January 29, 2025 at 5:25 AM
I figured that by making the process more manual, we can leverage reinforcement learning to improve our spending habits. When the app opens, you get a keyboard to type, then you press “enter” by choosing a category. No bank linking, AND you become an active participant!
January 29, 2025 at 5:25 AM
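The type-then-categorize flow can be sketched as a tiny data model, where choosing a category doubles as the "enter" key. The `Entry` type and category names here are made up for illustration, not the app's actual code:

```python
from dataclasses import dataclass

@dataclass
class Entry:
    amount: float
    category: str

def log_expense(typed: str, category: str) -> Entry:
    """The category tap both commits the typed amount and labels it."""
    return Entry(amount=float(typed), category=category)

entry = log_expense("12.50", "Food")
```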
That’s it for now! Let me know what you think :) P.s. why yes, this whole thing is just an excuse to make yet another keyboard :p
January 29, 2025 at 5:25 AM
Protip: this was also done with a Behavior script drop-down! Since we’ve tracked the world accurately with our various trackers, we can simply use #physics collisions to know whether the tip of the soldering pen has touched a certain key switch’s lead!
January 29, 2025 at 5:25 AM
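Under the hood, that kind of touch test reduces to a collider overlap check. Lens Studio's physics engine handles this for you; here's the core idea in plain Python, with made-up positions and radii, modeling both colliders as spheres:

```python
import math

def spheres_touch(p1, r1, p2, r2) -> bool:
    """Two sphere colliders overlap when the distance between
    their centers is at most the sum of their radii."""
    return math.dist(p1, p2) <= r1 + r2

# Hypothetical: solder tip (0.2cm radius) near a lead collider (0.15cm radius)
spheres_touch((0, 0, 0), 0.2, (0, 0, 0.3), 0.15)  # returns True (0.3 <= 0.35)
```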
You might have noticed that the colors of the balls representing the switches’ leads changed after some contact with the soldering pen. I thought this might be useful to help me remember which leads might be hot, providing visual feedback for something that’s hidden in the real world.
January 29, 2025 at 5:24 AM
Now that the keyboard is not using an Image Marker, I can use the Image Marker to track the soldering pen!
January 29, 2025 at 5:24 AM
After all, big objects far away can look like small objects nearby!
January 29, 2025 at 5:24 AM
I used the “Extended Marker Tracking” Asset, which leaves an object tracked by an image marker where it is, even when the image has disappeared. The tip here is to size the marker correctly (in cm) so that the digital twin is placed in the right spot.
January 29, 2025 at 5:24 AM
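Why the cm sizing matters: if the marker's real-world print size differs from the size it was authored at, everything anchored to it lands offset or mis-scaled by that ratio. A toy sketch of the relationship (the function and numbers are illustrative, not a Lens Studio API):

```python
def marker_scale(physical_width_cm: float, authored_width_cm: float) -> float:
    """If the printed marker is larger than the authored size,
    anchored content must scale by the same ratio to line up."""
    return physical_width_cm / authored_width_cm

marker_scale(8.0, 4.0)  # returns 2.0: printed twice as large as authored
```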
When soldering a circuit, you don’t want to apply the heat for too long, lest you burn something. Maybe AR can also help my terrible skills? I added some real-time feedback on how long I've been on a lead, as well as historical feedback (bar chart) to track how consistent I am.
January 29, 2025 at 5:24 AM
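The dwell-time feedback above boils down to timing each touch/release on a lead and keeping a history for the bar chart. A minimal sketch of that state machine, with a made-up 3-second "too long" limit:

```python
class SolderTimer:
    """Track how long the pen dwells on a lead, plus a history
    of past dwells for consistency feedback (the bar chart)."""

    def __init__(self):
        self.history: list = []   # completed dwell durations, in seconds
        self._start = None        # timestamp of the current contact, if any

    def touch(self, t: float):
        if self._start is None:
            self._start = t

    def release(self, t: float):
        if self._start is not None:
            self.history.append(t - self._start)
            self._start = None

    def is_too_long(self, t: float, limit: float = 3.0) -> bool:
        """Real-time warning: still touching and past the limit?"""
        return self._start is not None and (t - self._start) > limit

timer = SolderTimer()
timer.touch(0.0)
timer.is_too_long(4.0)  # returns True: 4s on the lead, over the 3s limit
timer.release(4.5)      # history now holds one 4.5s dwell
```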
Since we’re dealing with hands and real objects, it also felt natural to pretend to press buttons. In this case “two finger press” to turn off x-ray mode to get a cleaner look at our keyboard. This thread is really just about pretending #oscar #bait
January 29, 2025 at 5:24 AM
Fun fact! Most of this project is done with drop-downs :) Specifically, you can use a Behavior script to respond to most computer vision events. E.g. we can respond to hand tracking detecting an “index_finger” by disabling the “wireframe” layer.
January 29, 2025 at 5:23 AM
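The Behavior pattern described above is just event-to-action wiring: pick a trigger in one drop-down, a response in another, no per-interaction code. A toy sketch of that dispatch idea in Python; the event string and layer name mirror the post but aren't real Lens Studio identifiers:

```python
# Minimal event -> action registry, Behavior-script style.
responses = {}

def on(event: str, action):
    """Register a response for a named trigger (a drop-down pick)."""
    responses.setdefault(event, []).append(action)

def trigger(event: str):
    """Fire every response wired to this trigger."""
    for action in responses.get(event, []):
        action()

layers = {"wireframe": True}
on("hand_tracking:index_finger", lambda: layers.update(wireframe=False))
trigger("hand_tracking:index_finger")  # layers["wireframe"] is now False
```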