La Vita è Bear
@b.yuxuan.org.web.brid.gy
Yuxuan "fishy" Wang's new blog
Samsung's design issues
Or, why I don't like Samsung's products.

## Example 1: Emoji

In Chinese there's a saying, “画蛇添足”. Its literal meaning is "draw a snake, then add legs". We use it for people who do extra things that end up being wrong. It's the perfect phrase for this Samsung example.

It was 2017, long before Covid, and I was still commuting by Caltrain plus bicycle 3 days a week. One day, on my evening commute, I messaged my wife that I'd be arriving home at roughly 6:30pm. When typing that message, Gboard suggested the 6:30 clock face emoji. I didn't use that emoji, because I noticed that the emoji on Android was wrong (and also she's not very good at reading analog clock faces).

Noticing it was wrong piqued my curiosity, so I checked Emojipedia to see whether other platforms' emojis were also wrong. To my surprise, most of them were, so I took a screenshot on Emojipedia and tweeted about it. I'm not going to link my tweet here, but here's the screenshot I took back then:

Screenshot of the 6:30 emojis in 2017

Why are they wrong, you might ask? On most analog clocks, the hour hand does not just "jump" every hour. It gradually advances every minute, or even every second. So, at 6:30, the hour hand should be halfway between 6 and 7, instead of pointing directly at 6. In this screenshot, only Microsoft got it right, and every other big company was wrong.

But why do I pick on Samsung here? Because in this screenshot Samsung was _extra_ wrong. They added a second hand, pointing at roughly 50 seconds. People use this emoji to mean 6:30, not 6:30:50. If you really want to add a second hand, point it at 0.

To be fair, Samsung is no longer _extra_ wrong _today_; they are just as wrong as the others. I checked Emojipedia again in 2022 and tooted about it:

Screenshot of the 6:30 emojis in 2022

Basically: Google fixed it. Samsung is still wrong, but at least they removed the second hand, so they are just as wrong as the others, no longer extra wrong. Microsoft somehow decided that being correct is boring and joined the dark side. (Today the correctness of this type of emoji among the big companies is roughly the same as in 2022.)

## Example 2: VESA mount on monitors

A few years ago I bought a Samsung monitor to use on my standing desk. On my standing desk I use a monitor arm, so when purchasing monitors I make sure they have a VESA mount. The Samsung monitor I bought does have one, according to its technical specs. But what the specs didn't say is that there's a rounded/arched cutout around the VESA mount. VESA mount plates from monitor arms are rectangles. Rectangles have 4 corners. At least 2 of those corners don't really fit into the round cutout of the Samsung monitor. So in the end, I had to pad the 4 mounting screws to lift the mounting plate out of the cutout, like this:

VESA mount on a Samsung monitor

Funnily, the Wikipedia page for the VESA mount says it's also called the "Flat Display Mounting Interface". It has "flat" in its name, but Samsung decided that being flat is boring and needed to screw with that.

## Example 3: Kitchen appliances

Fast forward to 2025. We just bought a new house from a builder. The builder put Dacor appliances in the kitchen, and Dacor is _supposed_ to be Samsung's "high-end" kitchen appliance brand. Being a Samsung brand, they support SmartThings via WiFi (vs. Z-Wave/Zigbee/Matter/etc., which need a hub). Both the induction cooktop and the oven have a touchscreen that I can use to connect them to my WiFi directly, without needing any app.
But, contrary to what I (or most people) would think, connecting them to WiFi via the touchscreen doesn't really help with adding them to SmartThings. When you try to add them, SmartThings asks you to scan some QR code. They don't have any QR code stuck on them, and there's no way to make them display a pairing QR code on the touchscreen, despite having a touchscreen. Instead, you have to manually choose to skip QR code scanning in the SmartThings app, wait forever, and then it will instruct you to pair them to your phone manually. And it won't auto-discover them just because they are on the same WiFi as your phone. You have to manually put them into a pairing mode via the touchscreen, which essentially makes them disconnect from the WiFi and enter a hotspot mode; then your phone must connect to their hotspot WiFi (which doesn't provide internet) in order to finish the pairing in the SmartThings app. In the process you need to give them your WiFi information via the SmartThings app on your phone, again.

So what does connecting to WiFi via the touchscreen actually do? Maybe NTP, to get the correct time? I can't really think of any other use.

The dishwasher has a QR code stuck on it, but that QR code is not for pairing, and you still need to make it enter pairing mode. It also lacks the touchscreen, so you need to use some magic key combination to make it enter the hotspot pairing mode.

Then comes the fridge. After you open the door, there's a "platform" with some touch-sensitive buttons you can use to turn the ice maker on/off, adjust the temperature, etc. Those buttons are touch-sensitive, not really physical buttons. They are inside a fridge, which often has condensation. Touch-sensitive things hate water, because to them water looks very similar to fingers. Those buttons are very hard to press correctly because of that.

And after adding all of them to SmartThings, when I click any of them in SmartThings it just takes forever to load and doesn't give me any useful controls. So I just added them to my Home Assistant via the SmartThings integration, and I immediately got useful actions and data for them in Home Assistant. At least I never need to touch any of the buttons inside the fridge again, because all of those controls are available inside Home Assistant.

## Conclusion

I really don't like Samsung's product design.
b.yuxuan.org
January 19, 2026 at 6:43 AM
Comment on this post on Fediverse to see your comment on my blog post
I have been using Cactus for the commenting system since migrating to the current blog system (PandaBlog), but that's changing. Recently there were some issues with the backend of Cactus, and in the matrix room it's stated that its shared hosting is currently in an unmaintained state, and the developers' focus has shifted to other projects. While I could still self-host it to keep using it, self-hosting Cactus is quite a pain. So, while I'm sad to see it come to this (I really liked the idea of generating a matrix room for each blog post's comments), I went looking for alternatives.

Since I'm already using Bridgy Fed to bridge the blog posts to the Fediverse (and also the AT protocol), the obvious choice is to bring Fediverse comments back to the blog posts, so I made a feature request on the project. Ryan, the author of Bridgy Fed, pointed out that it's already possible via Webmention, a W3C standard that's kind of like pingback from 20 or so years ago. Having a web standard for this feature is certainly nice! But implementing it also means I have to handle and _store_ all the comments/interactions received in some kind of database, which would be quite a pain for PandaBlog, or other statically generated blog systems (like Jekyll). Luckily, as Ryan pointed out, there are also long-running 3rd party systems you can use to handle webmentions for you, the most notable one being Webmention.io.

So I implemented support for Webmention.io in PandaBlog yesterday. Due to the nature of using a third-party system, a lot of things cannot be tested locally, so I had to do it kinda publicly, in 3 steps:

## Auth

The first step is to be able to auth with webmention.io, so I can create a dashboard there and start receiving webmentions: `1aec408`. The authentication they support is called IndieLogin, so what that commit does is add an IndieLogin URI to the configuration of the blog, and add that data to the pages, so Webmention.io can get that data and do the auth.

## Handle webmentions

After authenticating with Webmention.io, the next step is to start using Webmention.io to handle webmentions on the blog posts: `cf0bd86`. After that's deployed, when there are new interactions with a bridged blog post on the Fediverse, the webmentions will be sent to Webmention.io, and it will store them for me (I also tried interactions from the AT protocol, but those are either not sent to Webmention.io or not stored correctly; I don't care much about the AT protocol anyways 🤷). One nice thing about Webmention.io is that it also handles moderation, so you can go to your dashboard and delete spam or otherwise unwanted webmentions. Also important to note is that this obviously can only store webmentions _after_ the blog system enabled Webmention.io to handle webmentions. All the interactions before that were not stored.

## Render webmentions

The final step is to render the webmentions stored on Webmention.io back under the blog posts: `752700e`. For that I used webmention.js. You can see the results at the bottom of the previous post I used for testing:

Screenshot of webmention.js rendered webmentions

So, when you see this post in your Fediverse feed, comment on it there to see your comment show up back on this page.
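As an aside, Webmention.io also exposes the stored webmentions over a read API, which is what a client-side renderer like webmention.js builds on. A quick way to peek at what's been stored for a post, sketched here with a placeholder URL:

```
# List stored webmentions targeting one post (the target URL is a placeholder)
curl --silent "https://webmention.io/api/mentions.jf2?target=https://example.com/some-post/" \
  | jq '.children[] | {type: ."wm-property", author: .author.name, url: .url}'
```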
b.yuxuan.org
January 3, 2026 at 6:46 PM
2025
## Travel

2025 flight map

This year once again surpassed last year as the year I've flown the most, ever. It mostly came from going to the UK three times for football: in the first half of the year to Wembley to watch my team win their first title in decades, and in the second half I bought a four-match Champions League home ticket package (one match left, which I'll go watch in January).

I also flew the Alaska Milkrun this year, crossing off an item on my bucket list.

And when I went to Japan this year I visited Kyushu, where I hadn't been before. Fukuoka feels a bit like Chengdu back in China: it has the convenience of a big city, but people live a very relaxed, not-so-overworked life, and the food is good and distinctive. No wonder it's now a popular city to move to in Japan.

## Movies and TV

I watched sixty-some movies this year; like last year, because of all the long-haul flights, a fairly large share of them were watched on planes. The ones that left the deepest impression were:

* 出走的决心
* Mickey 17
* 0.5 mm
* Materialists
* 破·地狱
* One Battle After Another
* The Baby Assassins trilogy: 1, 2, 3

Also, because I flew so much this year that I eventually ran out of things to watch, I caught up on some once-popular movies I had kept putting off, like 卧虎藏龙, Crazy Rich Asians, Anora, and so on.

On the TV side, 热点 and Common Side Effects were both quite good.

## Entertainment

This year's big hits, Expedition 33 and Silksong, were both quite good. I also played Citizen Sleeper 1 and 2; the series is something of a budget Disco Elysium substitute, but among substitutes its writing is quite good.

I also played a lot of remastered old games this year, like finally being able to play Patapon 1+2 on a handheld again after all these years, plus 逆转裁判 456 and the 逆转检事 collection. At the end of the year I just started the remastered Final Fantasy Tactics.

## Others

This year, after eight full years at my previous employer, I finally made up my mind to step out of my comfort zone and change jobs. Eight years was already my longest record with a single employer.

Both cats are now on prescription food, but at least Kaylee's diarrhea problem is basically under control, which is a big relief.

At the end of the year we bought another house, and got the keys on the last day of 2025.
b.yuxuan.org
January 1, 2026 at 4:02 AM
Get IPv6 working with my ISP and Unifi
Years ago, when I was still at Google, I used Comcast Business as my ISP. They have IPv6 support (at least they did around 2016), and everything just worked with the Google Wifi system.

Later, a small local ISP, Sail, became available in my area, so I switched over. They are better than Comcast at almost everything (just being "not Comcast" is already good enough for me), except one thing: they don't support IPv6.

Or at least, they don't support IPv6 _officially_. Things don't work automatically in either the Google/Nest Wifi system or Unifi. When I talked to their tech support, they confirmed that they don't support IPv6 and don't have a timeline for IPv6 support. But occasionally, when I plug my laptop into the RJ45 cable directly (bypassing the router) to debug some issues, I always get a working IPv6 address on my laptop. Which seems to suggest that they do have IPv6 support _to some level_, just not ready to "officially support it" for customers yet.

So this long weekend I tried to get it working on my Unifi system, and succeeded.

## The setup that worked in the end

Note that this just documents how I got it to work on this particular ISP (Sail). This is pretty much a YMMV situation, and different ISPs would likely require different settings. If your ISP supports IPv6 officially, you should ask them for the configuration instead of following my notes here. 🙂

First, in Unifi's WAN (`Internet`) settings:

* Change `Advanced` from `Auto` to `Manual`
* Change `IPv6 Connection` from `Disabled` to `SLAAC`
* Make sure `IPv6 Type` is `Single Network`
* Make sure `Network` below is `Default` (I do wonder if there's a way to also enable IPv6 for the guest network, though)

After that, I can see an IPv6 address showing up alongside the IPv4 address on my WAN, and LAN (`Networks`) automatically configured the IPv6 part for the `Default` network. My local devices started to get IPv6 addresses with the same prefix as the IPv6 address I got on my WAN, and egress routing seems to work (`traceroute6 ipv6.google.com` works, `curl -6 https://ifconfig.me` shows the IPv6 address of that device, etc.; the verification commands are collected in the snippet at the end of this post).

But ingress routing didn't seem to work. When I traceroute6 to a device's IPv6 address from outside (a server with an IPv6 address I have access to), it ends at the router. It turned out (thanks to Ed) that's because the default firewall rules on Unifi block all traffic originating from the external zone to the internal zone. That makes some sense for IPv4 (does it though? For IPv4 the "internal" zone is all behind NAT so not routable anyways?), but it does not make much sense when you want to run servers on IPv6 in the internal zone.

So I needed a second step: add a firewall rule to allow traffic from the external zone to the internal zone on IPv6. After that, everything works.

## Other setups I tried that didn't work

Before that, I also tried:

* WAN setting as SLAAC with Prefix Delegation, with a guessed PD size of 64: this gets the IPv6 address on the router from the ISP, and local devices can get IPv6 addresses, but IPv6 doesn't seem to be routable.
* I also tried changing the LAN setting to use prefix delegation on IPv6. That changed local devices to get a local IPv6 address instead, but still unroutable.
* In some Unifi forum discussion, someone suggested changing the LAN setting to use a static IPv6, the same one you get on WAN from the ISP, with a netmask. That gave my local devices public IPv6 addresses with the same prefix again, but still unroutable.
* DHCPv6 on WAN didn't get an IPv6 address at all.
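As referenced above, these are the commands (or close equivalents) I used to verify the setup from a LAN device and from an outside host; the addresses shown are placeholders:

```
# On a LAN device: check for a global IPv6 address and working egress routing
ip -6 addr show scope global
traceroute6 ipv6.google.com
curl -6 https://ifconfig.me

# From an external host that has IPv6: check ingress routing to a LAN device
# (2001:db8::1234 is a documentation-prefix placeholder, not a real address)
traceroute6 2001:db8::1234
```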
b.yuxuan.org
June 22, 2025 at 3:06 AM
Alaska Milkrun
Just finished my Alaska Milkrun trip. The flights I took are:

The milkrun stops

There is an overnight stop at Juneau (JNU), as Alaska Airlines no longer sells the whole milkrun as a single flight; it's sold on either side of Juneau now (the other side is direct between Juneau and Seattle or Anchorage). And you are not allowed to buy the whole itinerary between Seattle (SEA) and Anchorage (ANC) anyways.

This was the 737-700 for the first half (SEA-KTN-SIT-JNU) at Seattle airport. Unfortunately I didn't get an angle with the tail number:

The 737-700 for the SEA-KTN-SIT-JNU half

Currently all Alaska milkrun flights are operated by 737-700s, and Alaska Airlines keeps those 737-700s only for milkruns and other small hops inside the state of Alaska (otherwise it's only -800s, -900s, and MAXes). They used to use 737-400 combis for milkruns, a special configuration that's half cargo and half passengers.

This was near the US-Canada border before arriving at the first stop, Ketchikan. The border is that "thin" river at the top left of the photo:

The US-Canada border near KTN

This is a glacier on Baranof Island, where the 2nd stop, Sitka, is. The names of the islands around Sitka sound very Russian, a reminder that the state of Alaska was purchased from Russia:

A glacier at Baranof Island

After Sitka, I arrived at Juneau for the night. My original plan was to go to Tracy's Crab Shack for dinner. I was really looking forward to it. I took my parents to Juneau in 2016 for Glacier Bay National Park, and that was the only meal my dad enjoyed. He's not accustomed to western food, but he can always appreciate crab. Unfortunately, the first half of my milkrun flight was delayed, and when I arrived at Juneau it was already past 10pm and Tracy's Crab Shack was closed.

The next morning, I went back to JNU for the second half of the milkrun. Juneau also has a lot of seaplanes parked there:

A seaplane at JNU

And while waiting for my flight to arrive from Seattle, I also saw a few seaplanes take off from the runway (not from water) there. Before my flight arrived, another milkrun flight arrived from Anchorage at the next gate:

N611AS at JNU

Then, my flight also arrived:

N609AS at JNU

For the majority of the flight between Yakutat and Cordova, the scenery is snow peaks above the clouds, from the mountains of Wrangell-St. Elias National Park (or Kluane National Park on the Canadian side, which from the look of it is the same mountains, just on the other side of the imaginary line):

Snow peaks above the clouds

But there's also a big glacier right before arriving at Cordova:

Glacier just before Cordova

After Cordova, we arrived at the final destination of the milkrun, Anchorage. This is the view of ANC before we landed:

ANC from above

I also took my camera with me, but none of the photos I took through the plane window with it turned out good, so I ended up only using the photos from my phone (except the 2 photos at JNU). At Anchorage I drove my rental car to Potter Marsh to see some birds, and finally put my camera to use. I saw a goose mama with some chicks:

A goose mama with some chicks

And some beautiful magpies (or at least I think they are magpies, I'm not 100% sure):

A magpie at Potter Marsh

The whole album is at Google Photos.
b.yuxuan.org
June 15, 2025 at 12:23 AM
Combine UniFi API with dynamic DNS client
I had been using Google/Nest Wifi routers since the original OnHub days, later upgrading to Google Wifi, then Nest Wifi Pro. While there are a lot of good things about those systems, there are also some really annoying things. The original OnHub had a broken/non-existent NAT loopback implementation, which took them several months to finally fix. Then, when the Nest Wifi Pro was released, NAT loopback was broken again for several months until they finally fixed it. With the Nest Wifi Pro they also switched from the Google Wifi android app to the Google Home app, which often gives you lies (a mesh point is down but the app says everything is fine), stale info, or actions that take several seconds to refresh.

So, earlier this year, we finally had enough and decided to move to other wifi routers. I asked my colleagues for suggestions, and decided to buy into Ubiquiti's UniFi system. In particular, we bought a Dream Router 7 for the main router, and an Express 7 for meshing and providing a wired connection to the PS5 Pro, because the PS5 Pro's wifi chip is not great.

Among the several features it provides, one is setting up VPN clients at the router level and automatically routing traffic from selected devices through the VPN client. This also comes with a challenge, because I run a Linux server at home (an Intel NUC box), with a dynamic DNS client running in an hourly cron job to make sure the domain points to the correct IP (as I don't have a static IP). The dynamic DNS client just gets the "correct" IP by asking an API for its egress IP, so if that request is routed via the VPN, it won't be able to get the correct IP address to set.

Of course I could set routing rules to make sure those requests don't go through the VPN. But domain-based routing rules are brittle with DNS security in use, and IP-based routing rules are infeasible because that would require me to keep an up-to-date list of all the possible IPs the API endpoint can resolve to.

So instead, I turned to another new feature provided by UniFi: their REST API can tell me what my external IP is. I just need to generate an API key from the site manager, then this can be done via a simple `curl` + `jq` pipe:

```
apikey="..."
ip=$(curl --header "X-API-Key: ${apikey}" --header 'Accept: application/json' "https://api.ui.com/v1/hosts" | jq -r '.data.[0].reportedState.ip')
```

And then I can feed `${ip}` to my dynamic DNS client to use.

But things get complicated when you want to use that in cron jobs. In cron jobs, you don't want to output anything to `stdout`/`stderr` when things are OK, as any output will be treated as an error and a mail will be sent to the admin, and I don't need hourly mails. In a cron job you also want it to fail early if things go wrong, so you don't set a wrong IP on your DNS. With all that in mind, the simple one-line pipe needs to be expanded into something like:

```
apikey="..."
curlout=$(curl --silent --show-error --fail --header "X-API-Key: ${apikey}" --header 'Accept: application/json' "https://api.ui.com/v1/hosts" 2>&1)
if [ $? -ne 0 ]; then
  echo "$curlout"
  exit 1
fi
ip=$(echo "$curlout" | jq -r '.data.[0].reportedState.ip')
```

At that point, it no longer makes sense to pipe `curl` + `jq` in shell. It makes more sense to implement that in the dyndns client itself. This way, I can also handle more logic, like filtering through the IPs returned by the API to find the first public v4 IP.
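If I had kept it in shell, that filtering would look roughly like the sketch below. This is only an illustration of the idea: the list field name (`ipAddrs`) is a guess on my part, not the actual shape of the `api.ui.com` response, and the real logic lives in the dyndns client itself.

```
# Hypothetical sketch: pick the first IPv4 address that is not in the
# private (RFC 1918), link-local, or CGNAT (100.64/10) ranges.
# The "ipAddrs" field name is illustrative, not the real API response shape.
ip=$(echo "$curlout" | jq -r '
  [ .data[0].reportedState.ipAddrs[]?
    | select(test("^[0-9]+\\.[0-9]+\\.[0-9]+\\.[0-9]+$"))
    | select(test("^(10\\.|192\\.168\\.|169\\.254\\.|172\\.(1[6-9]|2[0-9]|3[01])\\.|100\\.(6[4-9]|[7-9][0-9]|1[01][0-9]|12[0-7])\\.)") | not)
  ] | first // empty')
```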
In the end, I just added the new UniFi API key to my cron job as another arg:

```
exec /path/to/ddporkbun --apikey="${apikey}" --secretapikey="${seckey}" --unifi-apikey="${unifikey}" --domain="mydomain.com" --subdomain="dyndns" --log-level=ERROR
```
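And for completeness, that wrapper is what the hourly cron job runs; the crontab entry looks something along these lines (the wrapper path and the minute are illustrative, not my actual setup):

```
# Run the dynamic DNS update once an hour
17 * * * * /path/to/dyndns-cron-wrapper.sh
```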
b.yuxuan.org
June 8, 2025 at 4:33 AM
Finally, an external trackpad that's not from Apple
I mentioned last year how I almost always prefer trackpads over mice, that Apple was always the only manufacturer of external trackpads, and that my 2 Apple Magic Trackpads both had swollen batteries and needed to be replaced. I preordered one Ploopy trackpad back then; it finally arrived this February, and after using it for a few days I ordered a second one, which also arrived yesterday.

My black Ploopy trackpad

This is a mini review of it, with comparisons to the Apple Magic Trackpad.

The texture/feeling is quite different; the Ploopy trackpad feels rough (as it's 3d printed). I, personally, actually like this feeling.

The Ploopy trackpad does not have a "click" at all (it cannot be clicked), and you always need to use tap-to-click. I always use tap-to-click with Apple's or laptops' trackpads anyways, so I don't mind that. But if you don't like tap-to-click, then this is likely a dealbreaker for you.

This might be related to the lack of click, or to the rough texture; I haven't figured out exactly why yet, but I noticed that with the Ploopy trackpad, when I use tap-to-drag over a longer distance, I'm more likely to fail (the drag stops midway). So far this is only a minor annoyance for me. I guess it would likely work better if I used the stylus that comes with it to do tap-and-drag (yes, it comes with a stylus stowed inside), but I need to retrain my muscle memory to use the stylus for that 😅.

Sometimes, when I use two-finger scrolling, it will mistake that for pinch-to-zoom and zoom the page instead. This is also only a minor annoyance for me so far.

The final one actually also applies to Apple's trackpad, but it's just more obvious on the Ploopy one as it has no click: when I use it with my linux server, I noticed that it actually does not support tap-to-click out-of-the-box, even after I turned on that setting in KDE's system settings. I hadn't noticed that issue with Apple's trackpad previously because I mostly just use the linux box as a server and rarely use the GUI on it. This is also likely an X11-only issue (it likely already works out-of-the-box with Wayland), but Wayland broke on my linux server sometime last year and I haven't dug into that yet. Anyways, following the Debian wiki, I added this config file and it fixed the issue:

```
$ cat /etc/X11/xorg.conf.d/40-libinput.conf
Section "InputClass"
        Identifier "libinput touchpad catchall"
        MatchIsTouchpad "on"
        MatchDevicePath "/dev/input/event*"
        Driver "libinput"
        Option "Tapping" "on"
EndSection
```

Overall I'm happy with it (otherwise I wouldn't have bought a second one). Now I am finally free of Apple's trackpads and lightning cables.
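As an aside, on X11 the same tapping option can usually also be toggled at runtime via xinput, without editing the config file and restarting X. A rough sketch (the device name is whatever `xinput list` reports for your trackpad, not necessarily what's shown here):

```
# Find the trackpad's device name/id
xinput list

# Enable tap-to-click for it at runtime (device name is illustrative)
xinput set-prop "Ploopy Trackpad" "libinput Tapping Enabled" 1
```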
b.yuxuan.org
March 6, 2025 at 4:36 PM