By popular request, here's how to add Bluesky replies as your blog's comment section! This requires some technical know-how for now, but I'm hoping that we see some no-code solutions for this pop up soon, like Ghost or Wordpress plugins. emilyliu.me/blog/comments
November 25, 2024 at 5:49 AM UTC
Stream
-
I Wish We Had Tear-Off Hotbars
One of the major reasons I generally avoid screenshots/videos in my posts is how cumbersome they are to take. And I wish I could kinda "tear off" commands from different apps and place them onto a temporary menu bar.
For example, when I was writing the post All The Menu Bar Weather Apps for MacOS, I really wanted to convert all the screenshots to text. Why, you might ask? Because the workflow requires so many clicks, in order.
So, I usually have Clop's optimize-on-clipboard option turned on. But given this required stitching different images together, I didn't want to optimize each image and have it progressively get worse. So, this is what it looked like —
Pause auto-optimizations in Clop, then show desktop in Supercharge. Use Shottr to capture any window (note that this doesn't create a file). Copy the image in Shottr, then capture additional windows (like settings or preferences). Open clipboard history in Raycast (which doesn't have a paste stack) multiple times to paste all images into the last Shottr image, turn optimization back on in Clop, and copy the final image from Shottr.
Oh, and now I want to generate image captions; here is what that looked like —
First, paste the image into Notion, then open ChatGPT and paste the image into ChatGPT to generate a caption. Copy the generated caption into Notion, replace line breaks using Shift+Enter, and finally cut and paste the text into the image caption field that is opened by a click on the caption button.
I wish there was a way to create a workflow hotbar on the go, one where I could put all the actions I need to take across various apps into one place, and then discard the hotbar. Sure, I can assign keyboard shortcuts to all these actions, but I may never need to use these actions again, and if I do, I am not going to remember the shortcuts I assigned to them. I don't even know if tear-off hotbars are possible in MacOS, but I wish they were.
-
Safari has an Inbuilt Link Preview
Remember how I was excited about the preview options in Arc, and then when I stopped using Arc, I shifted to Chrome and found an extension called MaxFocus to give me the same preview options (read more here: Link Previews in Chrome using MaxFocus)? Well, Safari has an inbuilt option for link previews (and I learnt about it here): you can use force touch on your trackpad, or press ⌃+⌘+D while hovering over the link (similar to bringing up the dictionary), to bring up the preview popup. That is awesome! Honestly, there are so many shortcuts with ^ and ⌥ that I am still learning them (see other shortcuts you might like here).
-
Bluesky comments now work on Webtrotion
Two days ago, I added this comment to Making Webtrotion:
After Emily's post, there was a flurry of efforts to use Bluesky as a commenting system. I am actually surprised more people didn't use Twitter as a commenting system when their API was free for all. That led to Cory creating a package, which was React based. So, I tried to find out whether people had previously added Bluesky comments to Astro, and of course they had: see graysky, snorre (also has the option to reply from the comment section, but needs a Bluesky app password; I'll revisit this after OAuth), and Jade's (statically builds comments, and doesn't update in real time). Then I found Lou's post here and was relieved to know that you can do this without React. I ended up using the version by Matt here (the person whose code I am also using in the Bluesky post embeds now work on Webtrotion post). The good part about Matt's version is that it has an option to search for the did:plc:{yy} format, instead of expecting people to find the exact at:// URI format. And lastly, I used this post from Jack to auto-search a post (and the idea of using an echo-feed-emoji). There is a React version of this (modified from Cory's version) here by Nico.
Update Nov 27, 2024, 04:20 AM: I saw this cool trick by Garrick here to filter out hidden, blocked or not-found replies and added that to the script.
All this together, and Webtrotion now has its Bluesky commenting system. I am not going to add an explicit URL here, to show that auto-search works, whereas I did add an explicit URL to the Bluesky post embeds post.
How to use Bluesky Comments in Webtrotion
Remember, these are currently read-only comments; interaction needs to happen on Bluesky (there is a handy-dandy link added on top of the comments). Once Bluesky has OAuth, I'll try to make it so that people can also comment through your webpage.
Step 1: If you are just now duplicating the template here, you should be fine. If you have already been using Webtrotion, add a property called Bluesky Post Link to the database, with type URL. Make sure that the property name matches.
Step 2: In your constants-config.json file, you will see this:
"bluesky-comments": {
  "show-comments-from-bluesky": false,
  "auto-search-for-match": {
    "turn-on-auto-search": false,
    "author": "",
    "echo-feed-emoji": ""
  }
}
- show-comments-from-bluesky decides whether you want to show a Bluesky comment section at all. It is false by default, but if you turn it to true, the Bluesky comment rendering script will be executed.
- Now, whichever link you paste into Bluesky Post Link for a post, that thread will be used to render the comments on your post.
- If we used this system as is, given Webtrotion builds every 8 hours (configurable), we would need to wait for the site to build and publish the post, then make a post on Bluesky, then copy that URL and paste it into the Bluesky Post Link field, and wait for it to build again. auto-search-for-match gets around this: it automatically searches your profile for posts that mention the link (only parent posts, not replies); see the sketch right after this list.
  - turn-on-auto-search decides whether you want auto-search on or off.
  - author is your Bluesky handle. You can specify this as a handle (do not include the @ sign) or as the did: protocol value. If you do not specify a handle here, auto-search will not be executed.
  - echo-feed-emoji: only searches posts that mention the echo-feed character. If you set it to empty, it will search all your parent posts.
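To make the auto-search idea concrete, here is a minimal TypeScript sketch of how it could work against Bluesky's public AppView. This is an illustration, not Webtrotion's actual script: the helper name findMatchingPost is made up, and the endpoint and parameters are the ones documented for app.bsky.feed.searchPosts.

```typescript
// Hedged sketch of auto-search: find a parent post by `author` that mentions this page's URL
// (and, optionally, the echo-feed emoji), then return its at:// URI for comment rendering.
const APPVIEW = "https://public.api.bsky.app/xrpc";

async function findMatchingPost(
  postUrl: string,
  author: string,
  echoEmoji: string
): Promise<string | null> {
  if (!author) return null; // auto-search is skipped when no handle/DID is configured

  const params = new URLSearchParams({ q: postUrl, author, limit: "25" });
  const res = await fetch(`${APPVIEW}/app.bsky.feed.searchPosts?${params}`);
  if (!res.ok) return null;

  const { posts } = await res.json();
  const match = (posts ?? []).find(
    (p: any) =>
      !p.record?.reply && // parent posts only, not replies
      p.record?.text?.includes(postUrl) &&
      (echoEmoji === "" || p.record?.text?.includes(echoEmoji))
  );
  return match?.uri ?? null; // an at:// URI the comment script can fetch a thread for
}
```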
-
The variables look like this:
const post_slug = new URL(
  getPostLink(post.Slug),
  import.meta.env.SITE
).toString();
const bluesky_url = post.BlueSkyPostLink || "";
const BLUESKY_COMM = {
  "show-comments-from-bluesky": true,
  "auto-search-for-match": {
    "turn-on-auto-search": false,
    "author": "",
    "echo-feed-emoji": "",
  },
};
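Before the full script below, here is a minimal sketch of the core fetch-and-filter step, assuming the public AppView's app.bsky.feed.getPostThread endpoint; the function name is hypothetical, the threadgate hiddenReplies field is assumed from the lexicon, and this is not the actual Webtrotion script.

```typescript
// Hedged sketch: fetch a Bluesky thread and keep only renderable replies.
// Filters blocked, not-found, and author-hidden replies (the trick from the Nov 27 update).
const APPVIEW = "https://public.api.bsky.app/xrpc";

async function fetchBlueskyReplies(atUri: string): Promise<any[]> {
  const params = new URLSearchParams({ uri: atUri, depth: "10" });
  const res = await fetch(`${APPVIEW}/app.bsky.feed.getPostThread?${params}`);
  if (!res.ok) return [];

  const { thread } = await res.json();
  // Reply URIs the post author hid via a threadgate, if any (field name assumed from the lexicon).
  const hidden: string[] = thread?.post?.threadgate?.record?.hiddenReplies ?? [];

  return (thread?.replies ?? []).filter(
    (node: any) =>
      node?.$type !== "app.bsky.feed.defs#blockedPost" &&
      node?.$type !== "app.bsky.feed.defs#notFoundPost" &&
      !hidden.includes(node?.post?.uri)
  );
}
```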
Here is the latest version of the script:
-
-
Bluesky post embeds now work on Webtrotion
Initially, I hadn't even considered that Bluesky posts wouldn’t embed correctly on Webtrotion. I'd spent lots of time working on tweet embeds to make them look more aligned with Notion's style. Interestingly, Bluesky embeds don't even work in Notion right now—they just show up as cut-off web pages.
Then I saw this post (which I'll embed below) that lets you use Bluesky comments as post comments and shows threads in those comments. That sounded awesome! While implementing that feature will take some time, I tried to mention the post and discovered that embeds actually work. So now Bluesky post embeds work on Webtrotion—yay!
They still don't have dark mode support, and I need to adjust the line spacing and font sizing, but I wanted to share this now, you know, just in case the platform takes off.
Anyhow, the post in question:
How it looks on the web
And how it looks on Notion
How it previously looked on Webtrotion
And now, with the update
-
I tried Stackie
Stackie is a "prompt your own database" app for structured extraction, and I am here for it
As of Nov 5, 2024, Stackie has multiple Twitter posts that could be interpreted as endorsements of Trump. I want to clarify that my testing of this app occurred before I was aware of that position, and it doesn't reflect the values I hold dear.
Stackie (FAQ) is a new app that was suggested to me on my Twitter feed by HeyDola (another app that I use for scheduling from images and text, because it works with WhatsApp and is free). And it looked fun! It is very similar in idea to what I have implemented in Use Notion's Property Description As Text → DB add-itor (the example there being a money tracker) and goes halfway toward what I mentioned in The Components of a PKMS. But this is a self-contained app, has way better UX than hosting your own script, is clean, and somehow really clicked for me, because it comes really close to what I wanted (want?) to make in Trying to Build a Micro Journalling App.
To be honest, Notion's AI properties and Notion's AI add option will get you there pretty often. It is probably more than you would want if all you are looking for is tracking. There have been other apps that do something similar (Hints being the one I can recall off the top of my head), but they all integrate with external apps or are meant for power users or developers (for example, AI add to a Supabase database).
When you open the app, it starts with a baseline inbox database. It comes with its own templates, and ideally you should be prompted to select at least one during onboarding to get a feel for how it works. The templates are prompted databases, where each field can be a date/time, number, boolean, or text. The templated databases and properties are all customizable, which is a huge win!
Once you have created all your "stacks", the entry box lets you type in anything and chooses which stack it most likely belongs to — another affordance I really appreciate. It works with photos too: it understands the text in the photo (so you can capture a snippet of an event you attended if you are tracking all the events you attend in a month) and the objects in the photo — so you can click a photo of a cheeseburger and it will understand that it should go to the calorie-tracking stack and figure out the breakdown of nutrients for that log. And it works with voice, so you can speak and it will transcribe and process that information. It seems to use the internal dictation option, so it doesn't seem to be as good as Whisper (proper nouns are hard, for example), but I might be wrong about their processing mechanism.
It can process into multiple databases and add multiple entries at once! It seems to only be additive at the moment, though: you cannot edit entries through the universal text box (you can go to the entry and edit it there). There is no export option, but that, disappointingly, seems to be the norm for iOS and beta apps. You currently cannot do anything with the data you record like you can in Notion (add it up, set limits, etc.), so it might not be satisfying to use as a habit tracker, and it is hard to get the view of the data you might want, but it is a great starting point. It is what Collections DB could look like with integrated AI. The app is iOS only, so it wouldn't be something I use, but it is definitely something worth looking at.
Some images from the app -
I tried Xylect
Disclaimer: I received a free license key to test this app and post a review. The review content is entirely my own, with no conditions placed on its content or sentiment.
PopClip is the usual app that people use for interactions with selected content on MacOS. Xylect takes it one step further: it aims to function like Google's smart Lens AI feature, trying to predict what someone might want to do with selected text rather than giving you a list of all possible options.
So, let's go through all five (six?) of the currently claimed abilities.
[Expand toggles to see screenshots]
- The summarization feature works fine, but with the new writing tools in the latest version of MacOS, you probably won't need it. You can simply right-click in non-native apps or click the magic button in native apps to summarize content, and it will show up in a pop-up. I know it's an intentional step rather than an automatic pop-up, but I don't think it's worth getting the app just for this feature. That said, Xylect is faster than Apple's summarization, and that time savings adds up when you use the tool multiple times a day.
- I checked out the translation feature, and it seems to send the text to a backend model for translation, which is fine as it includes contextual information. However, it responds with a lot of text, even when I'm trying to translate just a single word. I wish this was more thoughtfully designed.
- The "Add to Calendar" feature works, but it requires a very specific date and time format. It doesn't fill in the title itself; it just creates a Google Calendar or Apple Calendar link based on the time. I'm not a big fan of that. Instead, I use Hey Dola for these purposes. It's free right now, though it might not stay that way. I'll link it here if you want calendaring with text or images, which makes it much easier.
- The spell check feature in Xylect works per word and doesn't understand context. So, while it can correctly spell "enable," it corrects "flght" to "fight" instead of "flight" because it's choosing the nearest correct word rather than the one that fits the context, even though you have the word "tracking" nearby.
- The calculation option for selected math text didn't work for me at all. It seems like it's sending the text as a prompt to a machine learning model and returning the answer, rather than performing the calculation on the backend, so I wouldn't rely on this feature.
- The flight feature is nice, but I often end up using Raycast for flight tracking anyway. Some flights work, some don't. It's integrated with FlightAware, so I'm not sure how much of it is a Xylect issue versus a FlightAware issue. It seems to work for US flights but not for Indian flights, whereas I can use Raycast for both.
-
Your Background Remover May Just Be Setting the Background to Be Transparent
I came across this post on Reddit today where they use BiRefNet for background removal and then Florence for object detection, and the Florence model ends up restoring their background. No, this isn't a training-data issue. TIL: apparently some background removal apps just set the alpha of the background pixels to 0 (fully transparent) — the pixels still have all the original color information in them. And all someone would need to do to restore the image would be to set all pixels to be opaque.
A pretty huge privacy risk, especially for someone like me, who uses this as an alt account
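To illustrate how little it takes to undo that kind of "removal", here is a minimal Node sketch (not code from the Reddit post; it uses the third-party pngjs package, and the file names are placeholders), assuming the exported PNG kept its original RGB bytes and only had its alpha channel zeroed:

```typescript
// Hedged sketch: if a "background-removed" PNG only zeroed the alpha channel,
// flipping every pixel back to opaque reveals the original background.
import { readFileSync, writeFileSync } from "node:fs";
import { PNG } from "pngjs";

function revealHiddenBackground(inputPath: string, outputPath: string): void {
  const png = PNG.sync.read(readFileSync(inputPath)); // raw RGBA bytes, straight from the PNG
  for (let i = 3; i < png.data.length; i += 4) {
    png.data[i] = 255; // alpha byte: make every pixel fully opaque again
  }
  writeFileSync(outputPath, PNG.sync.write(png));
}

revealHiddenBackground("removed-bg.png", "restored.png"); // placeholder file names
```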
-
MacOS Inbuilt Screensharing and Document Templates
On Aug 1, 2024, I saw ProperHonestTech's video on my YouTube timeline titled "Mac owner? STOP doing these 11 things!" — and I don't usually watch these, but this one I did. And I learnt two new things.
- MacOS has inbuilt screensharing
- You can assign documents to behave as templates
Screensharing
You can learn more about screensharing on MacOS here. It seems like it internally uses a FaceTime API, because you see those controls pop up in the menu bar.
[Expand toggles to see screenshots]
Screen Sharing is a hidden application. It doesn't show up in your Applications folder or Launchpad; you need to open the Screen Sharing app using Raycast or Spotlight. It opens a window showing all the connections you have, and you can also create groups to manage all the connections you have had. Then you can enter a hostname or Apple ID (yes, any Apple ID) to connect to. And voila, now you are screensharing.
You can also modify what you see on the toolbar, and adjust the settings for display scaling and quality. Screensharing shares your microphone by default; you can mute the microphone in the Connection menu. It also shares your clipboard by default, which can be changed in the Edit menu. You can copy, paste, or drag text, images, and files during screensharing. Universal Clipboard is not available during screensharing, and you can't have both Screen Sharing and Remote Management on at the same time.
Document Templates
You can create document templates so that you don't need to worry about modifying the original document; hence they are called "stationery" in MacOS. To do this, select the document and use Get Info or ⌘ + I. In the General section, select Stationery pad. Make sure it is a document you can edit, not a folder or an alias. Now, every time you open that document, it will duplicate itself, without you needing to worry about modifying the original.
-
Keyboard Shortcut for Text Input in Shortcuts
I have this little shortcut that takes a URL, gets the HTML of the page, takes a screenshot, tags it based on some heuristics, and saves it to Notion. The most frustrating part has been using my trackpad to click Done. Well, apparently you can press fn + return or 🌐 + return to do it! And no, ⌘ + return or just return doesn't work here.
Side note: Accepting a text replacement on MacOS doesn't happen with tab or return; it happens with space.
-
Scale Font Based on Screen Size With Tailwind
Today I learnt that Tailwind doesn't automatically scale base font sizes based on screen or viewport size. Sure, there are plugin options and complicated stuff, but I am not a front-end wizard, and adding this to my global.css helped. You can check out the difference in font size for this website on mobile versus desktop.
html {
  font-size: 14px;
  @media screen(sm) {
    font-size: 16px;
  }
}