Stream
-
Using val.town for the first time
Honestly, val.town is cool and fun, and I have never been able to get something up and running as quickly as I have on val.town. Once repl.it stopped allowing you to run Flask apps without upgrading, I knew it was only a matter of time before I would need to switch to a different cloud-function app.
I was able to move my reference manager from repl.it to val.town on Jan 4, 2024: Notion Reference Manager
Here is a quick friction log:
I tried to use it as an IDE (based on around 3 hours of exploration), but there were a few challenges:
- The all-scripts-on-one-page layout made it feel less like an IDE.
- There was a noticeable delay between clicking run and seeing the output.
- The auto-format code feature often didn't work as expected.
- Every save increased the version, which was a personal irritation as I prefer versions to represent actual changes.
- Sometimes, saving didn't work if I collapsed the tab group.
- For it to feel like a true script IDE, it might need code folding functionality.
On the positive side, I appreciated the TypeScript argument suggestions. So, I've been using it more as an executable pastebin. Overall, I like how compact and clean the interface is, considering how much you can do with it.
-
Fixing stream overflow issues
One benefit of owning the codebase for my Notion-based blog is that I can fix things no other developer would be interested in fixing.
For example, my post pages were fine, but my stream page had an overflowing code block, which I hacked a fix for in an hour 😅

BEFORE: The image displays a webpage titled "Nerdy Momo Cat," featuring a post under the "Stream" section dated January 5, 2024, about a "Notion Reference Manager." The post includes a list of instructions for setting up a Notion paper manager and a corresponding script for automation. A code snippet in a black box extends beyond the right edge of the visible page area, indicating that the content is not fully contained within the layout of the webpage.

AFTER: The image shows a webpage with text describing a Notion Reference Manager. It includes instructions for adding a paper name, setting an environment variable for the API key, adding a database ID, sharing the database for integration, and changing the script to run automatically. A section of code is displayed in a black box, indicating a script for hosting the automation elsewhere. The code block, which previously overflowed to the right, has been corrected to fit within the display area.
-
Notion Reference Manager
Notion Reference Manager and the corresponding valtown script
- Add just the paper name with a semicolon (;) at the end and watch it fill in!
- Add NOTION_API_KEY as an env variable in val.town
- Add your database id here: const PAPERPILE_DB_ID = "DB_ID_GOES_HERE";
- Make sure you have shared the database with the integration (and hence the API key)
- Remember to change the script to run automatically every 15 minutes
- Have fun?
Code in case you want to host somewhere else:

```typescript
import process from "node:process";
import { Client } from "npm:@notionhq/client";
import { fetch } from "npm:cross-fetch";

var currentDate = new Date();
currentDate.setMonth(currentDate.getMonth() - 2);
var lastCheckDate = currentDate.toISOString().split("T")[0];

export default async function (interval: Interval) {
  const NOTION_API_KEY = process.env.NOTION_API_KEY;
  const PAPERPILE_DB_ID = "DB_ID_HERE";
  if (!NOTION_API_KEY || !PAPERPILE_DB_ID) {
    throw new Error("Please fill in your API key and database ID");
  }
  let dont_update = [];
  const notion = new Client({ auth: NOTION_API_KEY });
  const databaseId = PAPERPILE_DB_ID;

  // Find recent rows whose Name still contains the ";" trigger
  const queryResponse = await notion.databases.query({
    database_id: databaseId,
    page_size: 100,
    filter: {
      and: [
        { property: "Name", rich_text: { contains: ";" } },
        { property: "Created Time", date: { on_or_after: lastCheckDate } },
      ],
    },
  });
  const relevant_results = queryResponse.results.filter(
    (i) => !dont_update.includes(i.id),
  );
  console.log(`Checked database, found ${relevant_results.length} items to update.`);

  const all_updated = [];
  for (var i of relevant_results) {
    let semscholar_query = i.properties.Name.title[0].plain_text.replace(/[^\w\s]/gi, " ");
    console.log(semscholar_query);
    let fields = `url,title,abstract,authors,year,externalIds`;
    const j = await fetch(
      `https://api.semanticscholar.org/graph/v1/paper/search?query=${
        encodeURIComponent(semscholar_query)
      }&limit=1&fields=${encodeURIComponent(fields)}`,
    )
      .then((r) => r.json())
      .catch(function () {
        console.log("Promise Rejected");
      });
    if (!(j.total > 0)) {
      console.log("No results found for " + semscholar_query);
      continue;
    }
    const paper = j.data[0];

    // Pick an identifier to resolve a BibTeX entry from
    let doi_to_add = null;
    let bibtext_to_add = null;
    if (paper.externalIds.DOI) {
      doi_to_add = paper.externalIds.DOI;
    } else if (paper.externalIds.PubMed) {
      doi_to_add = paper.externalIds.PubMed;
    } else if (paper.externalIds.PubMedCentral) {
      doi_to_add = paper.externalIds.PubMedCentral;
    } else if (paper.externalIds.ArXiv) {
      doi_to_add = "arxiv." + paper.externalIds.ArXiv;
    }

    if (doi_to_add) {
      // (an earlier, commented-out version used api.paperpile.com/api/public/convert)
      let bib = await fetch("https://doi.org/" + doi_to_add, {
        method: "GET",
        headers: {
          Accept: "application/x-bibtex; charset=utf-8",
          "Content-Type": "text/html; charset=UTF-8",
        },
        redirect: "follow",
      }).then((r) => r.text());
      if (bib != "" && bib != null && bib.startsWith("@") == true) {
        bib = bib.replace(/\$.*?\$/g, "");
        bib = bib.replace(/amp/g, "");
        bibtext_to_add = bib;
        console.log("Found bib");
      }
    } else {
      // No identifier: build a minimal BibTeX entry by hand
      let authors = [];
      if (paper.authors != null) {
        for (var jj = 0; jj < paper.authors.length; jj++) {
          authors.push(paper.authors[jj].name);
        }
      }
      console.log(authors.toString());
      let bib_str = "@article{" + paper.paperId + ",\n title = {" + paper.title + "},\n";
      if (paper.venue != null && paper.venue != "") {
        bib_str += "venue = {" + paper.venue + "},\n";
      }
      if (paper.year != null && paper.year != "") {
        bib_str += " year = {" + paper.year + "},\n ";
      }
      if (paper.authors != null && paper.authors != []) {
        bib_str += "author = {" + authors.join(" and ") + "}\n";
      }
      bib_str += "}";
      console.log(bib_str);
      bibtext_to_add = bib_str;
    }

    // Get the tldr from Semantic Scholar
    let sem_scholar_ppid = paper.paperId;
    let tldr = null;
    const tl_f = await fetch(
      `https://api.semanticscholar.org/graph/v1/paper/${
        encodeURIComponent(sem_scholar_ppid)
      }?fields=${encodeURIComponent("tldr")}`,
    ).then((r) => r.json());
    if (tl_f.tldr) {
      tldr = tl_f.tldr.text;
    }

    let updateOptions = {
      page_id: i.id,
      properties: {
        Name: {
          title: [{
            type: "text",
            text: {
              content: paper.title ||
                i.properties.Name.title[0].plain_text.replace(";", ""),
            },
          }],
        },
        Authors: {
          multi_select: paper.authors
            .filter((x) => x)
            .map((x) => ({ name: x.name.replace(",", "") }))
            .slice(0, 100),
        },
        Abstract: {
          rich_text: [{
            text: {
              content: (paper.abstract || "").length < 1900
                ? paper.abstract || ""
                : paper.abstract.substring(0, 1900) + "...",
            },
          }],
        },
        Link: { url: paper.url },
        Year: { number: paper.year },
      },
    };
    if (tldr) {
      updateOptions.properties.tldr = { rich_text: [{ text: { content: tldr } }] };
    }
    if (doi_to_add) {
      updateOptions.properties.DOI = { rich_text: [{ text: { content: doi_to_add } }] };
    }
    if (bibtext_to_add) {
      updateOptions.properties.Bibtex = {
        rich_text: [{ text: { content: bibtext_to_add } }],
      };
      if (bibtext_to_add != "") {
        updateOptions.properties.In_Text_Citation = {
          rich_text: [{
            text: { content: bibtext_to_add.split("{")[1].split(",")[0] },
          }],
        };
      }
    }

    try {
      await notion.pages.update(updateOptions);
      all_updated.push(i.properties.Name.title[0].plain_text);
    } catch (e) {
      console.error(`Error on ${i.id}: [${e.status}] ${e.message}`);
      if (e.status == 409) {
        console.log("Saving conflict, scheduling retry in 3 seconds");
        setTimeout(async () => {
          try {
            console.log(`Retrying ${i.id}`);
            await notion.pages.update(updateOptions);
          } catch (e) {
            console.error(
              `Subsequent error while resolving saving conflict on ${i.id}: [${e.status}] ${e.message}`,
            );
            // dont_update.push(i.id);
          }
        }, 3000);
      } else {
        // dont_update.push(i.id);
      }
    }
    console.log("Updated " + i.properties.Name.title[0].plain_text);
  }
}
```
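The In_Text_Citation property in the script above is pulled out of the generated BibTeX with a quick string split; as a standalone sketch (hypothetical helper name, same split logic):

```typescript
// Extract the cite key from a BibTeX entry:
// "@article{vaswani2017attention, title = {...}}" -> "vaswani2017attention".
function citeKeyFromBibtex(bibtex: string): string {
  return bibtex.split("{")[1].split(",")[0];
}

console.log(citeKeyFromBibtex("@article{vaswani2017attention,\n title = {Attention Is All You Need},\n}"));
// → vaswani2017attention
```

This assumes the entry starts with `@type{key,`, which holds both for the BibTeX that doi.org returns and for the hand-built fallback entry.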
Remember, this code is in val.town's format, hence the weird imports. You would probably need to switch those to require (I ain’t good at JS)
-
Add to Notion through Todoist
I wish there was a way to run Python on valtown -- because this is a script I want to use with OpenAI in the future, and I want instructor and pydantic for LLM-fuzzy-estimation-processing of fields. This JS script will do in the meantime, I guess.
Add to a Notion page as a callout, or as a page name to a Notion database, through Todoist (corresponding valtown script)
Code in case you want to host somewhere else:

```typescript
import process from "node:process";
import { TodoistApi } from "npm:@doist/todoist-api-typescript";
import { Client } from "npm:@notionhq/client";

const TODOIST_API_KEY = process.env.TODOIST_API_KEY;
const todoistapi = new TodoistApi(TODOIST_API_KEY);
const NOTION_API_KEY = process.env.NOTION_API_KEY;
const notion = new Client({ auth: NOTION_API_KEY });

var add_to_notion_todoist_project_id = "PROJECT_ID_HERE";

// Map Todoist sections to Notion destinations (a "dump" entry is the fallback)
var todoist_dict_mapping = {
  "habit": {
    "todoist-section-id": "SECTION_ID_HERE",
    "notion-map-type": "page",
    "notion-id": "PAGE_ID_HERE",
  },
  "papers": {
    "todoist-section-id": "SECTION_ID_HERE",
    "notion-map-type": "database",
    "notion-id": "DB_ID_HERE",
  },
};

function getNotionId(section_id) {
  if (!section_id) {
    return [
      todoist_dict_mapping["dump"]["notion-map-type"],
      todoist_dict_mapping["dump"]["notion-id"],
    ];
  }
  for (var key in todoist_dict_mapping) {
    if (todoist_dict_mapping[key]["todoist-section-id"] === section_id) {
      return [
        todoist_dict_mapping[key]["notion-map-type"] || todoist_dict_mapping["dump"]["notion-map-type"],
        todoist_dict_mapping[key]["notion-id"] || todoist_dict_mapping["dump"]["notion-id"],
      ];
    }
  }
  return [
    todoist_dict_mapping["dump"]["notion-map-type"],
    todoist_dict_mapping["dump"]["notion-id"],
  ];
}

function convertDateObject(due) {
  function convertToISOWithOffset(datetimeStr, timezoneStr) {
    const date = new Date(datetimeStr);
    const [, sign, hours, minutes] = timezoneStr.match(/GMT ([+-])(\d{1,2}):(\d{2})/);
    date.setUTCMinutes(
      date.getUTCMinutes() + (parseInt(hours) * 60 + parseInt(minutes)) * (sign === "+" ? 1 : -1),
    );
    return date.toISOString().split(".")[0]
      + `${sign}${String(hours).padStart(2, "0")}:${String(minutes).padStart(2, "0")}`;
  }
  const formatDate = (date, datetime, timezone) => {
    let isoString = datetime ? datetime : date;
    if (timezone && timezone.startsWith("GMT") && timezone.length > 3) {
      return convertToISOWithOffset(datetime, timezone);
    } else {
      return isoString;
    }
  };
  return {
    start: due ? formatDate(due.date, due.datetime, due.timezone) : new Date().toISOString(),
    end: null,
    time_zone: due && due.datetime && due.timezone && due.timezone.startsWith("GMT") && due.timezone.length > 3
      ? null
      : (due && due.datetime && due.timezone ? due.timezone : "America/Los_Angeles"),
  };
}

async function addCalloutToNotionPage(page_id, content, date) {
  console.log(JSON.stringify(date));
  const response = await notion.blocks.children.append({
    block_id: page_id,
    children: [{
      "callout": {
        "rich_text": [{
          "type": "mention",
          "mention": { "type": "date", "date": date },
        }],
        "icon": {
          "type": "external",
          "external": { "url": "https://www.notion.so/icons/circle-dot_lightgray.svg" },
        },
        "children": [{
          "paragraph": { "rich_text": [{ "text": { "content": content } }] },
        }],
      },
    }],
  });
  console.log(JSON.stringify(response));
}

async function addPageToNotionDatabse(database_id, content) {
  const response = await notion.pages.create({
    "parent": { "type": "database_id", "database_id": database_id },
    "properties": {
      "Name": { "title": [{ "text": { "content": content } }] },
    },
  });
}

export default async function (interval: Interval) {
  var tasks = await todoistapi.getTasks({
    projectId: add_to_notion_todoist_project_id,
  });
  for (const task of tasks) {
    console.log(task);
    const [mappedNotionType, mappedNotionId] = getNotionId(task.sectionId);
    if (mappedNotionId) {
      if (mappedNotionType == "page" && mappedNotionId) {
        addCalloutToNotionPage(mappedNotionId, task.content, convertDateObject(task.due));
      } else if (mappedNotionType == "database" && mappedNotionId) {
        addPageToNotionDatabse(mappedNotionId, task.content);
      }
      todoistapi.deleteTask(task.id);
    }
  }
}
```
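The trickiest bit in the script above is Todoist's "GMT +5:30"-style timezone strings. That conversion can be sketched in isolation (a simplified standalone version of the script's convertToISOWithOffset, assuming that input format):

```typescript
// Shift a UTC datetime by a "GMT +5:30"-style offset and re-attach the offset
// as an ISO-8601 suffix, so the target gets local wall-clock time plus offset.
function toISOWithOffset(datetimeStr: string, timezoneStr: string): string {
  const date = new Date(datetimeStr);
  const match = timezoneStr.match(/GMT ([+-])(\d{1,2}):(\d{2})/);
  if (!match) return date.toISOString(); // fall back to plain UTC
  const [, sign, hours, minutes] = match;
  const offsetMinutes = (parseInt(hours) * 60 + parseInt(minutes)) * (sign === "+" ? 1 : -1);
  date.setUTCMinutes(date.getUTCMinutes() + offsetMinutes);
  return (
    date.toISOString().split(".")[0] +
    `${sign}${String(hours).padStart(2, "0")}:${minutes}`
  );
}

console.log(toISOWithOffset("2024-01-05T10:00:00Z", "GMT +5:30"));
// → 2024-01-05T15:30:00+05:30
```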
Remember, this code is in val.town's format, hence the weird imports. You would probably need to switch those to require (I ain’t good at JS)
- The script either adds a page, with the content as the title, to a database (which I use with the Notion Reference Manager), or adds the content as a callout with a date to a page.
- Uses a single project in Todoist and sections to identify which page/database the content goes into
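The section-based routing boils down to a small lookup; a minimal sketch (hypothetical section and Notion IDs, with the same "dump" fallback idea the script uses):

```typescript
// Route a Todoist section id to a Notion destination: a page (gets a callout)
// or a database (gets a new page). Unknown/missing sections go to a dump target.
type Target = { type: "page" | "database"; notionId: string };

const sectionMap: Record<string, Target> = {
  "sec-habit": { type: "page", notionId: "HABIT_PAGE_ID" },     // hypothetical ids
  "sec-papers": { type: "database", notionId: "PAPERS_DB_ID" },
};
const dumpTarget: Target = { type: "page", notionId: "DUMP_PAGE_ID" };

function resolveTarget(sectionId: string | null): Target {
  return (sectionId && sectionMap[sectionId]) || dumpTarget;
}

console.log(resolveTarget("sec-papers").type); // → database
console.log(resolveTarget(null).notionId);     // → DUMP_PAGE_ID
```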
-
Apps for 2024
For 2024, I am sticking to Notion, Todoist, Raindrop, Reader, and Google Keep.
My list of apps to check out later has shortened -- Tana and Heptabase for notes, Superlist for todos, and Matter if they have native Android + tweet capture in 2025, I guess.
I think what helped was realizing I want something that is online-first (that is, sync across devices at the same time is a major thing) and collaboration-first. That removed Capacities from the list. I don't know if Tana plans to have collaboration on objects (rather than workspaces).
-
Copilot for Commit Messages
If my repo has any commit message other than "huh?", "what", "done", or "commit", it is because of this tiny lifesaver by @code. I don't even know where it comes from (Copilot, internal, an extension) -- but it has made things a lot easier!
A screenshot of a version control panel within Visual Studio Code. It highlights a feature where commit messages can be auto-generated using AI. The user interface shows a text box where a message "Fix sitemap filter and update site logo" has been typed, likely as a commit message for version control. There's also a tooltip visible with the option to "Generate Commit Message", indicating the AI's assistance in creating commit descriptions. The panel lists several changed files, like "astro.config.ts", "site.config.ts", and "types.ts", with a notification showing there are 4 changes in total. -
Missing information and GPT-4 to the rescue
Missing information bothers me a ton when I have structured data inputs (food, journal, habits, medications, etc.). What GPT-4 enabled for me is the ability to dump stuff onto a page where I would earlier have hunted for an app. I can use it to extract structured data if need be.
-
Timeboxing
It took me so long, but I realized I do not want timeboxing for actual work. I want timeboxing for everything other than work, so that I know how much work I can get done, how much social energy I can expend when accepting a meeting, what is urgent vs. important, etc.
Work, for me, expands to fill the space I provide it, based on the energy and brainpower I have. It is the most... finicky thing. Everything else affects work. Yes, work is the most important thing, but everything else on my calendar is there because it is either urgent or important.
So, it **has** to be there. Or I will end up not talking to my best friend for three months because I think that is moveable. That is what timeboxing needs to be for me. I do 2-day task slots now (it can be 16 hours in 1 day, or 5+10), but that is what works best for me.
Also, hyperfocus is no fun, but it does not help me to fight against it. So, work gets done how it needs to get done! Yes, there are review deadlines, but having a timer that says "don't do this any longer," or something that says I need to do exactly X, has not helped.
-
Local first apps are a no-go for me
Every single time I use a local-first app, I am reminded why I am not meant for local-first apps. My @GoodnotesApp ended up with a corrupted/unlinked database of PDF files, even with auto-backup on, which cannot be fixed. Great!
This image shows a screenshot of the Goodnotes app on iPad. The interface is displaying a folder named "Paper Reviews". Several documents are shown as thumbnails. Annotations on the thumbnails are visible, such as lines or marks made on the documents, but the actual content of the PDFs is not visible in the thumbnail preview, suggesting data corruption.
In case people bring this up:
1. Maybe you deleted the files -- I did not intend to.
2. You should have exported to PDF -- I'd lose all ability to search handwriting; might as well print it out.
3. Read pdf & review in markdown -- uh huh.
4. Beta OS switching problem -- and? -
Morgen’s Calendar Assist
I tried @morgencalendar's new assist option. It is infinitely configurable (w/ some defaults), which is great! W/ the new pricing + education discount, it's more expensive than @reclaimai; the question for me is how much customization I really want (I only need todoist+tasks).
Like, yes, I can use morgen to do something like propagate stuff based on a regex (because I often add stuff like "Migraine attack from 4pm-4am") and delete it from the original calendar using JS, but maybe I can just do it using Google Apps Script instead, ya know.
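The propagate-by-regex idea can be sketched as a pure filter (hypothetical pattern and event titles; in Google Apps Script you would wire this up to CalendarApp's getEvents/createEvent on a time-driven trigger):

```typescript
// Decide which event titles get copied to a second calendar based on a regex,
// e.g. health events like "Migraine attack from 4pm-4am".
const PROPAGATE_PATTERN = /migraine/i; // hypothetical pattern to match

function shouldPropagate(title: string): boolean {
  return PROPAGATE_PATTERN.test(title);
}

const titles = ["Migraine attack from 4pm-4am", "Team standup", "Lunch"];
console.log(titles.filter(shouldPropagate)); // → [ 'Migraine attack from 4pm-4am' ]
```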