Stream
-
Use Notion’s Property Description As Text → DB add-itor
Val.town script here (which you can import as a module if you want!)
Demo using the template Money Database.
- Uses `instructor` and OpenAI (with `gpt-4-turbo`) to process any content into a Notion database entry.
- Use `addToNotion` with any database id and content.
```ts
await addToNotion(
  "DB_ID_GOES_HERE",
  "CONTENT_GOES_HERE", // for example: "$43.28 ordered malai kofta and kadhi (doordash) [me and mom] jan 3 2024"
);
```
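For context, here's a minimal sketch of the instructor + OpenAI extraction step this relies on. The schema and field names below are illustrative assumptions; the real schema is generated from your database's properties:

```ts
import Instructor from "npm:@instructor-ai/instructor";
import OpenAI from "npm:openai";
import { z } from "npm:zod";
import process from "node:process";

// Hypothetical schema: the actual one is built from the database's properties.
const MoneyEntry = z.object({
  Name: z.string(),
  Amount: z.number(),
  Date: z.string().describe("ISO 8601 date"),
});

const client = Instructor({
  client: new OpenAI({ apiKey: process.env.OPENAI_API_KEY }),
  mode: "TOOLS",
});

const entry = await client.chat.completions.create({
  model: "gpt-4-turbo",
  messages: [{
    role: "user",
    content: "$43.28 ordered malai kofta and kadhi (doordash) [me and mom] jan 3 2024",
  }],
  response_model: { schema: MoneyEntry, name: "MoneyEntry" },
});
// `entry` is now a validated object you can map onto Notion properties.
```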
Prompts are created based on your database name, database description, property name, property type, property description, and, if applicable, property options (and their descriptions).
Supports: checkbox, date, multi_select, number, rich_text, select, status, title, url, email. Only properties whose type is title, or whose description starts with ✨, are included.
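Roughly, that filter looks like this (a sketch, assuming the schema comes from `notion.databases.retrieve` and that the API exposes property descriptions on it; not the actual implementation):

```ts
import { Client } from "npm:@notionhq/client";
import process from "node:process";

const notion = new Client({ auth: process.env.NOTION_API_KEY });

const SUPPORTED_TYPES = new Set([
  "checkbox", "date", "multi_select", "number", "rich_text",
  "select", "status", "title", "url", "email",
]);

const db = await notion.databases.retrieve({ database_id: "DB_ID_GOES_HERE" });
// Keep supported properties that are either the title or are ✨-marked.
const promptProps = Object.values(db.properties).filter((p: any) =>
  SUPPORTED_TYPES.has(p.type)
  && (p.type === "title" || (p.description ?? "").startsWith("✨"))
);
```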
- Uses `NOTION_API_KEY` and `OPENAI_API_KEY` stored in env variables, and uses val.town blob storage to store information about the database.
- Use `get_notion_db_info` to use the stored blob if it exists (or create one); use `get_and_save_notion_db_info` to create a new blob (replacing an existing one if it exists); see the sketch after this list.
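A minimal sketch of that blob-caching pattern, assuming val.town's std `blob` helpers (the key name and return shapes here are my assumptions, not the actual implementation):

```ts
import { blob } from "https://esm.town/v/std/blob";
import { Client } from "npm:@notionhq/client";
import process from "node:process";

const notion = new Client({ auth: process.env.NOTION_API_KEY });

// Create (or replace) the cached database-info blob.
async function get_and_save_notion_db_info(dbId: string) {
  const db = await notion.databases.retrieve({ database_id: dbId });
  await blob.setJSON(`notion_db_info_${dbId}`, db);
  return db;
}

// Use the stored blob if it exists; otherwise create one.
async function get_notion_db_info(dbId: string) {
  const cached = await blob.getJSON(`notion_db_info_${dbId}`);
  return cached ?? (await get_and_save_notion_db_info(dbId));
}
```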
-
My Problem with Productivity Social Media
Productive doing something that is irrelevant to me
These two comments on Reddit explain exactly why I don't watch productivity content. Whether it be YouTube or Twitter or anything else — I do not want to be productive at running courses, and I do not want to be productive at running a productivity business. But the content-creation grind requires so much input that sooner rather than later, that becomes your job: either the content itself, or the course itself, or the business of teaching productivity. And I do not fault creators for that. But that isn't me, and now it isn't relatable. Ali Abdaal used to stand out because his content was about being productive at being (or learning to be) a doctor, but that is no longer the case.
That, plus sponsorships -- but the latter isn't related to the productivity sphere.
-
Scenario’s Marketing Seems Iffy?
As much as I like Scenario, and as on-the-fence as I am about AI art, I really do not like the terminology shift from training → pre-training, and from fine-tuning → training. Companies can say things like "train on your art", which still means the model was pre-trained on scraped art.
And the math & opt-in/opt-out debate aside -- I do not like the "lying by omission" aspect of it.
Like, a platform isn't more ethical than Midjourney just because it provides services to fine-tune a model on a set of art samples. 🤷🏽‍♀️
-
Poorly done AI illustrations
I know, I know. I decided not to be critical in the new year and only reinforce positivity, but what is up with really poorly done AI images used to illustrate a thread's main points or as OG images for links? You want to save money by not paying an illustrator, but you also want to save time by not spending hours and hours actually getting the perfect image? Is that what it is?
Because otherwise, how do you explain these images at all?
-
I want to learn interactive animations
One of my goals this year is to create animated illustrations. Do you all have recommendations for programmatic animation libraries that can animate illustrations (not divs/elements/simple shapes)? I'd use Procreate Dreams, but you cannot configure those with trigger states, so I can't make them interactive; and Procreate is not vector-based, so I can't export an SVG 😟
One of the hard parts about this not being tied to a project is that I have open-ended questions about the scope.
- Do I need trigger animations for illustrations or is that need coming from data animations?
- Am I looking for video exports, runtimes (like Rive), or SVGs?
-
Using val.town for the first time
Honestly, val.town is cool and fun, and I have never been able to get something up and running as quickly as I have on val.town. Once repl.it stopped allowing you to run Flask apps without upgrading, I knew it was only a matter of time before I would need to switch to a different cloud-function app.
I was able to move my reference manager from repl.it to val.town on Jan 4, 2024: Notion Reference Manager
Here is a quick friction log:
I tried to use it as an IDE (based on around 3 hours of exploration), but there were a few challenges:
- The all-scripts-on-one-page layout made it feel less like an IDE.
- There was a noticeable delay between clicking run and seeing the output.
- The auto-format code feature often didn't work as expected.
- Every save increased the version, which was a personal irritation as I prefer versions to represent actual changes.
- Sometimes, saving didn't work if I collapsed the tab group.
- For it to feel like a true script IDE, it might need code folding functionality.
On the positive side, I appreciated the TypeScript argument suggestions. So, I've been using it more as an executable pastebin. Overall, I like how compact and clean the interface is, considering how much you can do with it.
-
Fixing stream overflow issues
One benefit of owning the codebase for my Notion-based blog is that I can fix things that no other developer would be interested in fixing.
For example, my post pages were fine, but my stream page had an overflowing code block, which I hacked a fix for in an hour 😅
Before / after screenshots.
-
Notion Reference Manager
Notion Reference Manager and the corresponding val.town script
- Add just the paper name with a semicolon at the end (;) and watch it fill in!
- Add `NOTION_API_KEY` as an env variable in val.town
- Add your database id here: `const PAPERPILE_DB_ID = "DB_ID_GOES_HERE";`
- Make sure you have shared the database with the integration (and hence the API key)
- Remember to change the script to run automatically every 15 minutes
- Have fun?
Code in case you want to host somewhere else:

```ts
import process from "node:process";
import { Client } from "npm:@notionhq/client";
import { fetch } from "npm:cross-fetch";

// Only look at pages created in the last two months.
const currentDate = new Date();
currentDate.setMonth(currentDate.getMonth() - 2);
const lastCheckDate = currentDate.toISOString().split("T")[0];

export default async function (interval: Interval) {
  const NOTION_API_KEY = process.env.NOTION_API_KEY;
  const PAPERPILE_DB_ID = "DB_ID_HERE";
  if (!NOTION_API_KEY || !PAPERPILE_DB_ID) {
    throw new Error("Please fill in your API key and database ID");
  }

  const dont_update: string[] = [];
  const notion = new Client({ auth: NOTION_API_KEY });

  // Find pages whose title still contains ";" (the "please fill me in" marker).
  const queryResponse = await notion.databases.query({
    database_id: PAPERPILE_DB_ID,
    page_size: 100,
    filter: {
      and: [
        { property: "Name", rich_text: { contains: ";" } },
        { property: "Created Time", date: { on_or_after: lastCheckDate } },
      ],
    },
  });

  const relevant_results = queryResponse.results.filter((r) => !dont_update.includes(r.id));
  console.log(`Checked database, found ${relevant_results.length} items to update.`);

  const all_updated = [];
  for (const i of relevant_results) {
    // Use the page title (punctuation stripped) as the Semantic Scholar query.
    const semscholar_query = i.properties.Name.title[0].plain_text.replace(/[^\w\s]/gi, " ");
    console.log(semscholar_query);

    const fields = `url,title,abstract,authors,year,externalIds`;
    const j = await fetch(
      `https://api.semanticscholar.org/graph/v1/paper/search?query=${
        encodeURIComponent(semscholar_query)
      }&limit=1&fields=${encodeURIComponent(fields)}`,
    )
      .then((r) => r.json())
      .catch(() => console.log("Promise Rejected"));
    if (!j || !(j.total > 0)) {
      console.log("No results found for " + semscholar_query);
      continue;
    }
    const paper = j.data[0];

    // Prefer DOI, then PubMed, PubMedCentral, and arXiv ids for the bibtex lookup.
    let doi_to_add = null;
    let bibtext_to_add = null;
    if (paper.externalIds.DOI) doi_to_add = paper.externalIds.DOI;
    else if (paper.externalIds.PubMed) doi_to_add = paper.externalIds.PubMed;
    else if (paper.externalIds.PubMedCentral) doi_to_add = paper.externalIds.PubMedCentral;
    else if (paper.externalIds.ArXiv) doi_to_add = "arxiv." + paper.externalIds.ArXiv;

    if (doi_to_add) {
      // doi.org content negotiation returns bibtex directly.
      let bib = await fetch("https://doi.org/" + doi_to_add, {
        method: "GET",
        headers: {
          Accept: "application/x-bibtex; charset=utf-8",
          "Content-Type": "text/html; charset=UTF-8",
        },
        redirect: "follow",
      }).then((r) => r.text());
      if (bib && bib.startsWith("@")) {
        // Strip inline LaTeX math and "amp" entity residue from the bibtex.
        bib = bib.replace(/\$.*?\$/g, "").replace(/amp/g, "");
        bibtext_to_add = bib;
        console.log("Found bib");
      }
    } else {
      // No external id: build a minimal bibtex entry by hand.
      const authors = (paper.authors ?? []).map((a) => a.name);
      console.log(authors.toString());
      let bib_str = "@article{" + paper.paperId + ",\n title = {" + paper.title + "},\n";
      if (paper.venue) bib_str += "venue = {" + paper.venue + "},\n";
      if (paper.year) bib_str += " year = {" + paper.year + "},\n ";
      if (authors.length > 0) bib_str += "author = {" + authors.join(" and ") + "}\n";
      bib_str += "}";
      console.log(bib_str);
      bibtext_to_add = bib_str;
    }

    // Fetch the tl;dr summary from Semantic Scholar.
    let tldr = null;
    const tl_f = await fetch(
      `https://api.semanticscholar.org/graph/v1/paper/${encodeURIComponent(paper.paperId)}?fields=tldr`,
    ).then((r) => r.json());
    if (tl_f.tldr) tldr = tl_f.tldr.text;

    const updateOptions = {
      page_id: i.id,
      properties: {
        Name: {
          title: [{
            type: "text",
            text: {
              content: paper.title || i.properties.Name.title[0].plain_text.replace(";", ""),
            },
          }],
        },
        Authors: {
          multi_select: (paper.authors ?? [])
            .filter((x) => x)
            .map((x) => ({ name: x.name.replace(",", "") }))
            .slice(0, 100),
        },
        Abstract: {
          rich_text: [{
            text: {
              content: (paper.abstract || "").length < 1900
                ? paper.abstract || ""
                : paper.abstract.substring(0, 1900) + "...",
            },
          }],
        },
        Link: { url: paper.url },
        Year: { number: paper.year },
      },
    };
    if (tldr) {
      updateOptions.properties.tldr = { rich_text: [{ text: { content: tldr } }] };
    }
    if (doi_to_add) {
      updateOptions.properties.DOI = { rich_text: [{ text: { content: doi_to_add } }] };
    }
    if (bibtext_to_add) {
      updateOptions.properties.Bibtex = { rich_text: [{ text: { content: bibtext_to_add } }] };
      // In-text citation key is the token between "{" and the first ",".
      updateOptions.properties.In_Text_Citation = {
        rich_text: [{ text: { content: bibtext_to_add.split("{")[1].split(",")[0] } }],
      };
    }

    try {
      await notion.pages.update(updateOptions);
      all_updated.push(i.properties.Name.title[0].plain_text);
      console.log("Updated " + i.properties.Name.title[0].plain_text);
    } catch (e) {
      console.error(`Error on ${i.id}: [${e.status}] ${e.message}`);
      if (e.status == 409) {
        // Notion saving conflict: retry once after 3 seconds.
        console.log("Saving conflict, scheduling retry in 3 seconds");
        setTimeout(async () => {
          try {
            console.log(`Retrying ${i.id}`);
            await notion.pages.update(updateOptions);
          } catch (e) {
            console.error(
              `Subsequent error while resolving saving conflict on ${i.id}: [${e.status}] ${e.message}`,
            );
          }
        }, 3000);
      }
    }
  }
}
```
-
Add to Notion through Todoist
Comment: I wish there was a way to run Python on val.town -- because this is a script I want to use with OpenAI in the future, and I want instructor and pydantic for LLM-fuzzy-estimation-processing of fields. This JS script will do in the meantime, I guess.
Adds content to a Notion page as a callout, or as a page title to a Notion database, through Todoist (corresponding val.town script).
Code in case you want to host somewhere else:

```ts
import process from "node:process";
import { TodoistApi } from "npm:@doist/todoist-api-typescript";
import { Client } from "npm:@notionhq/client";

const TODOIST_API_KEY = process.env.TODOIST_API_KEY;
const todoistapi = new TodoistApi(TODOIST_API_KEY);
const NOTION_API_KEY = process.env.NOTION_API_KEY;
const notion = new Client({ auth: NOTION_API_KEY });

// All tasks land in one Todoist project; sections route them to Notion targets.
const add_to_notion_todoist_project_id = "PROJECT_ID_HERE";

const todoist_dict_mapping = {
  // Fallback target for tasks without a (matching) section.
  "dump": {
    "todoist-section-id": null,
    "notion-map-type": "page",
    "notion-id": "PAGE_ID_HERE",
  },
  "habit": {
    "todoist-section-id": "SECTION_ID_HERE",
    "notion-map-type": "page",
    "notion-id": "PAGE_ID_HERE",
  },
  "papers": {
    "todoist-section-id": "SECTION_ID_HERE",
    "notion-map-type": "database",
    "notion-id": "DB_ID_HERE",
  },
};

// Map a Todoist section id to a [type, id] Notion target, defaulting to "dump".
function getNotionId(section_id) {
  if (section_id) {
    for (const key in todoist_dict_mapping) {
      if (todoist_dict_mapping[key]["todoist-section-id"] === section_id) {
        return [
          todoist_dict_mapping[key]["notion-map-type"] || todoist_dict_mapping["dump"]["notion-map-type"],
          todoist_dict_mapping[key]["notion-id"] || todoist_dict_mapping["dump"]["notion-id"],
        ];
      }
    }
  }
  return [todoist_dict_mapping["dump"]["notion-map-type"], todoist_dict_mapping["dump"]["notion-id"]];
}

// Convert a Todoist due object into a Notion date object, handling "GMT +X:YY" offsets.
function convertDateObject(due) {
  function convertToISOWithOffset(datetimeStr, timezoneStr) {
    const date = new Date(datetimeStr);
    const [, sign, hours, minutes] = timezoneStr.match(/GMT ([+-])(\d{1,2}):(\d{2})/);
    date.setUTCMinutes(
      date.getUTCMinutes() + (parseInt(hours) * 60 + parseInt(minutes)) * (sign === "+" ? 1 : -1),
    );
    return date.toISOString().split(".")[0]
      + `${sign}${String(hours).padStart(2, "0")}:${String(minutes).padStart(2, "0")}`;
  }
  const formatDate = (date, datetime, timezone) => {
    const isoString = datetime ? datetime : date;
    if (timezone && timezone.startsWith("GMT") && timezone.length > 3) {
      return convertToISOWithOffset(datetime, timezone);
    }
    return isoString;
  };
  return {
    start: due ? formatDate(due.date, due.datetime, due.timezone) : new Date().toISOString(),
    end: null,
    time_zone: due && due.datetime && due.timezone && due.timezone.startsWith("GMT") && due.timezone.length > 3
      ? null
      : (due && due.datetime && due.timezone ? due.timezone : "America/Los_Angeles"),
  };
}

// Append the content as a callout (with a date mention) to a Notion page.
async function addCalloutToNotionPage(page_id, content, date) {
  const response = await notion.blocks.children.append({
    block_id: page_id,
    children: [{
      callout: {
        rich_text: [{
          type: "mention",
          mention: { type: "date", date: date },
        }],
        icon: {
          type: "external",
          external: { url: "https://www.notion.so/icons/circle-dot_lightgray.svg" },
        },
        children: [{
          paragraph: { rich_text: [{ text: { content: content } }] },
        }],
      },
    }],
  });
  console.log(JSON.stringify(response));
}

// Create a new page in a Notion database with the content as its title.
async function addPageToNotionDatabase(database_id, content) {
  await notion.pages.create({
    parent: { type: "database_id", database_id: database_id },
    properties: {
      Name: { title: [{ text: { content: content } }] },
    },
  });
}

export default async function (interval: Interval) {
  const tasks = await todoistapi.getTasks({ projectId: add_to_notion_todoist_project_id });
  for (const task of tasks) {
    console.log(task);
    const [mappedNotionType, mappedNotionId] = getNotionId(task.sectionId);
    if (mappedNotionId) {
      if (mappedNotionType == "page") {
        await addCalloutToNotionPage(mappedNotionId, task.content, convertDateObject(task.due));
      } else if (mappedNotionType == "database") {
        await addPageToNotionDatabase(mappedNotionId, task.content);
      }
      // Remove the task from Todoist once it has been mirrored to Notion.
      await todoistapi.deleteTask(task.id);
    }
  }
}
```
- The script either adds a page with the content as its title to a database (which I use with the Notion Reference Manager), or adds the content as a callout with a date to a page.
- Uses a single project in Todoist, with sections identifying which page/database the content goes into (see the ID-listing sketch below).
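If you need the project and section IDs for the placeholders above, here's a quick sketch. Hedged: I'm assuming `getProjects`/`getSections` accept these arguments in the client version you're using; check the library's current signatures:

```ts
import process from "node:process";
import { TodoistApi } from "npm:@doist/todoist-api-typescript";

const api = new TodoistApi(process.env.TODOIST_API_KEY);

// Print every project and its sections so you can copy the ids.
const projects = await api.getProjects();
for (const project of projects) {
  console.log(project.id, project.name);
  const sections = await api.getSections(project.id);
  for (const section of sections) {
    console.log("  section:", section.id, section.name);
  }
}
```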
-
Apps for 2024
For 2024, I am sticking to Notion, Todoist, Raindrop, Reader, and Google Keep.
My list of apps to check out later has shortened -- Tana and Heptabase for notes, Superlist for todos, and Matter if they have native Android + tweet capture in 2025, I guess.
I think what helped was realizing that I want something that is online-first (that is, simultaneous sync across devices is a major thing) and collaboration-first. That removed Capacities from the list. I don't know if Tana plans to have collaboration on objects (rather than the workspace).