Project Humainity

12 bookmarks
ᴅᴀɴɪᴇʟ ᴍɪᴇssʟᴇʀ 🛡️ on X: 'This is really cool thinking from @trq212 here, but I think I disagree with the solution. He makes a great point about Markdown being more difficult to share and communicate ideas with, because formatting and visuals can make things super easy to understand. My problem with the' / X
·x.com·
How I Use Obsidian + Claude Code to Run My Life
I sit down with my dear friend Vin (Internet Vin) for a deep, hands-on walkthrough of how he uses Obsidian and Claude Code together as a thinking partner, idea generator, and personal operating system. Vin demonstrates live how Claude Code can read, reference, and surface patterns across an entire Obsidian vault of interlinked markdown files, turning years of personal notes into actionable insights, project ideas, and even custom commands. This episode covers everything from the basic setup to advanced workflows like tracing how ideas evolve over time, generating contextual startup ideas, and delegating tasks to autonomous agents. If you are serious about getting the most out of LLMs, this is the episode that shows you how your own writing becomes the fuel.
·youtu.be·
Instead of watching an hour of Netflix, watch this 2-hour Stanford lecture; it will teach you more about how LLMs like ChatGPT and Claude are built than most people working at top AI companies learn in their entire careers.
·x.com·
Stanford CS229 I Machine Learning I Building Large Language Models (LLMs)

This lecture provides a concise overview of building a ChatGPT-like model, covering both pretraining (language modeling) and post-training (SFT/RLHF). For each component, it explores common practices in data collection, algorithms, and evaluation methods. This guest lecture was delivered by Yann Dubois in Stanford’s CS229: Machine Learning course, in Summer 2024.

·youtu.be·
RTX 5090, Mac Studio, or DGX Spark? I tried all three.

What's really happening inside the personal AI computer movement when everyone is defaulting to cloud models but the real power comes from owning the substrate underneath?

The common framing is local versus cloud — but the reality is that this is a routing decision, and the long-term reason to build your own stack is not cost savings but compounding your knowledge over time.

In this video, I share the inside scoop on how to build a personal AI computer that actually works:

• Why memory is the heart of the system and most people get the pipeline side wrong
• How to set up many surfaces with one stack underneath so your editor, notes, browser, and voice all call the same runtime
• What hardware makes sense for the local-first knowledge worker versus the all-local maximalist versus the local-first builder
• Why cloud AI should be a visitor to your system, not dominant across it


·youtu.be·
Retrieval-augmented generation - Wikipedia
Retrieval-Augmented Generation (RAG) is an AI framework that improves Large Language Model (LLM) accuracy by retrieving external data (documents, databases) to ground responses, reducing hallucinations. It involves searching, augmenting prompts, and generating answers, often used for company-specific chatbots, up-to-date information retrieval, and analyzing private documents.
·en.wikipedia.org·
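The retrieve → augment → generate loop the summary describes can be sketched in a few lines. This is a minimal illustration, not a production setup: the keyword-overlap retriever, the toy `corpus`, and the `generate()` stub are stand-ins for a real vector search and a real LLM API call.

```python
# Minimal RAG sketch: retrieve relevant documents, augment the prompt
# with them, then hand the grounded prompt to a generator.

def retrieve(query, corpus, k=2):
    """Score documents by simple keyword overlap and return the top k."""
    q_words = set(query.lower().split())
    scored = sorted(corpus, key=lambda doc: -len(q_words & set(doc.lower().split())))
    return scored[:k]

def augment(query, docs):
    """Ground the prompt in retrieved context to reduce hallucination."""
    context = "\n".join(f"- {d}" for d in docs)
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

def generate(prompt):
    """Stand-in for an LLM call; a real system would query a model here."""
    return f"[model answer based on prompt of {len(prompt)} chars]"

corpus = [
    "RAG grounds LLM answers in retrieved external documents.",
    "Pretraining teaches a model general language patterns.",
    "Company chatbots often use RAG over internal knowledge bases.",
]

prompt = augment("How does RAG reduce hallucinations?",
                 retrieve("RAG hallucinations", corpus))
print(generate(prompt))
```

In a real deployment the retriever would be an embedding-based vector search over a document store, and `generate()` would call a hosted or local model; the pipeline shape stays the same.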
The Future of Computers and Cybersecurity: When Code Becomes Power
From invisible code to real-world consequences: how AI, cyber warfare, and systemic vulnerabilities are reshaping the foundations of modern society.
·thefutureoftheworld.nl·