Front-page articles summarized hourly.
Baochip-1x is a mostly-open, full-custom 22nm SoC for high-assurance applications. It combines a 350MHz VexRiscv CPU with MMU and an I/O processor (BIO) with four 700MHz PicoRV32 cores, plus 4 MiB RRAM and 2 MiB SRAM. It includes a TRNG, crypto accelerators, a secure mesh, glitch sensors, ECC RAM, protected key slots, and one-way counters, and is production-qualified with a mask set. A key differentiator is the MMU, enabling open software like Xous. Some subsystems are closed (AXI, USB PHY, analog blocks), but data paths are inspectable. Open RTL is partial; the project aims for an open silicon-to-software stack.
Could not summarize article.
Vittorio Romeo analyzes C++26 reflection’s compile-time cost. Using GCC 16 in Docker, he benchmarks a baseline against the <meta>, <ranges>, and <print> dependencies, plus several reflection scenarios. -freflection itself adds no overhead; the bottleneck is parsing standard-library headers: about 149 ms for <meta>, about 440 ms for <ranges>, about 1,082 ms for <print>. Reflecting one struct adds about 331 ms; +57 ms more for 10 types; +22 ms more for 20. Precompiled headers (PCHs) or modules dramatically reduce times, though modules sometimes trail PCHs. Conclusion: reflection enables power but raises per-TU overhead; large projects will rely on PCHs/modules and should minimize stdlib dependencies.
A pointer to the redox-os/redox project’s CONTRIBUTING.md guidelines on GitLab, which explain how to contribute to the project.
A Getting Started post on easing entry into Common Lisp with ls-dev-image: a batteries-included OCI container bundling Emacs, SLIME, Quicklisp, Lisp-Stat, sample data sets, and plots. Run docker run --rm -it --user vscode -w /home/vscode ghcr.io/lisp-stat/ls-dev:latest bash, then launch Emacs and M-x slime. A built-in ls-server starts on port 20202, with a web interface at https://localhost:20202 for plots and data-frames; a refresh script keeps the image aligned with upstream Lisp-Stat. GitHub Codespaces is supported; contributions are welcome.
IISc researchers led by Kavita Babu studied C. elegans to understand neuromodulation of social behavior. They found that loss of CASY-1 disrupts pigment dispersing factor signaling and unleashes serotonin-driven swarming. The swarm is self-emergent, even from a single worm, shown with CRISPR mutants and optogenetics, plus modeling with Koç University. The work suggests conserved neuromodulatory control of collective behavior and outlines future tests under varying environmental conditions; published in PNAS (2026).
LoGeR (Long-Context Geometric Reconstruction with Hybrid Memory) scales dense 3D reconstruction to minutes-long videos by chunking streams and bridging them with a hybrid memory: Local Memory with Sliding Window Attention (SWA) preserves high-fidelity local geometry, while Global Memory via Test-Time Training (TTT) enforces long-range consistency and prevents scale drift. This yields sub-quadratic complexity with no post-hoc optimization. In tests up to 19,000 frames, LoGeR achieves strong geometric coherence and reduced drift, outperforming prior feedforward methods on KITTI, VBR, 7-Scenes, ScanNet, and TUM-Dynamics.
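A toy sketch of the local-memory idea above (illustrative only, not the paper’s code): under Sliding Window Attention, frame i attends only to the last W frames, which is what keeps attention cost linear in sequence length rather than quadratic. The window size and frame counts here are made up for illustration.

```python
def sliding_window_mask(n_frames: int, window: int) -> list[list[bool]]:
    """mask[i][j] is True when frame i may attend to frame j.

    Each frame sees only itself and the (window - 1) frames before it,
    so total attention work is O(n * window), not O(n^2).
    """
    return [[(i - window) < j <= i for j in range(n_frames)]
            for i in range(n_frames)]

mask = sliding_window_mask(6, 3)
# Total attended pairs: 1 + 2 + 3 + 3 + 3 + 3 = 15, far below 6*6 = 36.
print(sum(map(sum, mask)))  # → 15
```

The global TTT memory is what the paper layers on top of this, since a pure sliding window alone would let distant chunks drift apart in scale.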
Author notes that macOS Tahoe introduces non-uniform window corner radii: different windows show different radii, e.g., TextEdit vs Calculator, with toolbars making radii more exaggerated. In new Xcode projects, the main window has a smaller radius; adding a toolbar increases it. Other elements like sidebars are affected too. The author criticizes this as a confusing, inconsistent UI change that harms macOS’s reputation for consistency, and mentions a related WebKit bug about scrollbars being cut off by corner radii.
FT homepage headlines focus on the Iran war’s geoeconomic impact—Kharg Island’s oil lifeline and which economies will pay the price—alongside US politics around Trump and Venezuela. In markets, Goldman pitches hedges against corporate loans, and there is talk of emergency oil reserves. In tech, Nvidia-backed AI startup Nscale joins Sheryl Sandberg/Nick Clegg, Anthropic sues the Pentagon, Microsoft adds Anthropic AI to Copilot, and SoftBank’s OpenAI bet weighs on valuations. The page blends global conflict risk, energy economics, and AI/tech industry news.
FUTILE is a minimalist art project: a useless infinite-scroll site that treats endless scrolling as an experiment. On mobile it measures your scroll distance (in mm) and ranks top scrollers, while the void always wins. The bottom is unreachable, and the piece critiques social-media scrolling. It jokes about thumb tendinitis and lost time, suggests a vague mood boost in an invisible metric, and invites you to scroll for the sake of scrolling.
An overview of the rapid evolution of agentic coding around Claude Code, Claude Cowork, and Codex, detailing new features (agent teams, multi-agent swarms, memory, scheduling, webhooks), performance upgrades (Claude Fast), and the economics of token usage. The piece catalogs hackathon results, real-world deployments, and strong user enthusiasm, alongside cautions about safety, privacy, and malware risks (OpenClaw), as well as warnings about dangerous commands and the need for backups. It also discusses broader implications for software development labor, governance, and the shift toward AI-assisted coding at scale.
Christopher Drum surveys Lotus 1-2-3’s rise on PC-DOS, arguing it rewrote what a spreadsheet could be and toppled VisiCalc by leveraging 80-column text, more RAM, and an integrated suite (spreadsheet, graphing, database). He recounts Release 2.x and 3.4, explains the key ideas—A1 references with $, a slash-based interface, built-in graphing, dBase translation, and robust macros and add-ins—that made it a business engine. The piece also chronicles HAL, a natural-language wrapper, and the gritty realities of DOS-era usability, plus emulation efforts (DOSBox-X/86Box) and Lotus’s legacy in modern spreadsheets.
A brief declarative claim of having been somewhere, suggesting a desire to leave a trace or memory of the speaker’s presence.
Microsoft broke the core value of Windows for everyday users by diverting resources to AI and cloud at the expense of a stable, affordable OS. The piece cites a series of missteps—from the 'Recall' privacy risk and TPM 2.0 gatekeeping to forced Copilot integrations and a disruptive Start menu—plus patch instability and the July 2024 CrowdStrike outage, as signs of neglect. With Windows 10 still in use after end of support and Windows 11 adoption lagging, people are sticking with older, riskier software, creating a trust deficit and signaling a shift toward Azure/AI.
RVA23 makes the RISC-V Vector Extension (RVV) mandatory on compliant cores, elevating structured vector parallelism to a baseline for performance. It argues speculative execution dominated CPUs by preserving sequential models but at high power, verification, and security costs. With RVV, data-parallel workloads (AI, ML, DSP) can be parallelized explicitly, improving predictability and energy efficiency. The shift moves the performance center to vector throughput and memory bandwidth, enabling simpler in-order cores paired with capable vector engines while keeping some speculative options. RVA23 ends speculation’s monopoly, not its existence.
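A conceptual sketch of the explicit parallelism the article describes (in Python for illustration, not real RVV intrinsics): RVV loops are vector-length-agnostic. A vsetvl instruction tells the loop how many elements the hardware will handle per trip, and the loop “strip-mines” the remainder, so the parallelism is stated explicitly rather than discovered by speculation. The VLMAX value here is a hypothetical hardware width.

```python
VLMAX = 8  # hypothetical hardware vector length, in elements

def vsetvl(remaining: int) -> int:
    """Model of RVV's vsetvl: grant up to VLMAX lanes for this trip."""
    return min(remaining, VLMAX)

def vector_add(a: list[float], b: list[float]) -> list[float]:
    """Strip-mined loop: the same code runs on any hardware vector width."""
    out = [0.0] * len(a)
    i = 0
    while i < len(a):
        vl = vsetvl(len(a) - i)  # lanes granted this iteration
        # One "vector instruction": all vl lanes proceed in parallel on
        # real hardware; no scalar tail loop is needed.
        out[i:i + vl] = [x + y for x, y in zip(a[i:i + vl], b[i:i + vl])]
        i += vl
    return out

# 10 elements with VLMAX = 8 -> two trips (8 lanes, then 2).
print(vector_add([1.0] * 10, [2.0] * 10))
```

Because the data parallelism is explicit, a simple in-order core can execute this at full throughput without the speculation machinery the article says RVA23 makes optional.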
Debunking the Forbes-linked $5k-per-Claude-Code-user figure, the piece argues API prices are not actual compute costs. Opus 4.6 at $5/1M input and $25/1M output tokens could yield $5k/month in API usage, but that's not Anthropic’s expense. Using OpenRouter prices for similar models, actual costs are roughly 10% of API prices (about $0.39-$2.34 per million tokens), implying the heaviest users cost Anthropic around $500/month, not $4,800. Cursor’s $5k figure reflects paying retail API rates. For most users, costs are modest; fewer than 5% are affected by weekly caps. Inference isn’t the main expense.
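The argument is simple arithmetic, sketched here with only the figures quoted above; the heavy user’s input/output token split is an invented illustration, since only the ~$5k/month retail total is given.

```python
API_INPUT = 5.00    # $ per 1M input tokens (Opus 4.6 list price, per the piece)
API_OUTPUT = 25.00  # $ per 1M output tokens
COST_RATIO = 0.10   # piece's estimate: real inference cost ~10% of list price

def monthly_cost(input_mtok: float, output_mtok: float, ratio: float = 1.0) -> float:
    """Monthly spend in dollars for token volumes given in millions."""
    return (input_mtok * API_INPUT + output_mtok * API_OUTPUT) * ratio

# Hypothetical heavy user whose retail-priced usage reaches ~$5k/month:
heavy_in, heavy_out = 500, 100  # millions of tokens -- illustrative split
retail = monthly_cost(heavy_in, heavy_out)              # what Cursor pays
actual = monthly_cost(heavy_in, heavy_out, COST_RATIO)  # estimated compute cost
print(f"retail ${retail:,.0f}/mo vs estimated compute ${actual:,.0f}/mo")
```

Under these assumptions the same usage that bills $5,000 at retail costs on the order of $500 to serve, which is the article’s core claim.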
Postgres struggles with Top-K queries when filters or text search are involved: B-trees suffice for plain ORDER BY LIMIT but fail with filters, and composite indexes don’t generalize. GIN + B-tree plans can’t preserve ordering, forcing multi-phase, expensive scans. ParadeDB uses a single compound index backed by two data structures: an inverted index and columnar arrays (via Tantivy). Boolean filters intersect document IDs, value lookups are fast and SIMD-accelerated, and Block WAND prunes candidates early. Benchmarks show ParadeDB answering Top-K queries on 100M rows in hundreds of milliseconds versus tens of seconds in Postgres; further optimizations (partitioning, Top-K joins) are planned.
A force-directed word graph visualizes how words define one another. It draws from the 10,000 most common words in Google's Trillion Word Corpus and uses definitions from Open English Wordnet. Created by Wyatt Sell with help from Claude. Users can click or search words to see their definitions and how they help define other terms. The page also shows an out-degree distribution of words by how many definitions they appear in, and highlights high out/in and high in/out ratio words (frequently defined or frequently used in definitions).
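A toy model of the page’s idea (the glossary below is invented for illustration; the real site uses the Google corpus and Open English Wordnet): each word’s definition mentions other words, giving a directed graph, and counting how often a word appears across other words’ definitions reproduces the kind of distribution the page visualizes.

```python
from collections import Counter

# Hypothetical mini-glossary: word -> words used in its definition.
definitions = {
    "water": ["clear", "liquid"],
    "ice": ["water", "solid"],
    "steam": ["water", "gas"],
    "liquid": ["state", "matter"],
}

# Directed edges: defined word -> word mentioned in its definition.
edges = [(word, used) for word, mentions in definitions.items()
         for used in mentions]

# How many definitions each word appears in, i.e. how often it
# "helps define" other terms -- the distribution the page plots.
appearances = Counter(used for _, used in edges)
print(appearances.most_common(1))  # 'water' helps define the most words here
```

In this tiny graph "water" appears in two definitions (ice and steam), making it the analogue of the page’s highly connected hub words.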
Kapwing’s Tess.Design tried to pay artists 50% royalties for AI-generated art by fine-tuning models on their work. Launched May 2024, shut down Jan 2026. Outreach: 325 artists contacted; 6.5% joined. Revenue: $12,172.33 gross against $18,000 in advance royalties and ~$100/mo infrastructure; net loss ~$7k plus development time. Unresolved AI copyright litigation and broad artist hostility limited adoption; a major outlet paused a contract over risk. Lessons: creator onboarding is hard, brand control matters, and timing and regulatory clarity are crucial. Some functionality moved into Kapwing AI.
From IBM PCjr's 1984 Fn to today’s Fn/Globe confusion. Fn started as a way to resurrect missing keys on laptops, then expanded to hardware controls and OS shortcuts. Windows added the Windows key; Apple kept ⌘, ⌥, ⌃ and later added Fn. The iPad/Mac Globe key (🌐) emerged to switch layouts and, from 2021, to manage windows and system tasks; but mappings remain inconsistent across Macs, iPads, keyboards, and remote sessions. No clear plan. Possible routes: standardize, or repurpose Caps Lock as a universal shortcut. The piece urges Apple to rethink and simplify keyboard design.
Made by Johno Whitaker using FastHTML