Show HN: HNping – 'remind me later' for HN via web push
Show HN (score: 8)

Description
I built HNping because I kept stumbling on HN posts where the discussion hadn't really gotten going yet. I wanted to revisit them later, but didn't want to create even more bookmarks I'd just forget about. So I created a 'remind me later' tool (like the Reddit bot) to fix this for myself.
To use it: go to hnping.com, enable notifications, and drag the bookmarklet to your bookmarks bar. Then click it on any HN post to set a reminder (5 minutes to 1 week). No personal info needed - you just get a UUID that serves as your account.
I tried to make it as simple as possible.
It's built on a Cloudflare Worker with D1 for data storage and uses Firebase Cloud Messaging for push notifications.
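For anyone curious how those pieces could fit together, here is a minimal, hypothetical sketch in TypeScript (not HNping's actual code): one Worker route writes a reminder row into D1, and a cron-triggered handler later finds due reminders and delivers them through FCM's HTTP v1 API. The table layout, the /remind route, and the FCM_PROJECT_ID / FCM_ACCESS_TOKEN bindings are assumptions made up for illustration.

    // Hypothetical sketch only; HNping's real schema and endpoints aren't published here.
    // Types like D1Database come from @cloudflare/workers-types.
    export interface Env {
      DB: D1Database;
      FCM_PROJECT_ID: string;   // assumed binding
      FCM_ACCESS_TOKEN: string; // assumed binding; in practice minted from a service account
    }

    export default {
      // Assumed endpoint a bookmarklet could call: POST /remind { userId, itemUrl, delayMinutes }
      async fetch(request: Request, env: Env): Promise<Response> {
        const { userId, itemUrl, delayMinutes } = (await request.json()) as any;
        const remindAt = Date.now() + delayMinutes * 60_000;
        await env.DB
          .prepare("INSERT INTO reminders (user_id, item_url, remind_at) VALUES (?, ?, ?)")
          .bind(userId, itemUrl, remindAt)
          .run();
        return new Response("ok");
      },

      // Cron trigger: push any reminders that have come due, then clear them.
      async scheduled(_controller: unknown, env: Env): Promise<void> {
        const now = Date.now();
        const { results } = await env.DB
          .prepare(
            "SELECT r.item_url, t.fcm_token FROM reminders r " +
            "JOIN tokens t ON t.user_id = r.user_id WHERE r.remind_at <= ?"
          )
          .bind(now)
          .all();
        for (const row of (results ?? []) as any[]) {
          await fetch(`https://fcm.googleapis.com/v1/projects/${env.FCM_PROJECT_ID}/messages:send`, {
            method: "POST",
            headers: {
              Authorization: `Bearer ${env.FCM_ACCESS_TOKEN}`,
              "Content-Type": "application/json",
            },
            body: JSON.stringify({
              message: {
                token: row.fcm_token,
                notification: { title: "HN reminder", body: row.item_url },
              },
            }),
          });
        }
        await env.DB.prepare("DELETE FROM reminders WHERE remind_at <= ?").bind(now).run();
      },
    };

In a real deployment the FCM access token would be minted per-request from a service account rather than stored as a static secret, but the shape of the flow stays the same.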
More from Show HN
Show HN: Easy alternative to giflib – header-only decoder in C
Hi HN, I made a lightweight, header-only GIF decoder in C, inspired by stb-style libraries. No dynamic allocation, portable, and optimized for embedded devices.

GitHub: https://github.com/Ferki-git-creator/TurboStitchGIF-HeaderOnly-Fast-ZeroAllocation-PlatformIndependent-Embedded-C-GIF-Decoder

Would love feedback or suggestions.
Show HN: OrioleDB Beta12 Features and Benchmarks
Hey HN, I'm the creator of OrioleDB, an extension for PostgreSQL that serves as a drop-in replacement for the default Heap storage engine. It is designed to address scalability bottlenecks in PostgreSQL's buffer manager and reduce the WAL, enabling better utilization of modern multi-core CPUs and high-performance storage systems.

We are getting closer to GA. This release includes:

- An index bridge to support all indexes that Heap supports
- Support for rewinding recent changes in the database
- Tablespaces support
- Fillfactor support
- An orioledb_tree_stat() function for space utilization statistics
- Support for tables with more than 32 columns

We also show several performance improvements using the TPC-C benchmarks. Overall, OrioleDB is much faster than Heap, also outperforming other Postgres providers.

We would love more people testing OrioleDB. The fastest way to do that is to use the Docker image provided:

    docker run -d --name orioledb -p 5432:5432 orioledb/orioledb

Read the full release here: https://www.orioledb.com/blog/orioledb-beta12-benchmarks
Show HN: BloomSearch – Keyword search with hierarchical bloom filters
Hey HN! I got nerd-sniped by Bloom filters this weekend, specifically for searching datasets with high "cardinality" (number of unique items).

They're an _amazing_ data structure that, at a fixed size, tracks potential set membership. That means unlike normal B-tree indexes, they don't grow with the number of unique items in the dataset.

This makes them great for "needle in a haystack" search (logs, documents), as implementations like VictoriaMetrics and Bing's BitFunnel show. I've used them in the past, but they've never been center-stage in my projects.

I wanted high-cardinality keyword search for ANOTHER project... and, well, down the yak-shaving rabbit hole we go!

BloomSearch brings this into an extensible Go package:

- Very memory efficient via bloom filters and streaming row scans
- DataStore and MetaStore interfaces for any backend (can be the same or separate)
- Hierarchical pruning via partitions, minmax indexes, and of course bloom filters
- Search by field, token, or field:token with complex combinators
- Disaggregated storage and compute for unbounded ingest and query throughput

And of course, you know I had to make a custom file format ^-^ (FILE_FORMAT.MD)

BloomSearch is optimized for massive concurrency, arbitrary cardinality and dataset size, and super low memory usage. There's still a lot on the table in terms of size and performance optimizations, but I'm already super pleased with it. With distributed query processing I'm targeting >100B rows/s over large datasets.

I'm also excited to replace our big logging bill: ~$0.003/GB for log storage with infinite retention and guilt-free querying :P
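Since the fixed-size membership test is the whole trick here, a tiny sketch may help. This is a toy TypeScript illustration of the concept, not BloomSearch's actual Go API, and it uses a simple salted FNV-1a hash purely for demonstration: the filter answers "definitely absent" or "possibly present", which is exactly what lets a query skip whole blocks of rows without reading them.

    // Toy bloom filter: a fixed-size bit array plus k salted hashes.
    // The size never grows with the number of distinct items added.
    class BloomFilter {
      private bits: Uint8Array;

      constructor(private sizeBits: number, private numHashes: number) {
        this.bits = new Uint8Array(Math.ceil(sizeBits / 8));
      }

      // Salted FNV-1a; real implementations use stronger hash functions.
      private hash(value: string, seed: number): number {
        let h = 2166136261 ^ seed;
        for (let i = 0; i < value.length; i++) {
          h ^= value.charCodeAt(i);
          h = Math.imul(h, 16777619);
        }
        return (h >>> 0) % this.sizeBits;
      }

      add(value: string): void {
        for (let k = 0; k < this.numHashes; k++) {
          const bit = this.hash(value, k);
          this.bits[bit >> 3] |= 1 << (bit & 7);
        }
      }

      // false = definitely absent; true = possibly present (false positives happen).
      mightContain(value: string): boolean {
        for (let k = 0; k < this.numHashes; k++) {
          const bit = this.hash(value, k);
          if ((this.bits[bit >> 3] & (1 << (bit & 7))) === 0) return false;
        }
        return true;
      }
    }

    // Usage: index the tokens of one block of rows, then skip the whole block
    // at query time whenever the filter says the search token is absent.
    const filter = new BloomFilter(8 * 1024, 4);
    ["error", "timeout", "user_id:42"].forEach((t) => filter.add(t));
    console.log(filter.mightContain("timeout")); // true
    console.log(filter.mightContain("panic"));   // false (or, rarely, a false positive)

Hierarchical pruning then stacks filters like this per partition and per block, so the cost of a miss stays close to zero even as cardinality grows.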
Show HN: Kuvasz – an open-source uptime and SSL monitoring service
A few months ago I took my side project - an uptime & SSL monitoring service - out of the drawer. I decided to give it a new life and completely overhauled it, added a lot of new features, and most importantly, a UI.

Highlights:

- configurable uptime & SSL monitoring
- Telegram, Slack, PagerDuty & e-mail notifications (more to come!)
- fully-fledged REST API
- a responsive, modern & fast UI
- monitors are optionally configurable via a single YAML file, or you can choose to use either the UI or the API to maintain them
- cloud-native, distributed as amd64 and arm64 images
- only one dependency: a PostgreSQL database to connect to
- extensive examples in the docs
- stable memory usage (max ~360MB) & great performance

It's written in Kotlin; under the hood it uses Micronaut with Netty, jOOQ, and PostgreSQL, and the server-side-rendered UI is built with kotlinx.html, Alpine.js, and htmx.

It's called Kuvasz (pronounced [ˈkuvɒs]), and you can find the repository here: https://github.com/kuvasz-uptime/kuvasz

The website with extensive documentation is here: https://kuvasz-uptime.dev