SupStack started as a passion project to make sense of the supplement world. Here's how it organizes the research to help you decide what's worth trying.
Finds and summarizes the actual studies behind each supplement, so you don't have to dig through journals yourself.
Presents key information — dosing, effects, safety — in a clear format so you can quickly understand what matters.
Compares options side-by-side and matches supplements to your goals, so you can see what fits your needs.
25 curated supplement protocols for specific conditions, each with dosage, timing, contraindications, and PubMed citations. All reviewed for compliance with the FDA's DSHEA regulations.
Not every supplement makes the cut. Here's how I decide what to include in SupStack:
Have a supplement you'd like to see added? Suggest it in the library and it'll be considered for future updates.
Each supplement gets a simple score from 1-10 based on how strong the research is. Here's what the numbers mean:
Multiple high-quality meta-analyses and RCTs with consistent, significant results. Effect sizes are meaningful and replicated across populations.
Several RCTs and at least one meta-analysis showing positive effects. Some inconsistency in results, but the overall evidence is supportive.
The evidence score combines several factors to give you a quick sense of how solid the research is:
Randomized controlled trials (RCTs) and systematic reviews carry the most weight. Animal studies and anecdotal reports are noted but don't drive the score.
Every claim on SupStack is backed by real data. Here's the scale of what's behind the site:
225 Supplements
8,020+ PubMed Studies
670+ Drug Interactions
1,800+ Medical Claims Verified
Pulling studies off PubMed is just the start. Every supplement goes through a multi-step validation pipeline before anything goes live:
Studies are pulled from PubMed using supplement names, aliases, and scientific names. Each study is matched, ranked by quality, and tagged with DOI links so you can read the originals.
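A query like the one described above can be sketched with NCBI's public E-utilities API. The query shape here (ORing the supplement name and its aliases over title/abstract) is an assumption about how such a pipeline might work, not SupStack's actual code:

```python
from urllib.parse import urlencode

# NCBI E-utilities esearch endpoint (public PubMed search API)
EUTILS = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def pubmed_search_url(name: str, aliases: list[str], retmax: int = 100) -> str:
    """Build an esearch URL that ORs a supplement's name and aliases.

    The [Title/Abstract] field restriction and OR structure are
    illustrative assumptions, not SupStack's real query.
    """
    term = " OR ".join(f'"{t}"[Title/Abstract]' for t in [name, *aliases])
    params = {"db": "pubmed", "term": term, "retmode": "json", "retmax": retmax}
    return EUTILS + "?" + urlencode(params)

url = pubmed_search_url("ashwagandha", ["Withania somnifera"])
```

Fetching that URL returns JSON with matching PubMed IDs, which could then be ranked and linked by DOI as the text describes.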
Knowing if something works is only half the picture. Each supplement also gets a safety rating:
Well-established safety profile with minimal side effects at recommended doses. Safe for most adults.
Generally safe but with notable drug interactions or contraindications for specific populations.
Significant potential for side effects or interactions. Medical supervision recommended.
Pick what you're trying to improve, and SupStack shows you which supplements have research supporting that goal. The match score is weighted by your priorities:
Each supplement has a score for how well it addresses each goal, based on the research. Your priorities determine the final match percentage.
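Under the hood, that combination can be sketched as a weighted average. The exact formula below is an illustrative guess, not SupStack's published method:

```python
def match_percentage(goal_scores: dict[str, float],
                     priorities: dict[str, float]) -> float:
    """Weight each goal score (0-10) by the user's priority for that goal.

    Hypothetical formula for illustration only; SupStack's actual
    weighting may differ.
    """
    total_weight = sum(priorities.values())
    if total_weight == 0:
        return 0.0
    weighted = sum(goal_scores.get(goal, 0.0) * w
                   for goal, w in priorities.items())
    # Scale the 0-10 weighted average up to a 0-100 percentage.
    return round(weighted / total_weight * 10, 1)

score = match_percentage(
    {"sleep": 8, "stress": 6},
    {"sleep": 2, "stress": 1},  # sleep matters twice as much to this user
)
```

A supplement strong on your top-priority goal thus outranks one that is merely decent across the board.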
Research tells you what works on average — but you're not average. SupStack lets you run structured personal experiments to find out if a supplement actually works for you.
Pick a supplement you haven't started yet, select a goal (e.g. better sleep), and SupStack generates a protocol with recommended dose, timing, and duration based on the research.
You start with a baseline survey before taking anything. Then you follow the protocol and do periodic check-ins. At the end, SupStack compares your responses to your baseline and gives you a personal verdict: helped, no clear effect, or made things worse.
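The baseline-versus-check-in comparison could work roughly like the sketch below; the 1-point threshold and the 1-10 scoring scale are assumptions for illustration, not SupStack's actual cutoffs:

```python
def verdict(baseline: float, checkins: list[float],
            min_change: float = 1.0) -> str:
    """Compare average check-in scores (e.g. 1-10 sleep quality)
    against the pre-supplement baseline.

    The min_change threshold is an illustrative assumption.
    """
    if not checkins:
        return "no clear effect"
    delta = sum(checkins) / len(checkins) - baseline
    if delta >= min_change:
        return "helped"
    if delta <= -min_change:
        return "made things worse"
    return "no clear effect"

result = verdict(5.0, [6.5, 7.0, 6.0])  # average 6.5 vs baseline 5.0
```

Small fluctuations around the baseline land in "no clear effect", so only a consistent shift produces a positive or negative verdict.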
The scoring system and goal matching are works in progress. If you have ideas for how to improve them — better ways to weight the research, additional factors to consider, or goals that should be added — I'd genuinely love to hear them.
This project gets better with feedback. Reach out at feedback@supstack.me or open an issue on GitHub.
Supplement research comes with its own vocabulary. Throughout the site, hover over underlined terms for quick definitions. Here are some common ones:
Bioavailability: How much of a supplement actually gets absorbed and used by your body.
Half-life: How long a substance stays active in your system before being eliminated.
Adaptogens: Natural compounds that help your body adapt to stress and maintain balance.
Nootropics: Substances that enhance cognitive function like memory, focus, or creativity.
Limited RCTs with mixed results or primarily observational studies. Promising but more research needed.
Mostly animal studies, mechanistic research, or very limited human trials. Theoretical benefits not yet confirmed.
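Taken together, the four evidence tiers amount to a simple lookup from score to label. The numeric cutoffs below are illustrative; SupStack's exact boundaries aren't stated:

```python
def evidence_tier(score: int) -> str:
    """Map a 1-10 evidence score to a tier label.

    The cutoffs (9, 7, 4) are illustrative guesses, not SupStack's
    actual internal thresholds.
    """
    if not 1 <= score <= 10:
        raise ValueError("score must be between 1 and 10")
    if score >= 9:
        return "Strong: replicated meta-analyses and RCTs"
    if score >= 7:
        return "Good: several RCTs, at least one meta-analysis"
    if score >= 4:
        return "Mixed: limited RCTs or mostly observational data"
    return "Preliminary: mostly animal or mechanistic studies"

tier = evidence_tier(8)
```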
A statistically significant result doesn't always mean much in practice. The effect size tells you whether the benefit is actually noticeable.
One study is interesting. Multiple studies showing the same thing? That's more convincing.
Who funded the study? Was it double-blind with a placebo? These details affect how much you can trust the results.
An automated audit checks every supplement across 11 categories: dosages against clinical ranges, safety ratings against reported side effects, drug interactions against known contraindications, and more. Issues get flagged by severity — critical, warning, or informational.
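One category of that audit might look like the sketch below. The field names, clinical range, and severity rule are hypothetical; only the critical/warning/informational severity levels come from the description above:

```python
from dataclasses import dataclass

@dataclass
class Issue:
    category: str
    severity: str  # "critical" | "warning" | "info"
    message: str

def check_dosage(supplement: dict,
                 clinical_range: tuple[float, float]) -> list[Issue]:
    """Sketch of one audit category: flag a recommended dose outside
    the clinical range. Field names and the severity rule are
    illustrative assumptions, not SupStack's actual checks.
    """
    low, high = clinical_range
    dose = supplement["recommended_dose_mg"]
    issues = []
    if dose > high:
        issues.append(Issue("dosage", "critical",
                            f"dose {dose}mg exceeds clinical max {high}mg"))
    elif dose < low:
        issues.append(Issue("dosage", "warning",
                            f"dose {dose}mg below clinical min {low}mg"))
    return issues

flags = check_dosage({"recommended_dose_mg": 900.0}, (300.0, 600.0))
```

Running eleven such checks per supplement and collecting the `Issue` lists gives a severity-sorted audit report.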
After enrichment, the actual study findings are compared against every supplement's claims. Safety keywords are scanned for concerns, dosage mentions are extracted and compared, and positive vs. negative findings are tallied against evidence scores. This is how the pipeline catches things like a safety rating that's too generous or an evidence score that doesn't match the research.
The pipeline runs on every update. New studies get pulled, validations re-run, and scores get adjusted. If a meta-analysis comes out that changes the picture, the data catches up.
Most people try a supplement and “feel like” it helps — or doesn't. A structured experiment with a real baseline removes guesswork. It's not a clinical trial, but it's far better than vibes.
Important: Experiments are designed for supplements you haven't started yet. If you're already taking something, the baseline won't reflect your true starting point, and the results won't be meaningful.