Add pricing faq for PostHog AI credits #13524
Conversation
> Getting help on how to use PostHog never costs credits.
> All AI features still in beta are also free to use — as of **Oct 31**, that includes **session summaries** and **deep research**.
>
> ### **How credit usage works**
I think this section is confusing. As a customer I read everything but still don't understand the relationship between tokens and credits. If we can't be fully transparent here, I would remove any references to tokens.
I agree this brings up questions like "well, why aren't they just charging me by tokens then?"
That being said, I think this section is important to have. It just needs more detail and reasoning and ideally a conversion calculator or formula. I'm assuming we're charging a slight markup or something!
Also, what about input tokens vs. output tokens? And tokens across different LLM models? This section probably needs an in-depth breakdown of a few different prompts and tasks.
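To illustrate this concern, here's a minimal sketch of how a request's raw inference cost depends on both the model and the input/output token split. The model names and per-million-token prices below are hypothetical, purely for illustration; they are not PostHog's or any provider's actual rates:

```python
# Hypothetical per-1M-token prices (illustrative only, not real rates)
PRICES = {
    "small-model": {"input": 0.15, "output": 0.60},
    "large-model": {"input": 3.00, "output": 15.00},
}

def inference_cost_usd(model: str, input_tokens: int, output_tokens: int) -> float:
    """Raw provider cost for one request, pricing input and output tokens separately."""
    p = PRICES[model]
    return (input_tokens / 1_000_000) * p["input"] + (output_tokens / 1_000_000) * p["output"]
```

The same 10,000-tokens-in / 2,000-tokens-out request would cost $0.0027 on the small model but $0.06 on the large one, which is why a per-task breakdown would help readers.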
If we clarify the set markup, like @kappa90 points out on line 35, I think this becomes fairly clear and user-friendly:
AI credits are based on the underlying token costs. The choice of model is optimized automatically per PostHog AI feature, but in each case our markup is a constant 20% over the LLM provider price. This means that every $1 of PostHog AI credits buys $0.8333 worth of raw inference.
Note that 25% markup would be a pleasant $0.8.
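The arithmetic behind these two figures can be sketched briefly (assuming, as this thread does, that the markup is a flat percentage over the provider price; the percentage itself is a reviewer's assumption, not a confirmed policy):

```python
def inference_per_dollar(markup: float) -> float:
    """Raw inference value bought by $1 of billed credits under a flat markup."""
    return 1 / (1 + markup)

print(round(inference_per_dollar(0.20), 4))  # 0.8333
print(round(inference_per_dollar(0.25), 4))  # 0.8
```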
Clarified in the relevant section below
Since this is a draft, I won't leave detailed comments; this is just more context for you. The convention is to have a "Cutting costs" section in most product sections. It's basically a pricing FAQ. Generally we introduce:
This is just context for you to consider <3
**edwinyjlim** left a comment
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Exciting!
This is still WIP and I didn't consider navigation etc.
I'd focus just on the content! I have a PR coming soon that overhauls the PostHog AI docs and its layout, so we can merge your pricing content into that branch.
> You’ll always see **real-time cost information** while using AI features.
>
> To keep it simple, token costs are converted into **AI credits**, billed at **$0.01 per credit** — so **1,000 credits = $10**.
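Assuming a flat markup over the provider price (the markup percentage is a reviewer's assumption elsewhere in this thread, not confirmed), the provider-cost-to-credits conversion would look like this sketch:

```python
CREDIT_PRICE_USD = 0.01  # $0.01 per credit, so 1,000 credits = $10

def usd_to_credits(provider_cost_usd: float, markup: float = 0.20) -> float:
    """Apply the (assumed) flat markup, then convert the billed amount into credits."""
    return provider_cost_usd * (1 + markup) / CREDIT_PRICE_USD
```

For example, a request with $0.05 of raw provider cost would bill as 6 credits under a 20% markup, i.e. $0.06.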
This hides the markup calculation. We should be clear that we take the token cost given us by the provider, apply a markup and convert to credits.
Clarified this
I think we still don't mention the markup?
**Twixes** left a comment
This will be great to have, potentially to link to from the app too
> ### **What doesn’t use credits**
>
> Getting help on how to use PostHog never costs credits.
I don't think this is strictly true, i.e. if getting help involves a query being created – that costs credits.
Because we have a (we could say generous) free tier, is it perhaps simpler – for both the user and us – to always charge for the root node? Regardless of whether just the docs search tool was involved, insight creation, or no tools at all. Especially with the new agent orchestration based on mode-switching, being prototyped by @skoob13, the tools become "dumb", and it's even trickier to say "this is just getting help" vs. "this is doing some work for the user".
Yeah, completely agree @Twixes. Either we're fully transparent about what we pass along to the end user cost-wise, or we might as well remove this.
The current plan is to always charge for the root node anyway. Another (small) example is the title generator: we use a super small (i.e. fast) model for that simple operation, but we do still plan to pass the cost along to the end user. I would rather have a section explaining what we bill and what we don't, if it's not too technical and prone to becoming outdated (although we could work around that).
The outcome of this discussion is that we've updated the calculation and will not charge for the root node when a user searches the docs.
> ### **How credit usage works**
>
> AI credits are based on **token costs**, which reflect the effort required to complete your request.
Maybe "inference costs" instead of "token costs"? Addresses some concerns raised in comments above + feels more accurate since IIRC some model providers have pricing components that are not purely token based
> While exact usage varies, credit consumption usually scales with value — more advanced tasks cost more but deliver deeper insights and time savings.
>
> You’ll always see **real-time cost information** while using AI features.
Will this real-time cost information include markup?
yes 👍
> Stay on top of your AI credit spend in real time:
>
> 1. Type `/usage` in chat to see the current cost of your conversation
> 2. Check the **Billing & usage** page to view your total monthly consumption
> You’re always in control of your AI credit usage:
> - **Real-time tracking** — see costs as you go
> - **Billing limits** — set hard caps to prevent overspending
Mention the default paid limit? Assuming we're going with it
> You’re always in control of your AI credit usage:
> - **Real-time tracking** — see costs as you go
> - **Billing limits** — set hard caps to prevent overspending
> - **Usage alerts** — get notified when you hit key thresholds
Are these the standard billing alerts when you hit 80% and 100% of your free allowance (if on free plan) or billing limit (if on paid plan)? Or something custom for AI?
Yes, standard. We don't have anything custom planned for now
1 file reviewed, 8 comments
> ### **What doesn’t use credits**
>
> Getting help on how to use PostHog never costs credits.
> All AI features still in beta are also free to use — as of **Oct 31**, that includes **session summaries** and **deep research**.
style: Add year to date for clarity (assuming 2024)
**gewenyu99** left a comment
Nice! Some comments:
This page also needs to be added to the side nav, but I'd suggest we just base this on top of #13614 and do the nav together there.
@edwinyjlim how's that sound
> AI credits are consumed whenever PostHog performs intelligent work for you.
> You’ll spot these features by the ✨ icon or when using the in-app chat.
**Suggested change** (merge the two sentences into one paragraph):

> AI credits are consumed whenever PostHog performs intelligent work for you. You’ll spot these features by the ✨ icon or when using the in-app chat.
@edwinyjlim maybe we rebase this on your other PR and link to the list of actions here (if we have a list)
Co-authored-by: Vincent (Wen Yu) Ge <[email protected]>
@gewenyu99 Thanks for your suggestions, I've committed them all. I left one comment on your calculation comment; feel free to make the necessary change and commit directly if it makes sense for you. I'll hand this over to you and @edwinyjlim now, so you can add it into the main AI docs. Thanks!
Sounds gyuuud. Consider this branch commandeered 🏴☠️ by me
incorporated into #13614