
Conversation

@Hanzzalaaa

…und:

  • playground.py with MockLLM for semantic_extract and semantic_join
  • canned_responses.json with sample mock outputs
  • README.md with quickstart guide and usage instructions

This helps new users try fenic workflows locally without API setup.
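A minimal sketch of what a playground MockLLM might look like, assuming the class and method names above (fenic's real internals may differ; `complete` and the constructor signature here are illustrative):

```python
import json

class MockLLM:
    """Hypothetical stand-in for an LLM client: replays canned responses
    instead of making API calls. Names are illustrative, not fenic's API."""

    def __init__(self, canned):
        # `canned` maps an operation name (e.g. "semantic_extract") to a
        # fixed response, as might be loaded from canned_responses.json.
        self.canned = dict(canned)

    def complete(self, operation, prompt):
        # Return the canned response for this operation, ignoring the prompt;
        # fail loudly if the playground asks for an operation we didn't stub.
        try:
            return self.canned[operation]
        except KeyError:
            raise KeyError(f"no canned response for operation {operation!r}")

# In the playground the canned outputs would come from disk:
# with open("canned_responses.json") as f:
#     llm = MockLLM(json.load(f))
llm = MockLLM({"semantic_extract": '{"title": "Example"}'})
print(llm.complete("semantic_extract", "Extract the title from: ..."))
```

Keying responses by operation rather than by exact prompt keeps the JSON file small and makes the mock insensitive to prompt wording changes.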

@YoungVor
Contributor

YoungVor commented Sep 6, 2025

Hey Hanzala, thank you for joining the fenic community, we're excited to see what folks can do with fenic!

I'm curious about the vision for the playground and the use cases you have in mind.
If it's a cost concern, you can set up a Google account with the free tier and play with the semantic layer in fenic.
If it's more of a security or privacy concern, that's a reason for us to explore integrating with local options (like ollama, or litellm with a proxy server running a model).

There could be some benefits to mocking the LLM response from a testing perspective. For example, if we wrote MockLLM as a MockOpenAIProvider (and replaced the provider/client classes with mocks), we could test that the semantic layer handles complex queries correctly without incurring completion costs.
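A rough sketch of that testing idea, assuming a provider class with a `complete` method that the semantic layer calls (the class name, the `fenic.providers` module path in the comment, and the method signature are all assumptions for illustration):

```python
from unittest import mock  # used to patch the real provider in a test, see below

class MockOpenAIProvider:
    """Hypothetical drop-in for the real OpenAI provider class: returns a
    fixed completion so semantic-layer tests run offline and cost nothing."""

    def __init__(self, fixed_response='{"category": "billing"}'):
        self.fixed_response = fixed_response
        self.calls = []  # record prompts so tests can assert on what was sent

    def complete(self, prompt, **kwargs):
        # Accept and ignore generation kwargs (temperature, model, etc.)
        # so the mock matches however the semantic layer invokes it.
        self.calls.append(prompt)
        return self.fixed_response

# In a test, swap the real provider for the mock (module path is illustrative):
# with mock.patch("fenic.providers.OpenAIProvider", MockOpenAIProvider):
#     result = run_semantic_query(...)
provider = MockOpenAIProvider()
assert provider.complete("classify this ticket") == '{"category": "billing"}'
assert provider.calls == ["classify this ticket"]
```

Recording the prompts lets a test assert not just that the query plan produces the right output, but that the semantic layer built the expected prompts along the way.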
