Usage Guide
This guide covers the basic usage of the LLM-ready API: setting up the API, making requests and handling responses with the kFinance Python Library, and popular LLM integrations using code generation and function calling.
You can also use the LLM-ready API by calling the API directly. For more information, check out the OpenAPI Specification.
We provide example Playground Notebooks (opens in a new tab) as tools to walk through the setup.
Setup and Installation
Before you begin, make sure you have access to the LLM-ready API, your authentication credentials, and Python 3.10.0 or greater installed.
To install the kFinance Python Library:
pip install https://kfinance.kensho.com/static/kensho_finance.tar.gz
To upgrade to the latest available version:
pip install --upgrade https://kfinance.kensho.com/static/kensho_finance.tar.gz
For more details on the library, see the Python Library page.
Basic Usage
The LLM-ready API organizes data around a Ticker object. The Ticker object represents the combination of the company, the security, and the specific instance of the security trading on an exchange. Below is an example of how to retrieve information using the Ticker object:
from kensho_finance.kfinance import Client
# Authenticate with one of the three methods below.
client = Client()
client = Client(refresh_token="your_refresh_token_here")
client = Client(client_id="your_client_id_here", private_key="your_private_key_here")
# Instantiate a company's ticker object
microsoft = client.ticker("MSFT")
# Basic company information
microsoft.info
# Microsoft quarterly income statement
microsoft.income_statement(period_type="quarterly", start_year=2022, start_quarter=4, end_year=2024, end_quarter=1)
# Microsoft historical prices aggregated over a week
microsoft.history(start_date="2024-01-01", end_date="2024-07-01", periodicity="week")
For a complete list of functions, see the Python library documentation (opens in a new tab).
Integrations with LLMs
The LLM-ready API supports both code generation and function calling as data access patterns. With code generation, the LLM generates Python code that interacts with the API, which the LLM application or user must execute in a Python environment. With function calling, the LLM directly invokes API operations within the interaction. Code generation offers more flexibility and control, while function calling is simpler and limited to predefined capabilities.
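The difference between the two patterns can be sketched in a few lines of plain Python. This is an illustrative sketch only: the generated code, the `get_price` function, and the tool-call shape below are hypothetical stand-ins, not part of the kFinance library.

```python
# Pattern 1: code generation -- the model returns Python source as text,
# which the application (or the user) must execute itself.
generated_code = "result = 2 + 2"  # stands in for real model output
namespace = {}
exec(generated_code, namespace)
assert namespace["result"] == 4

# Pattern 2: function calling -- the model returns a structured call
# (a function name plus arguments) that the application dispatches to
# a predefined function. The capability set is fixed up front.
def get_price(ticker: str) -> float:
    return 123.45  # placeholder data, not a real quote

TOOLS = {"get_price": get_price}
tool_call = {"name": "get_price", "arguments": {"ticker": "MSFT"}}
result = TOOLS[tool_call["name"]](**tool_call["arguments"])
```

The flexibility/simplicity trade-off follows directly: pattern 1 can express anything Python can, while pattern 2 can only ever invoke keys of `TOOLS`.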
See examples of both code generation and function calling in the Playground Notebooks (opens in a new tab).
Code Generation
An example of a single pass of model prompt-response:
# First, follow the previous steps from the Usage Guide
# Ask your question
question = "FAANG companies annual revenues over the last 5 years?"
# Combine your question with the prompt into one string
input_prompt = f"{client.prompt}\nQUESTION: {question}"
from openai import OpenAI
OpenAI_Client = OpenAI(api_key=<Your OpenAI key>)  # replace with your own key
open_ai_model_response = OpenAI_Client.chat.completions.create(
model="gpt-4o",
messages=[
{
"role": "user",
"content": input_prompt,
}
],
temperature=0,
seed=0,
).choices[0].message.content
print(open_ai_model_response)
Then execute the resulting code in a Python environment.
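One way to execute the response is to pull the code out of the model's reply and run it in a namespace that already holds your authenticated client. The helper below is a minimal sketch, not part of the kFinance library; real model responses vary in format, and executing untrusted generated code should be done in a sandboxed environment.

```python
import re

def extract_code(response: str) -> str:
    """Pull the first fenced Python block out of a model response;
    fall back to treating the whole response as code."""
    match = re.search(r"```(?:python)?\n(.*?)```", response, re.DOTALL)
    return match.group(1) if match else response

# Simulated model response; a real one would contain kFinance calls.
response = "Here you go:\n```python\nrevenue = 100 * 5\n```"
namespace = {}  # in practice, include your kFinance `client` here
exec(extract_code(response), namespace)
print(namespace["revenue"])  # -> 500
```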
When incorporated as a tool in an LLM application (with other tools orchestrated as a system), the LLM-ready API can drive richer and more complex end-user interactions.
Function Calling
We provide kfinance_client.tools as external tools that can be connected to models, enabling them to fetch data, take actions, and handle responses.
To learn more, check out documentation from OpenAI (opens in a new tab), Anthropic (opens in a new tab), and Google (opens in a new tab).
Below is an example of how to integrate the LLM-ready API using function calling with an OpenAI model (the same pattern applies to Claude and Gemini with their respective tool formats):
from openai import OpenAI
OpenAI(api_key=<Your OpenAI key>).chat.completions.create(
model=<Your model of choice>,
messages=<Your system prompt and other chat calls>,
tools=kfinance_client.openai_tool_descriptions
)
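When the model elects to call a tool, the response carries structured calls rather than text, and your application must dispatch each one to the matching function. The sketch below assumes the OpenAI tool_calls format; `get_income_statement` is a hypothetical placeholder, and with kFinance you would register the library's own tool callables instead.

```python
import json

# Hypothetical local implementation keyed by tool name.
def get_income_statement(ticker: str, period_type: str) -> dict:
    return {"ticker": ticker, "period_type": period_type}  # placeholder

TOOLS = {"get_income_statement": get_income_statement}

# Shape of one entry in response.choices[0].message.tool_calls,
# simulated here; a real entry comes back from the API.
tool_call = {
    "function": {
        "name": "get_income_statement",
        "arguments": json.dumps(
            {"ticker": "MSFT", "period_type": "quarterly"}
        ),
    }
}

fn = TOOLS[tool_call["function"]["name"]]
args = json.loads(tool_call["function"]["arguments"])
result = fn(**args)  # the result is then sent back to the model
```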