LLM-ready API Usage Guide

This guide covers basic usage of the LLM-ready API: how to set it up, how to make requests and handle responses using the kFinance Python Library, and how to integrate with LLMs via code generation, function calling, and function calling through LangChain.

You can also use the LLM-ready API by calling the API directly. For more information, check out the OpenAPI Specification.
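For illustration, a direct call might look like the sketch below. The host, endpoint path, and token here are placeholders rather than actual routes; consult the OpenAPI Specification for the real endpoints and authentication scheme:

import requests

# Placeholder host, path, and token for illustration only; the real
# routes and auth scheme are defined in the OpenAPI Specification.
BASE_URL = "https://<api-host>/api/v1"
headers = {"Authorization": "Bearer <your_access_token>"}

response = requests.get(f"{BASE_URL}/info/MSFT", headers=headers)
response.raise_for_status()
print(response.json())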

We provide example Playground Notebooks that walk through the setup.

Setup and Installation

Before you begin, make sure you have access to the LLM-ready API, your authentication credentials, and Python 3.10.0 or greater installed.

To install the kFinance Python Library:

pip install kensho-kfinance

To upgrade to the latest available version:

pip install --upgrade kensho-kfinance

For more details on the library, see the Python Library page.

Basic Usage

The LLM-ready API organizes data around a Ticker object. The Ticker object represents the combination of the company, the security, and the specific instance of the security trading on an exchange. Below is an example of how to retrieve information using the Ticker object:

from kfinance.kfinance import Client

# Authenticate with one of the three methods below:
# browser-based login
kfinance_client = Client()
# login with a refresh token
kfinance_client = Client(refresh_token="your_refresh_token_here")
# automated login with a client ID and private key
kfinance_client = Client(client_id="your_client_id_here", private_key="your_private_key_here")
 
# Instantiate a company's ticker object
microsoft = kfinance_client.ticker("MSFT")
 
# Basic company information
microsoft.info
 
# Microsoft quarterly income statement
microsoft.income_statement(period_type="quarterly", start_year=2022, start_quarter=4, end_year=2024, end_quarter=1)
 
# Microsoft historical prices aggregated over a week
microsoft.history(start_date="2024-01-01", end_date="2024-07-01", periodicity="week")

For a complete list of functions, see the Python library documentation.

Integrations with LLMs

The LLM-ready API supports code generation, function calling, and function calling via LangChain as data access patterns. With code generation, the LLM generates Python code that interacts with the API, which the LLM application or user must then execute in a Python environment. With function calling, the LLM invokes API operations directly within the interaction. Code generation offers more flexibility and control, while function calling is simpler but limited to predefined capabilities.

See examples of code generation, function calling, and function calling via LangChain in our Playground Notebooks.

Code Generation

With code generation, the LLM is instructed to answer natural language queries by generating code that uses the functions made available to it. Each function is defined within kfinance_client.prompt, including its name, parameters, return type, and purpose.
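
To see exactly what the model receives, you can inspect the prompt directly (a quick optional check):

# Print the function definitions that will be included in the prompt
print(kfinance_client.prompt)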

Below is an example of a single model prompt-response pass:

from openai import OpenAI

# First, follow the previous steps from the Usage Guide

# Ask your question
question = "FAANG companies annual revenues over the last 5 years?"

# Combine your question with the prompt into one string
input_prompt = f"{kfinance_client.prompt}\nQUESTION: {question}"

openai_client = OpenAI(api_key=<Your OpenAI key>)  # replace with your own key
open_ai_model_response = openai_client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {
            "role": "user",
            "content": input_prompt,
        }
    ],
    temperature=0,
    seed=0,
).choices[0].message.content
print(open_ai_model_response)

Then execute the resulting code in a Python environment.
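
If you are automating this step, one minimal sketch is below; it assumes the model wraps its answer in a ```python fence, so adjust the pattern to match your prompt's output format:

import re

# Extract the code block from the model response. This assumes the
# model wraps its answer in a ```python ... ``` fence.
match = re.search(r"```python\n(.*?)```", open_ai_model_response, re.DOTALL)
generated_code = match.group(1) if match else open_ai_model_response

# Only execute model-generated code in an environment you trust.
exec(generated_code, {"kfinance_client": kfinance_client})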

When incorporated as a tool in an LLM application (with other tools orchestrated as a system), the LLM-ready API can drive richer and more complex end-user interactions.

Function Calling

We provide kfinance_client.tools as external tools that can be connected to models, enabling them to fetch data, take actions, and handle responses. To learn more, check out the function calling documentation from OpenAI, Anthropic, and Google.

Below is an example of how to integrate the LLM-ready API using function calling with an OpenAI GPT model; the same pattern applies to Claude and Gemini models:

from openai import OpenAI
 
OpenAI(api_key=<Your OpenAI key>).chat.completions.create(
    model=<Your model of choice>,
    messages=<Your system prompt and other chat calls>,
    tools=kfinance_client.openai_tool_descriptions
)
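
After the model responds, the application is responsible for executing any requested tool calls and returning the results. A minimal sketch of that loop is below; it assumes kfinance_client.tools maps tool names to callables, so check the Python library documentation for the exact structure:

import json

from openai import OpenAI

openai_client = OpenAI(api_key=<Your OpenAI key>)
messages = [{"role": "user", "content": "What was MSFT's latest closing price?"}]

response = openai_client.chat.completions.create(
    model="gpt-4o",
    messages=messages,
    tools=kfinance_client.openai_tool_descriptions,
)
messages.append(response.choices[0].message)

# Execute each tool call the model requested and append the results,
# so a follow-up completion can produce the final answer.
# Assumption: kfinance_client.tools maps tool names to callables.
for tool_call in response.choices[0].message.tool_calls or []:
    tool_function = kfinance_client.tools[tool_call.function.name]
    tool_result = tool_function(**json.loads(tool_call.function.arguments))
    messages.append({
        "role": "tool",
        "tool_call_id": tool_call.id,
        "content": str(tool_result),
    })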

Function Calling via LangChain

We support function calling via LangChain with kfinance_client.langchain_tools. Using LangChain's tool/function calling framework allows for greater control over tool execution logic, chaining of multiple calls, and additional reasoning steps.

Below is an example using the LLM-ready API with LangChain's tool calling agent:

from langchain.agents import AgentExecutor, create_tool_calling_agent
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain_openai import ChatOpenAI

openai_llm = ChatOpenAI(model="gpt-4o", api_key=<Your OpenAI key>)

prompt = ChatPromptTemplate.from_messages(
    [
        ("system", <Your base prompt>),
        MessagesPlaceholder("chat_history", optional=True),
        ("human", "{input}"),
        MessagesPlaceholder("agent_scratchpad"),
    ]
)

# Construct the tool calling agent
agent = create_tool_calling_agent(llm=openai_llm, tools=kfinance_client.langchain_tools, prompt=prompt)

# Create an agent executor by passing in the agent and tools
agent_executor = AgentExecutor(agent=agent, tools=kfinance_client.langchain_tools, verbose=True)
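
Once constructed, the executor can be invoked with a natural language query, for example:

# Run the agent; intermediate tool calls are printed because verbose=True
result = agent_executor.invoke({"input": "What was Microsoft's revenue over the last 4 quarters?"})
print(result["output"])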