Core Modules¶
This section contains documentation for the core modules of DeepCritical.
Agents¶
DeepCritical Agents - Pydantic AI-based agent system for research workflows.
This module provides a comprehensive agent system following Pydantic AI patterns, integrating with existing tools and state machines for bioinformatics, search, and RAG workflows.
Classes:
Name | Description |
---|---|
AgentDependencies | Dependencies for agent execution. |
AgentResult | Result from agent execution. |
AgentStatus | Agent execution status. |
AgentType | Types of agents in the DeepCritical system. |
BaseAgent | Base class for all DeepCritical agents following Pydantic AI patterns. |
BioinformaticsAgent | Agent for bioinformatics data fusion and reasoning. |
DeepAgentFilesystemAgent | DeepAgent filesystem agent integrated with DeepResearch. |
DeepAgentGeneralAgent | DeepAgent general-purpose agent integrated with DeepResearch. |
DeepAgentOrchestrationAgent | DeepAgent orchestration agent integrated with DeepResearch. |
DeepAgentPlanningAgent | DeepAgent planning agent integrated with DeepResearch. |
DeepAgentResearchAgent | DeepAgent research agent integrated with DeepResearch. |
DeepSearchAgent | Agent for deep search operations with iterative refinement. |
EvaluatorAgent | Agent for evaluating research results and quality. |
ExecutionHistory | History of agent executions. |
ExecutorAgent | Agent for executing research workflows. |
MultiAgentOrchestrator | Orchestrator for coordinating multiple agents in complex workflows. |
ParserAgent | Agent for parsing and understanding research questions. |
PlannerAgent | Agent for planning research workflows. |
RAGAgent | Agent for RAG (Retrieval-Augmented Generation) operations. |
SearchAgent | Agent for web search operations. |
Functions:
Name | Description |
---|---|
create_agent | Create an agent of the specified type. |
create_orchestrator | Create a multi-agent orchestrator. |
Attributes¶
__all__ module-attribute
¶
__all__ = [
"AgentDependencies",
"AgentResult",
"AgentStatus",
"AgentType",
"BaseAgent",
"BioinformaticsAgent",
"DeepAgentFilesystemAgent",
"DeepAgentGeneralAgent",
"DeepAgentOrchestrationAgent",
"DeepAgentPlanningAgent",
"DeepAgentResearchAgent",
"DeepSearchAgent",
"EvaluatorAgent",
"ExecutionHistory",
"ExecutorAgent",
"MultiAgentOrchestrator",
"ParserAgent",
"PlannerAgent",
"RAGAgent",
"SearchAgent",
"create_agent",
"create_orchestrator",
]
Classes¶
AgentDependencies dataclass
¶
AgentDependencies(
config: dict[str, Any] = dict(),
tools: list[str] = list(),
other_agents: list[str] = list(),
data_sources: list[str] = list(),
)
Dependencies for agent execution.
Attributes:
Name | Type | Description |
---|---|---|
config | dict[str, Any] | |
data_sources | list[str] | |
other_agents | list[str] | |
tools | list[str] | |
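A minimal construction sketch using only the fields documented above; the tool and data source names are placeholders, and the import path is assumed from the source location (DeepResearch/agents.py):
from DeepResearch.agents import AgentDependencies

deps = AgentDependencies(
    config={"temperature": 0.7},   # arbitrary example setting
    tools=["web_search"],          # placeholder tool name
    data_sources=["pubmed"],       # placeholder data source name
)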
AgentResult dataclass
¶
AgentResult(
success: bool,
data: dict[str, Any] = dict(),
metadata: dict[str, Any] = dict(),
error: str | None = None,
execution_time: float = 0.0,
agent_type: AgentType = EXECUTOR,
)
Result from agent execution.
Attributes:
Name | Type | Description |
---|---|---|
agent_type | AgentType | |
data | dict[str, Any] | |
error | str | None | |
execution_time | float | |
metadata | dict[str, Any] | |
success | bool | |
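Results can be constructed and inspected through the fields above; a small sketch with illustrative values (agent_type falls back to its EXECUTOR default):
result = AgentResult(
    success=True,
    data={"answer": "..."},
    execution_time=1.2,
)
if result.success:
    print(result.data)
else:
    print(result.error)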
AgentStatus ¶
Agent execution status.
AgentType ¶
Bases: str, Enum
Types of agents in the DeepCritical system.
BaseAgent ¶
BaseAgent(
agent_type: AgentType,
model_name: str = "anthropic:claude-sonnet-4-0",
dependencies: AgentDependencies | None = None,
system_prompt: str | None = None,
instructions: str | None = None,
)
Bases: ABC
Base class for all DeepCritical agents following Pydantic AI patterns.
This abstract base class provides the foundation for all agent implementations in DeepCritical, integrating Pydantic AI agents with the existing tool ecosystem and state management systems.
Attributes:
Name | Type | Description |
---|---|---|
agent_type | AgentType | The type of agent (search, rag, bioinformatics, etc.) |
model_name | str | The AI model to use for this agent |
_agent | Agent | The underlying Pydantic AI agent instance |
_prompts | AgentPrompts | Agent-specific prompt templates |
Examples:
Creating a custom agent:
class MyCustomAgent(BaseAgent):
def __init__(self):
super().__init__(AgentType.CUSTOM, "anthropic:claude-sonnet-4-0")
async def execute(
self, input_data: str, deps: AgentDependencies
) -> AgentResult:
result = await self._agent.run(input_data, deps=deps)
return AgentResult(success=True, data=result.data)
Methods:
Name | Description |
---|---|
execute | Execute the agent with input data. |
execute_sync | Synchronous execution wrapper. |
Source code in DeepResearch/agents.py
Attributes¶
Functions¶
execute async
¶
execute(
input_data: Any, deps: AgentDependencies | None = None
) -> AgentResult
Execute the agent with input data.
This is the main entry point for executing an agent. It handles initialization, execution, and result processing while tracking execution metrics and errors.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
input_data | Any | The input data to process. Can be a string, dict, or any structured data appropriate for the agent type. | required |
deps | AgentDependencies | None | Optional agent dependencies. If not provided, uses the agent's default dependencies. | None |
Returns:
Name | Type | Description |
---|---|---|
AgentResult | AgentResult | The execution result containing success status, processed data, execution metrics, and any errors. |
Raises:
Type | Description |
---|---|
RuntimeError | If the agent is not properly initialized. |
Examples:
Basic execution:
agent = SearchAgent()
deps = AgentDependencies.from_config(config)
result = await agent.execute("machine learning", deps)
if result.success:
print(f"Results: {result.data}")
else:
print(f"Error: {result.error}")
With custom dependencies:
custom_deps = AgentDependencies(
    config={"temperature": 0.8}
)
result = await agent.execute("research query", custom_deps)
Source code in DeepResearch/agents.py
execute_sync ¶
execute_sync(
input_data: Any, deps: AgentDependencies | None = None
) -> AgentResult
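For synchronous callers, execute_sync wraps execute with the same arguments; a minimal sketch reusing the SearchAgent from the examples above (the query string is illustrative):
agent = SearchAgent()
result = agent.execute_sync("protein folding review")
print(result.success, result.execution_time)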
BioinformaticsAgent ¶
Bases: BaseAgent
Agent for bioinformatics data fusion and reasoning.
Methods:
Name | Description |
---|---|
execute | Execute the agent with input data. |
execute_sync | Synchronous execution wrapper. |
fuse_data | Fuse bioinformatics data from multiple sources. |
perform_reasoning | Perform reasoning on fused bioinformatics data. |
Attributes:
Name | Type | Description |
---|---|---|
agent_type | | |
dependencies | | |
history | | |
model_name | | |
status | |
Source code in DeepResearch/agents.py
Attributes¶
Functions¶
execute async
¶
execute(
input_data: Any, deps: AgentDependencies | None = None
) -> AgentResult
Execute the agent with input data. See BaseAgent.execute above for full parameter, return value, and example documentation.
Source code in DeepResearch/agents.py
execute_sync ¶
execute_sync(
input_data: Any, deps: AgentDependencies | None = None
) -> AgentResult
fuse_data async
¶
Fuse bioinformatics data from multiple sources.
Source code in DeepResearch/agents.py
perform_reasoning async
¶
Perform reasoning on fused bioinformatics data.
Source code in DeepResearch/agents.py
DeepAgentFilesystemAgent ¶
Bases: BaseAgent
DeepAgent filesystem agent integrated with DeepResearch.
Methods:
Name | Description |
---|---|
execute | Execute the agent with input data. |
execute_sync | Synchronous execution wrapper. |
manage_files | Manage filesystem operations. |
Attributes:
Name | Type | Description |
---|---|---|
agent_type | | |
dependencies | | |
history | | |
model_name | | |
status | |
Source code in DeepResearch/agents.py
Attributes¶
Functions¶
execute async
¶
execute(
input_data: Any, deps: AgentDependencies | None = None
) -> AgentResult
Execute the agent with input data. See BaseAgent.execute above for full parameter, return value, and example documentation.
Source code in DeepResearch/agents.py
execute_sync ¶
execute_sync(
input_data: Any, deps: AgentDependencies | None = None
) -> AgentResult
manage_files async
¶
Manage filesystem operations.
Source code in DeepResearch/agents.py
DeepAgentGeneralAgent ¶
Bases: BaseAgent
DeepAgent general-purpose agent integrated with DeepResearch.
Methods:
Name | Description |
---|---|
execute | Execute the agent with input data. |
execute_sync | Synchronous execution wrapper. |
handle_general_task | Handle general-purpose tasks. |
Attributes:
Name | Type | Description |
---|---|---|
agent_type | | |
dependencies | | |
history | | |
model_name | | |
status | |
Source code in DeepResearch/agents.py
Attributes¶
Functions¶
execute async
¶
execute(
input_data: Any, deps: AgentDependencies | None = None
) -> AgentResult
Execute the agent with input data. See BaseAgent.execute above for full parameter, return value, and example documentation.
Source code in DeepResearch/agents.py
execute_sync ¶
execute_sync(
input_data: Any, deps: AgentDependencies | None = None
) -> AgentResult
handle_general_task async
¶
handle_general_task(
task_description: str,
context: DeepAgentState | None = None,
) -> AgentExecutionResult
Handle general-purpose tasks.
Source code in DeepResearch/agents.py
DeepAgentOrchestrationAgent ¶
Bases: BaseAgent
DeepAgent orchestration agent integrated with DeepResearch.
Methods:
Name | Description |
---|---|
execute | Execute the agent with input data. |
execute_parallel_tasks | Execute multiple tasks in parallel. |
execute_sync | Synchronous execution wrapper. |
orchestrate_tasks | Orchestrate multiple tasks across agents. |
Attributes:
Name | Type | Description |
---|---|---|
agent_type | | |
dependencies | | |
history | | |
model_name | | |
status | |
Source code in DeepResearch/agents.py
Attributes¶
Functions¶
execute async
¶
execute(
input_data: Any, deps: AgentDependencies | None = None
) -> AgentResult
Execute the agent with input data. See BaseAgent.execute above for full parameter, return value, and example documentation.
Source code in DeepResearch/agents.py
execute_parallel_tasks async
¶
execute_parallel_tasks(
tasks: list[dict[str, Any]],
context: DeepAgentState | None = None,
) -> list[AgentExecutionResult]
Execute multiple tasks in parallel.
Source code in DeepResearch/agents.py
execute_sync ¶
execute_sync(
input_data: Any, deps: AgentDependencies | None = None
) -> AgentResult
orchestrate_tasks async
¶
orchestrate_tasks(
task_description: str,
context: DeepAgentState | None = None,
) -> AgentExecutionResult
Orchestrate multiple tasks across agents.
Source code in DeepResearch/agents.py
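A sketch of both orchestration entry points, following the signatures above; the agent is constructed with defaults as in the earlier examples, and the task description and task dict keys are placeholders since the expected dict layout is not documented here:
orchestrator_agent = DeepAgentOrchestrationAgent()
single = await orchestrator_agent.orchestrate_tasks(
    "Summarize recent CRISPR off-target studies",  # illustrative task
    context=None,
)
parallel = await orchestrator_agent.execute_parallel_tasks(
    tasks=[{"description": "search"}, {"description": "summarize"}],  # placeholder keys
    context=None,
)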
DeepAgentPlanningAgent ¶
Bases: BaseAgent
DeepAgent planning agent integrated with DeepResearch.
Methods:
Name | Description |
---|---|
create_plan | Create a detailed execution plan. |
execute | Execute the agent with input data. |
execute_sync | Synchronous execution wrapper. |
Attributes:
Name | Type | Description |
---|---|---|
agent_type | | |
dependencies | | |
history | | |
model_name | | |
status | |
Source code in DeepResearch/agents.py
Attributes¶
Functions¶
create_plan async
¶
create_plan(
task_description: str,
context: DeepAgentState | None = None,
) -> AgentExecutionResult
Create a detailed execution plan.
Source code in DeepResearch/agents.py
execute async
¶
execute(
input_data: Any, deps: AgentDependencies | None = None
) -> AgentResult
Execute the agent with input data. See BaseAgent.execute above for full parameter, return value, and example documentation.
Source code in DeepResearch/agents.py
execute_sync ¶
execute_sync(
input_data: Any, deps: AgentDependencies | None = None
) -> AgentResult
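create_plan follows the signature above; a minimal sketch with an illustrative task description:
planner = DeepAgentPlanningAgent()
plan_result = await planner.create_plan(
    "Benchmark three protein structure predictors",
    context=None,
)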
DeepAgentResearchAgent ¶
Bases: BaseAgent
DeepAgent research agent integrated with DeepResearch.
Methods:
Name | Description |
---|---|
conduct_research | Conduct comprehensive research. |
execute | Execute the agent with input data. |
execute_sync | Synchronous execution wrapper. |
Attributes:
Name | Type | Description |
---|---|---|
agent_type | | |
dependencies | | |
history | | |
model_name | | |
status | |
Source code in DeepResearch/agents.py
Attributes¶
Functions¶
conduct_research async
¶
conduct_research(
research_query: str,
context: DeepAgentState | None = None,
) -> AgentExecutionResult
Conduct comprehensive research.
Source code in DeepResearch/agents.py
execute async
¶
execute(
input_data: Any, deps: AgentDependencies | None = None
) -> AgentResult
Execute the agent with input data. See BaseAgent.execute above for full parameter, return value, and example documentation.
Source code in DeepResearch/agents.py
execute_sync ¶
execute_sync(
input_data: Any, deps: AgentDependencies | None = None
) -> AgentResult
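conduct_research takes a research query and an optional DeepAgentState, per the signature above; a minimal sketch with an illustrative query:
researcher = DeepAgentResearchAgent()
research_result = await researcher.conduct_research(
    "How well do current batch-correction methods scale to atlas-sized single-cell data?",
    context=None,
)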
DeepSearchAgent ¶
Bases: BaseAgent
Agent for deep search operations with iterative refinement.
Methods:
Name | Description |
---|---|
deep_search | Perform deep search with iterative refinement. |
execute | Execute the agent with input data. |
execute_sync | Synchronous execution wrapper. |
Attributes:
Name | Type | Description |
---|---|---|
agent_type | | |
dependencies | | |
history | | |
model_name | | |
status | |
Source code in DeepResearch/agents.py
Attributes¶
Functions¶
deep_search async
¶
Perform deep search with iterative refinement.
Source code in DeepResearch/agents.py
execute async
¶
execute(
input_data: Any, deps: AgentDependencies | None = None
) -> AgentResult
Execute the agent with input data. See BaseAgent.execute above for full parameter, return value, and example documentation.
Source code in DeepResearch/agents.py
execute_sync ¶
execute_sync(
input_data: Any, deps: AgentDependencies | None = None
) -> AgentResult
EvaluatorAgent ¶
Bases: BaseAgent
Agent for evaluating research results and quality.
Methods:
Name | Description |
---|---|
evaluate | Evaluate research results. |
execute | Execute the agent with input data. |
execute_sync | Synchronous execution wrapper. |
Attributes:
Name | Type | Description |
---|---|---|
agent_type | | |
dependencies | | |
history | | |
model_name | | |
status | |
Source code in DeepResearch/agents.py
Attributes¶
Functions¶
evaluate async
¶
Evaluate research results.
Source code in DeepResearch/agents.py
execute async
¶
execute(
input_data: Any, deps: AgentDependencies | None = None
) -> AgentResult
Execute the agent with input data. See BaseAgent.execute above for full parameter, return value, and example documentation.
Source code in DeepResearch/agents.py
execute_sync ¶
execute_sync(
input_data: Any, deps: AgentDependencies | None = None
) -> AgentResult
ExecutionHistory dataclass
¶
History of agent executions.
Methods:
Name | Description |
---|---|
record | Record an execution result. |
Attributes:
Name | Type | Description |
---|---|---|
items | list[dict[str, Any]] | |
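record's signature is not shown here, so the sketch below only assumes it accepts the result to store and that items holds the recorded entries:
history = ExecutionHistory()
result = AgentResult(success=True, data={"step": "search"})
history.record(result)  # assumption: record takes the execution result
print(len(history.items))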
ExecutorAgent ¶
Bases: BaseAgent
Agent for executing research workflows.
Methods:
Name | Description |
---|---|
execute | Execute the agent with input data. |
execute_plan | Execute a research plan. |
execute_sync | Synchronous execution wrapper. |
run_plan | Legacy synchronous run_plan method. |
Attributes:
Name | Type | Description |
---|---|---|
agent_type | | |
dependencies | | |
history | | |
model_name | | |
retries | | |
status | |
Source code in DeepResearch/agents.py
Attributes¶
Functions¶
execute async
¶
execute(
input_data: Any, deps: AgentDependencies | None = None
) -> AgentResult
Execute the agent with input data. See BaseAgent.execute above for full parameter, return value, and example documentation.
Source code in DeepResearch/agents.py
execute_plan async
¶
execute_plan(
plan: list[dict[str, Any]], history: ExecutionHistory
) -> dict[str, Any]
Execute a research plan.
Source code in DeepResearch/agents.py
execute_sync ¶
execute_sync(
input_data: Any, deps: AgentDependencies | None = None
) -> AgentResult
run_plan ¶
run_plan(
plan: list[dict[str, Any]], history: ExecutionHistory
) -> dict[str, Any]
Legacy synchronous run_plan method.
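A sketch of running a plan through ExecutorAgent using the documented execute_plan signature; the step dict keys are placeholders, since the expected plan format is not specified here:
executor = ExecutorAgent()
history = ExecutionHistory()
plan = [{"tool": "web_search", "params": {"query": "CRISPR"}}]  # placeholder step format
results = await executor.execute_plan(plan, history)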
MultiAgentOrchestrator ¶
Orchestrator for coordinating multiple agents in complex workflows.
Methods:
Name | Description |
---|---|
execute_workflow | Execute a complete research workflow. |
Attributes:
Name | Type | Description |
---|---|---|
agents | dict[AgentType, BaseAgent] | |
config | | |
history | |
Source code in DeepResearch/agents.py
Attributes¶
Functions¶
execute_workflow async
¶
Execute a complete research workflow.
Source code in DeepResearch/agents.py
ParserAgent ¶
Bases: BaseAgent
Agent for parsing and understanding research questions.
Methods:
Name | Description |
---|---|
execute | Execute the agent with input data. |
execute_sync | Synchronous execution wrapper. |
parse | Legacy synchronous parse method. |
parse_question | Parse a research question. |
Attributes:
Name | Type | Description |
---|---|---|
agent_type | | |
dependencies | | |
history | | |
model_name | | |
status | |
Source code in DeepResearch/agents.py
Attributes¶
Functions¶
execute async
¶
execute(
input_data: Any, deps: AgentDependencies | None = None
) -> AgentResult
Execute the agent with input data. See BaseAgent.execute above for full parameter, return value, and example documentation.
Source code in DeepResearch/agents.py
execute_sync ¶
execute_sync(
input_data: Any, deps: AgentDependencies | None = None
) -> AgentResult
parse ¶
Legacy synchronous parse method.
parse_question async
¶
Parse a research question.
Source code in DeepResearch/agents.py
PlannerAgent ¶
Bases: BaseAgent
Agent for planning research workflows.
Methods:
Name | Description |
---|---|
create_plan | Create an execution plan from parsed question. |
execute | Execute the agent with input data. |
execute_sync | Synchronous execution wrapper. |
plan | Legacy synchronous plan method. |
Attributes:
Name | Type | Description |
---|---|---|
agent_type | | |
dependencies | | |
history | | |
model_name | | |
status | |
Source code in DeepResearch/agents.py
Attributes¶
Functions¶
create_plan async
¶
Create an execution plan from parsed question.
Source code in DeepResearch/agents.py
execute async
¶
execute(
input_data: Any, deps: AgentDependencies | None = None
) -> AgentResult
Execute the agent with input data. See BaseAgent.execute above for full parameter, return value, and example documentation.
Source code in DeepResearch/agents.py
execute_sync ¶
execute_sync(
input_data: Any, deps: AgentDependencies | None = None
) -> AgentResult
plan ¶
Legacy synchronous plan method.
Source code in DeepResearch/agents.py
RAGAgent ¶
Bases: BaseAgent
Agent for RAG (Retrieval-Augmented Generation) operations.
Methods:
Name | Description |
---|---|
execute | Execute the agent with input data. |
execute_sync | Synchronous execution wrapper. |
query | Perform RAG query. |
Attributes:
Name | Type | Description |
---|---|---|
agent_type | | |
dependencies | | |
history | | |
model_name | | |
status | |
Source code in DeepResearch/agents.py
Attributes¶
Functions¶
execute async
¶
execute(
input_data: Any, deps: AgentDependencies | None = None
) -> AgentResult
Execute the agent with input data. See BaseAgent.execute above for full parameter, return value, and example documentation.
Source code in DeepResearch/agents.py
execute_sync ¶
execute_sync(
input_data: Any, deps: AgentDependencies | None = None
) -> AgentResult
query async
¶
Perform RAG query.
Source code in DeepResearch/agents.py
SearchAgent ¶
Bases: BaseAgent
Agent for web search operations.
Methods:
Name | Description |
---|---|
execute | Execute the agent with input data. |
execute_sync | Synchronous execution wrapper. |
search | Perform web search. |
Attributes:
Name | Type | Description |
---|---|---|
agent_type | | |
dependencies | | |
history | | |
model_name | | |
status | |
Source code in DeepResearch/agents.py
Attributes¶
Functions¶
execute async
¶
execute(
input_data: Any, deps: AgentDependencies | None = None
) -> AgentResult
Execute the agent with input data. See BaseAgent.execute above for full parameter, return value, and example documentation.
Source code in DeepResearch/agents.py
execute_sync ¶
execute_sync(
input_data: Any, deps: AgentDependencies | None = None
) -> AgentResult
search async
¶
Perform web search.
Source code in DeepResearch/agents.py
Functions¶
create_agent ¶
Create an agent of the specified type.
Source code in DeepResearch/agents.py
create_orchestrator ¶
create_orchestrator(
config: dict[str, Any],
) -> MultiAgentOrchestrator
Create a multi-agent orchestrator.
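The factory helpers can be used instead of instantiating agent classes directly; a minimal sketch, assuming create_agent accepts an AgentType member (its full signature is not shown here) and using illustrative config keys:
agent = create_agent(AgentType.SEARCH)  # assumption: AgentType defines SEARCH
orchestrator = create_orchestrator({"max_iterations": 3})  # illustrative config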
Main Application¶
Classes:
Name | Description |
---|---|
Analyze | |
BioinformaticsFuse | |
BioinformaticsParse | |
DSAnalyze | |
DSExecute | |
DSPlan | |
DSSynthesize | |
EnhancedREACTWorkflow | |
EvaluateChallenge | |
Plan | Planning node for research workflow. |
PrepareChallenge | |
PrimaryREACTWorkflow | |
PrimeEvaluate | |
PrimeExecute | |
PrimeParse | |
PrimePlan | |
RAGExecute | |
RAGParse | |
ResearchState | State object for the research workflow. |
RunChallenge | |
Search | |
Synthesize | |
Functions:
Name | Description |
---|---|
main | |
run_graph | |
Attributes:
Name | Type | Description |
---|---|---|
research_graph | |
Attributes¶
research_graph module-attribute
¶
research_graph = Graph(
nodes=(
Plan,
Search,
Analyze,
Synthesize,
PrimaryREACTWorkflow,
EnhancedREACTWorkflow,
),
state_type=ResearchState,
)
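Because research_graph is a pydantic_graph Graph over ResearchState, it can be run starting from the Plan node; a minimal sketch assuming the standard Graph.run entry point and an illustrative question:
state = ResearchState(question="How do transformers capture long-range dependencies?")
graph_result = await research_graph.run(Plan(), state=state)
print(graph_result.output)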
Classes¶
Analyze dataclass
¶
Bases: BaseNode[ResearchState]
Methods:
Name | Description |
---|---|
run | |
Functions¶
run async
¶
run(ctx: GraphRunContext[ResearchState]) -> Synthesize
BioinformaticsFuse dataclass
¶
Bases: BaseNode[ResearchState]
Methods:
Name | Description |
---|---|
run | |
Functions¶
run async
¶
run(
ctx: GraphRunContext[ResearchState],
) -> Annotated[End[str], Edge(label="done")]
Source code in DeepResearch/app.py
BioinformaticsParse dataclass
¶
Bases: BaseNode[ResearchState]
Methods:
Name | Description |
---|---|
run | |
Functions¶
run async
¶
run(
ctx: GraphRunContext[ResearchState],
) -> BioinformaticsFuse
Source code in DeepResearch/app.py
DSAnalyze dataclass
¶
Bases: BaseNode[ResearchState]
Methods:
Name | Description |
---|---|
run | |
Functions¶
run async
¶
run(ctx: GraphRunContext[ResearchState]) -> DSSynthesize
Source code in DeepResearch/app.py
DSExecute dataclass
¶
Bases: BaseNode[ResearchState]
Methods:
Name | Description |
---|---|
run | |
Functions¶
run async
¶
run(ctx: GraphRunContext[ResearchState]) -> DSAnalyze
Source code in DeepResearch/app.py
DSPlan dataclass
¶
Bases: BaseNode[ResearchState]
Methods:
Name | Description |
---|---|
run | |
Functions¶
run async
¶
run(ctx: GraphRunContext[ResearchState]) -> DSExecute
Source code in DeepResearch/app.py
DSSynthesize dataclass
¶
Bases: BaseNode[ResearchState]
Methods:
Name | Description |
---|---|
run | |
Functions¶
run async
¶
run(
ctx: GraphRunContext[ResearchState],
) -> Annotated[End[str], Edge(label="done")]
Source code in DeepResearch/app.py
EnhancedREACTWorkflow dataclass
¶
Bases: BaseNode[ResearchState]
Methods:
Name | Description |
---|---|
run | Execute the enhanced REACT workflow with nested loops and subgraphs. |
Functions¶
run async
¶
run(
ctx: GraphRunContext[ResearchState],
) -> Annotated[End[str], Edge(label="done")]
Execute the enhanced REACT workflow with nested loops and subgraphs.
Source code in DeepResearch/app.py
EvaluateChallenge dataclass
¶
Bases: BaseNode[ResearchState]
Methods:
Name | Description |
---|---|
run | |
Functions¶
run async
¶
run(ctx: GraphRunContext[ResearchState]) -> Synthesize
Plan dataclass
¶
Bases: BaseNode[ResearchState]
Planning node for research workflow.
This node analyzes the research question and determines the appropriate workflow path based on configuration flags and question characteristics. Routes to different execution paths including search, REACT workflows, or challenge mode.
Methods:
Name | Description |
---|---|
run | |
Functions¶
run async
¶
run(
ctx: GraphRunContext[ResearchState],
) -> (
Search
| PrimaryREACTWorkflow
| EnhancedREACTWorkflow
| PrepareChallenge
| PrimeParse
| BioinformaticsParse
| RAGParse
| DSPlan
)
Source code in DeepResearch/app.py
PrepareChallenge dataclass
¶
Bases: BaseNode[ResearchState]
Methods:
Name | Description |
---|---|
run | |
Functions¶
run async
¶
run(ctx: GraphRunContext[ResearchState]) -> RunChallenge
Source code in DeepResearch/app.py
PrimaryREACTWorkflow dataclass
¶
Bases: BaseNode[ResearchState]
Methods:
Name | Description |
---|---|
run | Execute the primary REACT workflow with orchestration. |
Functions¶
run async
¶
run(
ctx: GraphRunContext[ResearchState],
) -> Annotated[End[str], Edge(label="done")]
Execute the primary REACT workflow with orchestration.
Source code in DeepResearch/app.py
PrimeEvaluate dataclass
¶
Bases: BaseNode[ResearchState]
Methods:
Name | Description |
---|---|
run | |
Functions¶
run async
¶
run(
ctx: GraphRunContext[ResearchState],
) -> Annotated[End[str], Edge(label="done")]
Source code in DeepResearch/app.py
PrimeExecute dataclass
¶
Bases: BaseNode[ResearchState]
Methods:
Name | Description |
---|---|
run | |
Functions¶
run async
¶
run(ctx: GraphRunContext[ResearchState]) -> PrimeEvaluate
Source code in DeepResearch/app.py
PrimeParse dataclass
¶
Bases: BaseNode[ResearchState]
Methods:
Name | Description |
---|---|
run | |
Functions¶
run async
¶
run(ctx: GraphRunContext[ResearchState]) -> PrimePlan
Source code in DeepResearch/app.py
PrimePlan dataclass
¶
Bases: BaseNode[ResearchState]
Methods:
Name | Description |
---|---|
run | |
Functions¶
run async
¶
run(ctx: GraphRunContext[ResearchState]) -> PrimeExecute
Source code in DeepResearch/app.py
RAGExecute dataclass
¶
Bases: BaseNode[ResearchState]
Methods:
Name | Description |
---|---|
run | |
Functions¶
run async
¶
run(
ctx: GraphRunContext[ResearchState],
) -> Annotated[End[str], Edge(label="done")]
Source code in DeepResearch/app.py
RAGParse dataclass
¶
Bases: BaseNode[ResearchState]
Methods:
Name | Description |
---|---|
run | |
Functions¶
run async
¶
run(ctx: GraphRunContext[ResearchState]) -> RAGExecute
Source code in DeepResearch/app.py
ResearchState dataclass
¶
ResearchState(
question: str,
plan: list[str] | None = list(),
full_plan: list[dict[str, Any]] | None = list(),
notes: list[str] = list(),
answers: list[str] = list(),
structured_problem: StructuredProblem | None = None,
workflow_dag: WorkflowDAG | None = None,
execution_results: dict[str, Any] = dict(),
config: DictConfig | None = None,
orchestration_config: WorkflowOrchestrationConfig
| None = None,
orchestration_state: OrchestrationState | None = None,
spawned_workflows: list[str] = list(),
multi_agent_results: dict[str, Any] = dict(),
hypothesis_datasets: list[HypothesisDataset] = list(),
testing_environments: list[
HypothesisTestingEnvironment
] = list(),
reasoning_results: list[ReasoningResult] = list(),
judge_evaluations: dict[str, Any] = dict(),
app_configuration: AppConfiguration | None = None,
agent_orchestrator: AgentOrchestrator | None = None,
nested_loops: dict[str, Any] = dict(),
active_subgraphs: dict[str, Any] = dict(),
break_conditions_met: list[str] = list(),
loss_function_values: dict[str, float] = dict(),
current_mode: AppMode | None = None,
)
State object for the research workflow.
This dataclass maintains the state of a research workflow execution, containing the original question, planning results, intermediate notes, and final answers.
Attributes:
Name | Type | Description |
---|---|---|
question | str | The original research question being answered. |
plan | list[str] | None | High-level plan steps (optional). |
full_plan | list[dict[str, Any]] | None | Detailed execution plan with parameters. |
notes | list[str] | Intermediate notes and observations. |
answers | list[str] | Final answers and results. |
structured_problem | StructuredProblem | None | PRIME-specific structured problem representation. |
workflow_dag | WorkflowDAG | None | PRIME workflow DAG for execution. |
execution_results | dict[str, Any] | Results from tool execution. |
config | DictConfig | None | Global configuration object. |
Attributes¶
active_subgraphs class-attribute instance-attribute ¶
agent_orchestrator class-attribute instance-attribute ¶
app_configuration class-attribute instance-attribute ¶
break_conditions_met class-attribute instance-attribute ¶
execution_results class-attribute instance-attribute ¶
full_plan class-attribute instance-attribute ¶
hypothesis_datasets class-attribute instance-attribute ¶
judge_evaluations class-attribute instance-attribute ¶
loss_function_values class-attribute instance-attribute ¶
multi_agent_results class-attribute instance-attribute ¶
nested_loops class-attribute instance-attribute ¶
orchestration_config class-attribute instance-attribute ¶
orchestration_state class-attribute instance-attribute ¶
reasoning_results class-attribute instance-attribute ¶
spawned_workflows class-attribute instance-attribute ¶
structured_problem class-attribute instance-attribute ¶
testing_environments class-attribute instance-attribute ¶
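Only question is required by the constructor above; the remaining fields default to empty collections or None. A minimal sketch:
state = ResearchState(
    question="Which kinases are implicated in tau phosphorylation?",
)
state.notes.append("initial literature scan pending")  # notes defaults to an empty list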
RunChallenge dataclass
¶
Bases: BaseNode[ResearchState]
Methods:
Name | Description |
---|---|
run | |
Functions¶
run async
¶
run(
ctx: GraphRunContext[ResearchState],
) -> EvaluateChallenge
Search dataclass
¶
Bases: BaseNode[ResearchState]
Methods:
Name | Description |
---|---|
run | |
Functions¶
run async
¶
run(ctx: GraphRunContext[ResearchState]) -> Analyze
Source code in DeepResearch/app.py
Synthesize dataclass
¶
Bases: BaseNode[ResearchState]
Methods:
Name | Description |
---|---|
run | |
Functions¶
run async
¶
run(
ctx: GraphRunContext[ResearchState],
) -> Annotated[End[str], Edge(label="done")]
Source code in DeepResearch/app.py
Functions¶
main ¶
run_graph ¶
Source code in DeepResearch/app.py
Models¶
Custom Pydantic AI model implementations for DeepCritical.
This module provides Pydantic AI model wrappers for:
- vLLM (production-grade local LLM inference)
- llama.cpp (lightweight local inference)
- OpenAI-compatible servers (generic wrapper)
Usage
from pydantic_ai import Agent
from DeepResearch.src.models import VLLMModel, LlamaCppModel
# vLLM
vllm_model = VLLMModel.from_vllm(
base_url="http://localhost:8000/v1",
model_name="meta-llama/Llama-3-8B"
)
agent = Agent(vllm_model)
# llama.cpp
llamacpp_model = LlamaCppModel.from_llamacpp(
base_url="http://localhost:8080/v1",
model_name="llama-3-8b.gguf"
)
agent = Agent(llamacpp_model)
Modules:
Name | Description |
---|---|
openai_compatible_model | Pydantic AI model wrapper for OpenAI-compatible servers. |
Classes:
Name | Description |
---|---|
OpenAICompatibleModel | Pydantic AI model for OpenAI-compatible servers. |
Attributes:
Name | Type | Description |
---|---|---|
LlamaCppModel | | Alias for OpenAICompatibleModel when using llama.cpp. |
VLLMModel | | Alias for OpenAICompatibleModel when using vLLM. |
Attributes¶
LlamaCppModel module-attribute
¶
LlamaCppModel = OpenAICompatibleModel
Alias for OpenAICompatibleModel when using llama.cpp.
VLLMModel module-attribute
¶
VLLMModel = OpenAICompatibleModel
Alias for OpenAICompatibleModel when using vLLM.
Classes¶
OpenAICompatibleModel ¶
Bases: OpenAIChatModel
Pydantic AI model for OpenAI-compatible servers.
This is a thin wrapper around Pydantic AI's OpenAIChatModel that makes it easy to connect to local or custom OpenAI-compatible servers.
Supports:
- vLLM with OpenAI-compatible API
- llama.cpp server in OpenAI mode
- Text Generation Inference (TGI)
- Any custom OpenAI-compatible endpoint
Methods:
Name | Description |
---|---|
from_config | Create a model from Hydra configuration. |
from_custom | Create a model for any custom OpenAI-compatible server. |
from_llamacpp | Create a model for a llama.cpp server. |
from_tgi | Create a model for a Text Generation Inference (TGI) server. |
from_vllm | Create a model for a vLLM server. |
Functions¶
from_config classmethod
¶
from_config(
config: DictConfig | dict | LLMModelConfig,
model_name: str | None = None,
base_url: str | None = None,
api_key: str | None = None,
**kwargs: Any,
) -> OpenAICompatibleModel
Create a model from Hydra configuration.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
config | DictConfig | dict | LLMModelConfig | Hydra configuration (DictConfig), dict, or LLMModelConfig with model settings. | required |
model_name | str | None | Override model name from config. | None |
base_url | str | None | Override base URL from config. | None |
api_key | str | None | Override API key from config. | None |
**kwargs | Any | Additional arguments passed to the model. | {} |
Returns:
Type | Description |
---|---|
OpenAICompatibleModel | Configured OpenAICompatibleModel instance. |
Source code in DeepResearch/src/models/openai_compatible_model.py
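A sketch of from_config with a plain dict; the config keys shown are assumptions about the expected layout rather than a documented schema, and the keyword overrides follow the signature above:
from pydantic_ai import Agent
from DeepResearch.src.models import OpenAICompatibleModel

model = OpenAICompatibleModel.from_config(
    {"model_name": "qwen2.5-7b-instruct",      # assumed config key
     "base_url": "http://localhost:8000/v1"},  # assumed config key
    api_key="not-needed-for-local-server",
)
agent = Agent(model)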
from_custom classmethod
¶
from_custom(
config: DictConfig | dict | None = None,
model_name: str | None = None,
base_url: str | None = None,
api_key: str | None = None,
**kwargs: Any,
) -> OpenAICompatibleModel
Create a model for any custom OpenAI-compatible server.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
config | DictConfig | dict | None | Optional Hydra configuration with custom server settings. | None |
model_name | str | None | Model name (overrides config if provided). | None |
base_url | str | None | Server URL (overrides config if provided). | None |
api_key | str | None | API key (overrides config if provided). | None |
**kwargs | Any | Additional arguments passed to the model. | {} |
Returns:
Type | Description |
---|---|
OpenAICompatibleModel | Configured OpenAICompatibleModel instance. |
Source code in DeepResearch/src/models/openai_compatible_model.py
from_llamacpp classmethod
¶
from_llamacpp(
config: DictConfig | dict | None = None,
model_name: str | None = None,
base_url: str | None = None,
api_key: str | None = None,
**kwargs: Any,
) -> OpenAICompatibleModel
Create a model for a llama.cpp server.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
config | DictConfig | dict | None | Optional Hydra configuration with llama.cpp settings. | None |
model_name | str | None | Model name (overrides config if provided). | None |
base_url | str | None | llama.cpp server URL (overrides config if provided). | None |
api_key | str | None | API key (overrides config if provided). | None |
**kwargs | Any | Additional arguments passed to the model. | {} |
Returns:
Type | Description |
---|---|
OpenAICompatibleModel | Configured OpenAICompatibleModel instance. |
Source code in DeepResearch/src/models/openai_compatible_model.py
from_tgi classmethod
¶
from_tgi(
config: DictConfig | dict | None = None,
model_name: str | None = None,
base_url: str | None = None,
api_key: str | None = None,
**kwargs: Any,
) -> OpenAICompatibleModel
Create a model for a Text Generation Inference (TGI) server.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
config | DictConfig | dict | None | Optional Hydra configuration with TGI settings. | None |
model_name | str | None | Model name (overrides config if provided). | None |
base_url | str | None | TGI server URL (overrides config if provided). | None |
api_key | str | None | API key (overrides config if provided). | None |
**kwargs | Any | Additional arguments passed to the model. | {} |
Returns:
Type | Description |
---|---|
OpenAICompatibleModel | Configured OpenAICompatibleModel instance. |
Source code in DeepResearch/src/models/openai_compatible_model.py
from_vllm classmethod
¶
from_vllm(
config: DictConfig | dict | None = None,
model_name: str | None = None,
base_url: str | None = None,
api_key: str | None = None,
**kwargs: Any,
) -> OpenAICompatibleModel
Create a model for a vLLM server.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
config | DictConfig | dict | None | Optional Hydra configuration with vLLM settings. | None |
model_name | str | None | Model name (overrides config if provided). | None |
base_url | str | None | vLLM server URL (overrides config if provided). | None |
api_key | str | None | API key (overrides config if provided). | None |
**kwargs | Any | Additional arguments passed to the model. | {} |
Returns:
Type | Description |
---|---|
OpenAICompatibleModel | Configured OpenAICompatibleModel instance. |