Feature/eas llm setup #9268
base: main
Conversation
Validation for Breaking Change Starting...
Thanks for your contribution!

Hi @alex3267006,

Thank you for your contribution! We will review the pull request and get back to you soon.

The git hooks are available for the azure-cli and azure-cli-extensions repos. They can help you run required checks before creating the PR. Please sync the latest code with the latest dev branch (for azure-cli) or main branch (for azure-cli-extensions).

pip install azdev --upgrade
azdev setup -c <your azure-cli repo path> -r <your azure-cli-extensions repo path>
@alex3267006 please read the following Contributor License Agreement(CLA). If you agree with the CLA, please reply with the following information.
Contributor License Agreement

This Contribution License Agreement (“Agreement”) is agreed to by the party signing below (“You”),
Pull Request Overview
This PR introduces major improvements to the AKS Agent extension, focusing on simplifying LLM model setup and management. The key change is the addition of the az aks agent-init command for interactive LLM configuration, eliminating the need for manual environment variable setup.
- Interactive LLM model configuration through the az aks agent-init command
- Support for multiple providers (Azure, OpenAI, Anthropic, Gemini, OpenAI-compatible) with validation
- Configuration management system with local storage and model selection capabilities
Reviewed Changes
Copilot reviewed 16 out of 16 changed files in this pull request and generated 5 comments.
File | Description
---|---
commands.py | Added new agent-init command
custom.py | Implemented aks_agent_init function and configuration management logic
agent/agent.py | Added configuration conflict prevention
agent/llm_config_manager.py | New configuration manager for LLM settings
agent/llm_providers/*.py | Provider classes with validation and connection testing
tests/latest/*.py | Test coverage for new functionality
_help.py | Updated help text and examples
README.rst | Updated documentation to reflect new workflow
class OpenAICompatiableProvider(LLMProvider):
    name = "openai_compatiable"

Corrected spelling of 'compatiable' to 'compatible'.

Suggested change:

class OpenAICompatibleProvider(LLMProvider):
    name = "openai_compatible"
from .openai_provider import OpenAIProvider
from .anthropic_provider import AnthropicProvider
from .gemini_provider import GeminiProvider
from .openai_compatiable_provider import OpenAICompatiableProvider

Corrected spelling of 'compatiable' to 'compatible'.
self.assertIs(PROVIDER_REGISTRY['openai'], OpenAIProvider)
self.assertIs(PROVIDER_REGISTRY['anthropic'], AnthropicProvider)
self.assertIs(PROVIDER_REGISTRY['gemini'], GeminiProvider)
self.assertIs(PROVIDER_REGISTRY['openai_compatiable'], OpenAICompatiableProvider)

Corrected spelling of 'compatiable' to 'compatible'.
# REST API reference: https://learn.microsoft.com/en-us/azure/ai-foundry/openai/api-version-lifecycle?tabs=rest
api_base = api_base.rstrip('/') + '/'
url = urljoin(api_base, "openai/responses")

The endpoint 'openai/responses' appears incorrect. The Azure OpenAI chat completions endpoint should be 'openai/deployments/{deployment-id}/chat/completions'. Consider using the correct endpoint pattern.

Suggested change:

url = urljoin(api_base, f"openai/deployments/{model_name}/chat/completions")
payload = {"model": model_name,
           "input": "ping", "max_output_tokens": 16}

The payload structure is incorrect for Azure OpenAI chat completions. It should include a 'messages' array with role/content entries, not an 'input' field. Consider the correct payload format: {'messages': [{'role': 'user', 'content': 'ping'}], 'max_tokens': 16}.

Suggested change:

payload = {
    "model": model_name,
    "messages": [{"role": "user", "content": "ping"}],
    "max_tokens": 16,
}
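Putting the two review suggestions together, the connection-test request could be assembled as below. This is a sketch of the reviewer's suggested chat-completions shape, not code from the PR: the helper name `build_chat_completions_request` and the `api-version` value are illustrative assumptions (Azure OpenAI requires an api-version query parameter on data-plane calls).

```python
from urllib.parse import urljoin


def build_chat_completions_request(api_base, model_name, api_version="2024-06-01"):
    # Hypothetical helper: build the URL and payload for a minimal
    # "ping" request against an Azure OpenAI chat-completions deployment.
    api_base = api_base.rstrip('/') + '/'
    url = urljoin(api_base, f"openai/deployments/{model_name}/chat/completions")
    url = f"{url}?api-version={api_version}"
    payload = {
        "messages": [{"role": "user", "content": "ping"}],
        "max_tokens": 16,
    }
    return url, payload


# Usage: the deployment name doubles as the path segment, so no "model"
# field is strictly required in the body for Azure deployments.
url, payload = build_chat_completions_request(
    "https://example.openai.azure.com", "gpt-4o")
print(url)
```

Normalizing `api_base` with `rstrip('/') + '/'` before `urljoin` matters: without the trailing slash, `urljoin` would replace the last path segment of the base URL instead of appending to it.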
Hi @alex3267006

Release Suggestions
Module: aks-agent

Notes

Related command

This pull request introduces major improvements to the AKS Agent extension, focusing on simplifying LLM model setup and management for users. The most significant change is the addition of the new az aks agent-init command, which enables interactive configuration of LLM providers and models, removing the need for manual environment variable setup. Documentation and help text have been updated to reflect these changes, and new backend modules have been added to support provider selection and configuration management. There are also code changes to ensure configuration consistency and prevent conflicts.

User Experience and CLI Improvements
- Added the az aks agent-init command, allowing users to interactively add and configure LLM models for AKS troubleshooting. This command saves model configurations locally, supports multiple providers, and guides users through the setup process.
- Updated README.rst and the CLI help to highlight the new workflow: users no longer need to set environment variables manually, and can easily select or switch models using --model or --config-file.

Configuration Management and Validation
- Introduced LLMConfigManager (llm_config_manager.py) for robust loading, saving, and retrieval of LLM model configurations, ensuring the last used or specifically selected model is used by default.
- Added a provider registry in llm_providers/__init__.py, making it easy for users to choose among Azure, OpenAI, Anthropic, Gemini, and other compatible providers.

Provider Support and Validation
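The "last used or specifically selected model" behavior attributed to LLMConfigManager above could look roughly like the following. Only the class name comes from the PR; the JSON file layout and the method names `save_model`/`get_model` are assumptions made for this sketch.

```python
import json
import os
import tempfile


class LLMConfigManager:
    """Minimal sketch: persist model configs and remember the last-used one."""

    def __init__(self, path):
        self.path = path

    def _load(self):
        # Return an empty structure on first use instead of failing.
        if not os.path.exists(self.path):
            return {"models": {}, "last_used": None}
        with open(self.path, encoding="utf-8") as f:
            return json.load(f)

    def save_model(self, name, config):
        data = self._load()
        data["models"][name] = config
        data["last_used"] = name  # newest entry becomes the default
        with open(self.path, "w", encoding="utf-8") as f:
            json.dump(data, f, indent=2)

    def get_model(self, name=None):
        # An explicit name wins; otherwise fall back to the last-used model.
        data = self._load()
        name = name or data["last_used"]
        return name, data["models"].get(name)


# Usage: store a model config, then retrieve the default selection.
path = os.path.join(tempfile.mkdtemp(), "llm_config.json")
mgr = LLMConfigManager(path)
mgr.save_model("gpt-4o", {"provider": "azure",
                          "api_base": "https://example.openai.azure.com"})
name, cfg = mgr.get_model()
print(name, cfg["provider"])
```

Storing the `last_used` key alongside the models is what lets a bare az aks agent command pick up the most recent configuration without any --model flag.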
This checklist is used to make sure that common guidelines for a pull request are followed.

General Guidelines
- Have you run azdev style <YOUR_EXT> locally? (pip install azdev required)
- Have you run python scripts/ci/test_index.py -q locally? (pip install wheel==0.30.0 required)

For new extensions:
About Extension Publish

There is a pipeline to automatically build, upload and publish extension wheels. Once your pull request is merged into the main branch, a new pull request will be created to update src/index.json automatically. You only need to update the version information in setup.py and the historical information in HISTORY.rst in your PR; do not modify src/index.json.