Add initial setup scripts and configuration for HAVPE Relay #314

AnkushMalaker wants to merge 4 commits into `dev`
Conversation
- Introduced `init.py` for interactive configuration of the ESP32 Voice-PE TCP-to-WebSocket relay, allowing users to set backend URLs and authentication credentials.
- Added `flash.sh` for flashing ESPHome firmware to the ESP32, including error handling for missing secrets.
- Created `.gitignore` files for both the main project and firmware directories to exclude build artifacts and sensitive information.
- Updated `docker-compose.yml` to include environment file support and an improved command structure for backend URL configuration.
- Enhanced `README.md` with setup instructions and an architecture overview for better user guidance.
- Added initial firmware configurations and templates for ESPHome, including secrets management and TCP streaming setup.
- Updated `pyproject.toml` to reflect new dependencies and versioning for the project.

This commit lays the groundwork for the HAVPE Relay project, ensuring a smooth setup and configuration process for users.
Important: Review skipped. Auto reviews are disabled on this repository; please check the settings in the CodeRabbit UI.
📝 Walkthrough

A complete TCP-to-WebSocket relay system for ESP32 Voice-PE devices is introduced, including firmware configurations, relay server implementations, interactive setup scripts, and comprehensive documentation for the HAVPE architecture.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Device as ESP32 Device
    participant Relay as HAVPE Relay<br/>(main.py)
    participant Backend as Chronicle Backend
    Device->>Relay: TCP connect
    Relay->>Relay: Accept connection
    Relay->>Backend: Request JWT token<br/>(get_jwt_token)
    Backend-->>Relay: Access token
    Relay->>Backend: WebSocket connect<br/>(with JWT auth)
    Backend-->>Relay: Connected
    Relay-->>Device: Ready (implicit)
    Device->>Relay: Send TCP frame<br/>[MSG_AUDIO][len][PCM data]
    Relay->>Relay: Parse framed message
    Relay->>Backend: Forward audio chunk<br/>(Wyoming format)
    Backend-->>Relay: ACK
    Device->>Relay: Send TCP frame<br/>[MSG_BUTTON][len][code]
    Relay->>Relay: Parse button event
    Relay->>Backend: Forward button event<br/>(with readable name)
    Backend-->>Relay: ACK
    Device->>Relay: Disconnect
    Relay->>Relay: Close WebSocket
    Relay->>Backend: Disconnect
    Backend-->>Relay: Closed
```
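The TCP frames in the diagram use a simple `[type][length][payload]` layout: per the firmware's `send_msg`, a 1-byte type followed by a 2-byte big-endian length. A minimal relay-side parser for that framing might look like the sketch below; the `MSG_AUDIO`/`MSG_BUTTON` type values are illustrative assumptions, not taken from the actual firmware headers:

```python
import struct

MSG_AUDIO = 0x01   # assumed type codes; the real values live in the firmware
MSG_BUTTON = 0x02

def parse_frames(buf: bytes):
    """Split a byte stream into complete (msg_type, payload) frames.

    Returns (frames, leftover) where leftover holds any trailing
    partial frame to be retried once more bytes arrive.
    """
    frames = []
    offset = 0
    while len(buf) - offset >= 3:
        msg_type = buf[offset]
        # big-endian u16, matching the firmware's {len >> 8, len & 0xFF}
        (length,) = struct.unpack_from(">H", buf, offset + 1)
        if len(buf) - offset - 3 < length:
            break  # incomplete frame; wait for more data
        frames.append((msg_type, buf[offset + 3 : offset + 3 + length]))
        offset += 3 + length
    return frames, buf[offset:]
```

Feeding it a complete audio frame followed by a truncated button frame yields the parsed frame plus the leftover bytes, which a relay loop would prepend to the next `recv()`.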
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~75 minutes

🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed (1 warning)

✏️ Tip: You can configure your own custom pre-merge checks in the settings.
@coderabbitai review

✅ Actions performed: Review triggered.
Actionable comments posted: 15
🧹 Nitpick comments (4)
extras/havpe-relay/init.py (2)
335-342: Auth username (email) displayed in cleartext in summary.

The summary prints the full username/email. While this is a local interactive tool, consider using `mask_value` (available via the import on line 25) to partially mask the email for consistency with how the password is handled.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@extras/havpe-relay/init.py` around lines 335 - 342, The Auth Username is printed in cleartext in the summary; update the display code that calls self.console.print for AUTH_USERNAME to pass the retrieved value through the existing mask_value helper (imported on line 25) so the email is partially masked (e.g., show masked = mask_value(self.config.get('AUTH_USERNAME', '')) and print that masked value, preserving the same label and handling empty/missing values the same way as other fields like AUTH_PASSWORD.
148-169: Remove extraneous `f` prefixes on lines 152 and 168.

These f-strings contain no interpolation placeholders.

🧹 Proposed fix

```diff
 self.console.print(
-    f"[green][SUCCESS][/green] Username configured from command line"
+    "[green][SUCCESS][/green] Username configured from command line"
 )
 self.console.print(
-    f"[green][SUCCESS][/green] Password configured from command line"
+    "[green][SUCCESS][/green] Password configured from command line"
 )
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@extras/havpe-relay/init.py` around lines 148 - 169, The two console.print calls use unnecessary f-strings with no interpolation (e.g., f"[green][SUCCESS][/green] Username configured from command line" and f"[green][SUCCESS][/green] Password configured from command line"); update these in the init logic that sets AUTH_USERNAME and password handling by removing the leading f so they are plain strings, leaving the surrounding logic in methods like prompt_value and read_existing_env_value unchanged.

extras/havpe-relay/firmware/tcp_stream.h (1)
1-4: Header contains only includes — no declarations or interfaces.

This is just a forwarding header for lwIP includes. If this is intentional (e.g., included by ESPHome YAML lambdas as a convenience), it's fine, but consider adding a brief doc comment explaining its purpose to avoid confusion for future contributors.

The clang error for `lwip/sockets.h` is a false positive — these headers are provided by ESP-IDF at build time on the target.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@extras/havpe-relay/firmware/tcp_stream.h` around lines 1 - 4, The header tcp_stream.h currently only forwards lwip headers (lwip/sockets.h, lwip/inet.h, lwip/errno.h) which is confusing to readers; add a brief top-of-file doc comment explaining that this is an intentional convenience/forwarding header for ESPHome/ESP-IDF builds (used by YAML lambdas), and note that clang diagnostics for lwip/sockets.h are false positives because those headers are provided at target build time; optionally include a short comment about why `#pragma` once is used and, if desired, a single-line suppression/conditional (e.g., a comment mentioning toolchain-provided headers) to help future contributors.

extras/havpe-relay/README.md (1)
7-10: Add fence languages for markdownlint compatibility.

Lines 7 and 152 use unlabeled fenced blocks (`MD040` warning).

📝 Suggested doc-only fix

````diff
-```
+```text
 ESP32 Voice-PE ──TCP:8989──► HAVPE Relay ──WebSocket──► Chronicle Backend
 (32-bit stereo)              (16-bit mono)              (/ws?codec=pcm)
-```
+```
 ...
-```
+```text
 havpe-relay/
 ├── main.py          # Relay server
 ├── init.py          # Setup wizard
 ...
 └── pyproject.toml   # Python dependencies
-```
+```
````

Also applies to: 152-167
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@extras/havpe-relay/README.md` around lines 7 - 10, The markdown uses unlabeled fenced blocks causing MD040 warnings; update the two fenced code blocks in README.md (the ASCII diagram block and the file-tree block) to include a language label such as "text" by changing the opening fences from ``` to ```text for the diagram near the ESP32/Relay/Chronicle lines and the file-list block around the havpe-relay/ tree so both fenced sections are explicitly labeled.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@extras/havpe-relay/.gitignore`:
- Around line 1-12: Add the generated .env to .gitignore so secrets from init.py
(e.g., AUTH_PASSWORD and other credentials) are not committed; edit the
repository .gitignore to include a line with ".env" (and consider adding related
patterns like ".env.local" or "*.env" if appropriate) to ensure the generated
environment file is excluded from version control.
In `@extras/havpe-relay/docker-compose.yml`:
- Around line 7-10: The host-to-container port mapping currently hardcodes the
container side to 8989 causing a mismatch when TCP_PORT is overridden; update
the port mapping in docker-compose.yml to use the same variable for both sides
(e.g., replace "${TCP_PORT:-8989}:8989" with
"${TCP_PORT:-8989}:${TCP_PORT:-8989}") so the container port and the environment
variable TCP_PORT remain consistent with the existing environment entry
(TCP_PORT=${TCP_PORT:-8989}).
In `@extras/havpe-relay/firmware/chronicle-sdk/chronicle.h`:
- Around line 69-79: The send_msg function currently assumes the header write
(lwip_send(sockfd, hdr, 3, 0)) always returns 3; change it to handle short
writes the same way as the payload loop: after checking sockfd and preparing
hdr, loop until all 3 header bytes are sent, calling lwip_send with hdr+sent and
3-sent, handle n<=0 by calling disconnect() and returning false; only proceed to
the payload loop once the header is fully written. Reference symbols: send_msg,
hdr, sockfd, lwip_send, disconnect.
In `@extras/havpe-relay/firmware/voice-chronicle.yaml`:
- Around line 239-289: The on_data lambda currently sends raw bytes via
lwip_send (references: on_data, sockfd, lwip_send, announced) which is
incompatible with the framed protocol parsed by extras/havpe-relay/main.py; fix
by framing each send using the same protocol as chronicle::send_audio (include a
message type byte, a length field, then the payload) before calling lwip_send,
or alternatively add a negotiated/raw-stream flag (e.g., a boolean in the
firmware config and a matching mode in the relay) so both sides agree on raw vs
framed mode; ensure the framing logic mirrors chronicle::send_audio exactly and
update id(tcp_ok) and announced handling to only flip after a successful framed
handshake/send.
In `@extras/havpe-relay/firmware/voice-tcp.yaml`:
- Around line 186-193: Replace the VLA and unsafe cast with a std::vector and
byte-safe reads: allocate mono as std::vector<int16_t> mono(n_frames); use a
uint8_t* (or auto ptr = x.data()) and for each frame memcpy 4 bytes into a local
int32_t temp (e.g. memcpy(&temp, ptr + i*8 + channel*4, 4)) then compute mono[i]
= static_cast<int16_t>(temp >> 16); this removes the (int32_t*)x.data() cast and
the int16_t mono[n_frames] VLA while preserving the same extraction logic from
samples and n_frames.
In `@extras/havpe-relay/init.py`:
- Around line 100-101: The variable default_ws is defined but never used; either
remove default_ws or use it when computing auto_ws. Fix by deleting the unused
default_ws assignment (leave default_http) or replace the auto_ws derivation to
fall back to default_ws (ensure auto_ws uses default_ws when no explicit ws URL
is provided). Update references in init.py around the default_* declarations and
the auto_ws computation so there is no dead code and the WebSocket default is
properly applied.
- Around line 238-245: The socket created in the block that sets default_ip
(variable s) can leak if s.connect() or s.getsockname() raises; change the code
to ensure the socket is always closed by using a context manager (with
socket.socket(...) as s) or a try/finally that calls s.close() after the
connect/getsockname operations inside the function or initialization logic that
defines default_ip, so the socket is closed on both success and exception paths.
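The fix the prompt above describes can be sketched as follows; `detect_local_ip` and the probe address are illustrative, not the actual `init.py` code (a UDP `connect()` sends no packets, so any routable address works as a probe):

```python
import socket

def detect_local_ip(probe_addr: tuple[str, int] = ("8.8.8.8", 80)) -> str:
    """Return the local IP the OS would use to reach probe_addr.

    The with-block guarantees the socket is closed even if connect()
    or getsockname() raises, fixing the leak noted in the review.
    """
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.connect(probe_addr)          # sets the peer; no packet is sent for UDP
        return s.getsockname()[0]
```

A `try`/`finally` calling `s.close()` achieves the same guarantee where a context manager is awkward.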
- Around line 73-81: Update the return type annotations for
read_existing_env_value and read_backend_env_value from -> str to ->
Optional[str] because both can return None; import Optional from typing, update
the docstrings if desired, and ensure callers handle the None case; the methods
to change are read_existing_env_value and read_backend_env_value (which checks
self.backend_env_path.exists() and calls read_env_value).
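The corrected annotation might look like this hedged sketch; the body is illustrative, not the real `read_env_value` implementation:

```python
from pathlib import Path
from typing import Optional

def read_env_value(path: Path, key: str) -> Optional[str]:
    """Return the value for key in a KEY=VALUE env file, or None if absent.

    The Optional[str] return type makes the None case explicit, so
    callers (and type checkers) must handle a missing file or key.
    """
    if not path.exists():
        return None
    for line in path.read_text().splitlines():
        line = line.strip()
        if line.startswith(f"{key}="):
            return line.split("=", 1)[1]
    return None
```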
- Around line 405-416: Replace the plain "--password" CLI flag with safer
options and update the credential-loading logic: add "--password-file" (path to
a file) and "--password-env" (environment variable name) arguments alongside
"--username" and adjust setup_auth_credentials to prefer password from
args.password_env (os.environ.get), then args.password_file (read file
securely), and fall back to interactive prompt only if neither is provided; also
update help text to warn that passing a password directly on the CLI is insecure
and remove or deprecate the original "--password" flag usage.
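A minimal sketch of the suggested precedence; the flag names follow the prompt above, but the helper names and structure are otherwise assumptions:

```python
import argparse
import os
from pathlib import Path
from typing import Optional

def build_parser() -> argparse.ArgumentParser:
    parser = argparse.ArgumentParser()
    parser.add_argument("--username")
    parser.add_argument("--password-env",
                        help="Name of an environment variable holding the password")
    parser.add_argument("--password-file",
                        help="Path to a file holding the password "
                             "(safer than passing it on the command line)")
    return parser

def resolve_password(args: argparse.Namespace) -> Optional[str]:
    """Prefer an env var, then a file; None means fall back to an interactive prompt."""
    if args.password_env:
        value = os.environ.get(args.password_env)
        if value:
            return value
    if args.password_file:
        return Path(args.password_file).read_text().strip()
    return None
```

The interactive prompt remains the last resort, so the tool stays usable with no flags at all.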
In `@extras/havpe-relay/init.sh`:
- Around line 1-3: The script is CWD-dependent because the uv run invocation
uses relative paths; update init.sh to first resolve the script directory (using
dirname "$0" / cd or a SCRIPT_DIR variable) and change to that directory (or use
pushd) before sourcing ../../scripts/check_uv.sh and running uv run
--with-requirements ../../setup-requirements.txt python init.py "$@"; ensure you
restore original CWD (popd or cd -) if needed. This targets the init.sh file and
the uv run invocation so the relative ../../ paths resolve from the script
location rather than the caller's CWD.
In `@extras/havpe-relay/main.py`:
- Around line 14-16: The WebSocket URI is being built by interpolating token and
device_name directly, which will break on reserved characters; before forming
the ws URI used in the connect/create_connection call (the code that uses token
and device_name to build the query string, referenced as token, device_name and
the websocket connect call around line ~70), URL-encode those parameters using
urllib.parse.quote_plus or urllib.parse.urlencode and then assemble the full URI
with the encoded query string so the websocket.connect/create_connection call
receives a valid percent-encoded URI.
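The encoding step might look like this; the `token`/`device_name` parameter names follow the prompt above, and the base URL is a placeholder:

```python
from urllib.parse import urlencode

def build_ws_uri(base: str, token: str, device_name: str) -> str:
    """Assemble the WebSocket URI with percent-encoded query parameters.

    urlencode() handles reserved characters (+, /, =, spaces) that would
    otherwise corrupt the query string if interpolated directly.
    """
    query = urlencode({"codec": "pcm", "token": token, "device_name": device_name})
    return f"{base}?{query}"
```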
In `@extras/havpe-relay/menu_relay.py`:
- Around line 193-198: start and stop currently call
self.bg.run_coro(self._start_server()) and self.bg.run_coro(self._stop_server())
and drop the returned concurrent.futures.Future, which hides exceptions; change
both start and stop to capture the future returned from bg.run_coro(...) and
attach a done callback that checks fut.exception() and logs it (e.g., using
self.logger.exception or self.logger.error) so any errors raised by
_start_server or _stop_server are surfaced and recorded. Ensure the callback
references the Future from bg.run_coro and does not swallow exceptions.
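One way to surface those errors, sketched with a plain `concurrent.futures.Future`; the helper name and logger are illustrative, not the actual `menu_relay.py` code:

```python
import concurrent.futures
import logging

logger = logging.getLogger("havpe-relay")

def log_future_errors(fut: concurrent.futures.Future, label: str) -> None:
    """Attach a done-callback that logs exceptions instead of dropping them."""
    def _done(f: concurrent.futures.Future) -> None:
        exc = f.exception()
        if exc is not None:
            logger.error("%s failed: %r", label, exc)
    fut.add_done_callback(_done)
```

In `menu_relay.py` this would wrap the futures returned by `self.bg.run_coro(...)` in both `start` and `stop`.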
In `@extras/havpe-relay/pyproject.toml`:
- Around line 10-11: Remove "rumps" from the main dependency list in
pyproject.toml and declare it as an optional/extra macOS-only dependency (use
the project optional-dependencies or extras section and a platform marker like
sys_platform == "darwin") so it is not installed in Linux Docker builds; ensure
the macOS-only consumer file menu_relay.py is documented or updated to require
that extra (e.g., install package[rumps]) and leave main.py unchanged so the
relay service does not pull in rumps.
In `@extras/havpe-relay/test_audio_output.py`:
- Around line 144-167: The function list_entities is annotated as returning None
but actually returns the entities list; update the return type to match the
implementation (e.g., change the signature from async def list_entities(...) ->
None to -> list[aioesphomeapi.EntityInfo] or the appropriate Sequence/Iterable
type) and adjust any imports or typing references as needed; ensure the
docstring or callers expecting None are updated to handle the returned entities
variable and keep the function name list_entities and the final return entities
unchanged.
- Around line 247-261: The temporary HTTP server started by start_http_server is
not guaranteed to be shut down if an exception occurs after creation; ensure
http_server is always torn down by calling http_server.shutdown() in the outer
finally block (or a dedicated finally that runs before await
client.disconnect()), guarding with a presence check (e.g., if http_server is
not None) so shutdown is safe; update the code around start_http_server, the
variable http_server, and the finally section to guarantee shutdown even on
errors (alternatively wrap start_http_server in a context manager that ensures
cleanup).
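The teardown pattern can be sketched as follows; `serve_temporarily` stands in for the real `start_http_server` and is an assumption:

```python
import functools
import http.server
import threading

def serve_temporarily(directory: str = ".") -> http.server.ThreadingHTTPServer:
    """Start a throwaway HTTP server on an ephemeral port (caller must shut it down)."""
    handler = functools.partial(http.server.SimpleHTTPRequestHandler, directory=directory)
    server = http.server.ThreadingHTTPServer(("127.0.0.1", 0), handler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server

def run_with_cleanup() -> int:
    """Pattern from the review: the server is shut down even if the body raises."""
    http_server = None
    try:
        http_server = serve_temporarily()
        port = http_server.server_address[1]
        # ... hand http://<host>:<port>/<file> to the device and await playback ...
        return port
    finally:
        if http_server is not None:
            http_server.shutdown()      # guaranteed on success and error paths
            http_server.server_close()
```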
ℹ️ Review info
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Knowledge base: Disabled due to Reviews -> Disable Knowledge Base setting
⛔ Files ignored due to path filters (1)
`extras/havpe-relay/uv.lock` is excluded by `!**/*.lock`
📒 Files selected for processing (16)
- extras/havpe-relay/.gitignore
- extras/havpe-relay/README.md
- extras/havpe-relay/docker-compose.yml
- extras/havpe-relay/firmware/.gitignore
- extras/havpe-relay/firmware/chronicle-sdk/chronicle.h
- extras/havpe-relay/firmware/secrets.template.yaml
- extras/havpe-relay/firmware/tcp_stream.h
- extras/havpe-relay/firmware/voice-chronicle.yaml
- extras/havpe-relay/firmware/voice-tcp.yaml
- extras/havpe-relay/flash.sh
- extras/havpe-relay/init.py
- extras/havpe-relay/init.sh
- extras/havpe-relay/main.py
- extras/havpe-relay/menu_relay.py
- extras/havpe-relay/pyproject.toml
- extras/havpe-relay/test_audio_output.py
```text
# ESPHome build artifacts
firmware/.esphome/

# Firmware secrets (generated by init.py)
firmware/secrets.yaml

# Audio recordings (debug mode)
audio_chunks/

# Python
__pycache__/
*.pyc
```
Missing .env from gitignore — secrets will be committed.
init.py generates a .env file containing AUTH_PASSWORD and other credentials. This file should be excluded from version control.
🐛 Proposed fix

```diff
 # Firmware secrets (generated by init.py)
 firmware/secrets.yaml

+# Environment file (generated by init.py, contains credentials)
+.env
+
 # Audio recordings (debug mode)
 audio_chunks/
```
| - "${TCP_PORT:-8989}:8989" | ||
| env_file: .env | ||
| environment: | ||
| # Connect to backend running on host (adjust as needed) | ||
| - WS_URL=${WS_URL:-ws://host.docker.internal:8000/ws?codec=pcm} | ||
| - TCP_PORT=${TCP_PORT:-8989} |
Container-side port is hardcoded — changing TCP_PORT will break connectivity.
The port mapping "${TCP_PORT:-8989}:8989" binds the host side dynamically but the container side is always 8989. If a user sets TCP_PORT=9000, the application inside the container listens on 9000, but Docker still forwards traffic to container port 8989, causing a mismatch.
🐛 Proposed fix

```diff
 ports:
-  - "${TCP_PORT:-8989}:8989"
+  - "${TCP_PORT:-8989}:${TCP_PORT:-8989}"
```
```cpp
static bool send_msg(uint8_t type, const uint8_t* data, uint16_t len) {
  if (sockfd < 0) return false;
  uint8_t hdr[3] = {type, (uint8_t)(len >> 8), (uint8_t)(len & 0xFF)};
  if (lwip_send(sockfd, hdr, 3, 0) != 3) { disconnect(); return false; }
  if (len > 0) {
    size_t sent = 0;
    while (sent < len) {
      ssize_t n = lwip_send(sockfd, data + sent, len - sent, 0);
      if (n <= 0) { disconnect(); return false; }
      sent += n;
    }
```
🧩 Analysis chain
🌐 Web query:
lwip_send() partial write short-write TCP socket behavior documentation
💡 Result:
In lwIP’s BSD-sockets layer, lwip_send() / lwip_write() can legitimately short-write (return a positive value < len) on TCP—so you must code it like POSIX send() and loop until all bytes are sent or an error occurs.
Key lwIP-specific gotchas documented by lwIP maintainers/users:

- Nonblocking + "too large" writes may fail instead of short-write. In nonblocking mode (or with `MSG_DONTWAIT`), lwIP historically checks whether the request can ever fit into the configured TCP send buffering/queuing (`TCP_SND_BUF`, `TCP_SND_QUEUELEN`) and may return `-1` with `errno=EMSGSIZE` rather than sending "as much as possible". This behavior is discussed as a lwIP bug/quirk in the lwIP users list. [1]
- "ERR_OK doesn't imply 'all bytes written' unless you asked for a byte-count." Simon Goldschmidt (lwIP maintainer) explicitly notes that for nonblocking/timeout-like behavior you must use the "partly" API so you can learn how many bytes were actually accepted; otherwise you can get `ERR_OK` even though not all bytes were written. [2]
- The netconn API documents an all-or-nothing nonblocking option (not partial). The official lwIP `netconn_write*` docs describe `NETCONN_DONTBLOCK` as "only write the data if all data can be written at once" (i.e., not "write what you can"). Since the sockets API maps to netconn internally, this is part of why lwIP can differ from the "short-write" expectation in some nonblocking cases. [3]
Practical takeaway: implement a send loop that handles (a) short positive returns, and (b) -1 with EWOULDBLOCK/EAGAIN/timeouts depending on your lwIP port/config; and for nonblocking semantics prefer the APIs/paths that actually return bytes_written (lwIP’s *_partly functions internally). [2][3]
Sources
[1] lwIP users thread: nonblocking lwip_send() returning EMSGSIZE instead of partial send when request exceeds TCP_SND_BUF/TCP_SND_QUEUELEN.
[2] Simon Goldschmidt on lwIP users list: need netconn_write_partly() to get “number of actually written bytes”; ERR_OK can happen without all bytes written.
[3] lwIP 2.1.0 api.h Doxygen: NETCONN_DONTBLOCK = “only write the data if all data can be written at once”; bytes_written provides actual written count.
🏁 Script executed:

```shell
# Verify the file exists and check the exact code at lines 69-79
fd -e h "chronicle.h" extras/havpe-relay/firmware/chronicle-sdk/ || echo "File not found via fd"

# Alternative: check if the path exists
if [ -f "extras/havpe-relay/firmware/chronicle-sdk/chronicle.h" ]; then
  echo "=== File exists, showing lines 69-79 ==="
  sed -n '69,79p' "extras/havpe-relay/firmware/chronicle-sdk/chronicle.h"
else
  echo "File does not exist at specified path"
fi
```

Repository: SimpleOpenSoftware/chronicle
Handle short writes for header send to prevent frame desynchronization.
Line 72 assumes lwip_send(..., 3, ...) writes all 3 bytes in one call. However, lwIP's BSD-sockets layer can legitimately short-write on TCP (return a positive value smaller than requested), as documented by lwIP maintainers. Without looping, the header will become misaligned with the payload, corrupting frame boundaries. The payload send loop (lines 76–80) already handles this; the header must too.
🔧 Proposed fix

```diff
 static bool send_msg(uint8_t type, const uint8_t* data, uint16_t len) {
   if (sockfd < 0) return false;
   uint8_t hdr[3] = {type, (uint8_t)(len >> 8), (uint8_t)(len & 0xFF)};
-  if (lwip_send(sockfd, hdr, 3, 0) != 3) { disconnect(); return false; }
+  auto send_all = [&](const uint8_t* buf, size_t total) -> bool {
+    size_t sent = 0;
+    while (sent < total) {
+      ssize_t n = lwip_send(sockfd, buf + sent, total - sent, 0);
+      if (n <= 0) { disconnect(); return false; }
+      sent += static_cast<size_t>(n);
+    }
+    return true;
+  };
+  if (!send_all(hdr, sizeof(hdr))) return false;
   if (len > 0) {
-    size_t sent = 0;
-    while (sent < len) {
-      ssize_t n = lwip_send(sockfd, data + sent, len - sent, 0);
-      if (n <= 0) { disconnect(); return false; }
-      sent += n;
-    }
+    if (!send_all(data, len)) return false;
   }
   return true;
 }
```
```yaml
on_data:
  then:
    - lambda: |-
        static int sockfd = -1;
        static bool announced = false;
        static uint32_t last_connect_attempt = 0;
        id(tcp_ok) = (sockfd >= 0);

        // (Re)open TCP socket if needed - with 3-second cooldown
        if (sockfd < 0) {
          uint32_t now = millis();
          if (now - last_connect_attempt < 3000) {
            return; // wait before retrying
          }
          last_connect_attempt = now;

          sockfd = lwip_socket(AF_INET, SOCK_STREAM, 0);
          if (sockfd >= 0) {
            struct sockaddr_in dest {};
            dest.sin_family = AF_INET;
            dest.sin_port = htons(atoi("${server_port}"));
            inet_pton(AF_INET, "${server_ip}", &dest.sin_addr);
            int rc = lwip_connect(sockfd, (struct sockaddr *)&dest, sizeof(dest));
            if (rc < 0) {
              ESP_LOGE("tcp", "connect() failed, errno=%d", errno);
              lwip_close(sockfd);
              sockfd = -1;
            } else {
              ESP_LOGI("tcp", "Socket %d connected to %s:%s",
                       sockfd, "${server_ip}", "${server_port}");
            }
          } else {
            ESP_LOGE("tcp", "socket() failed, errno=%d", errno);
          }
          return; // skip sending on the connect attempt frame
        }

        // Send raw audio data
        if (sockfd >= 0) {
          ssize_t n = lwip_send(sockfd, x.data(), x.size(), 0);
          if (n < 0) {
            ESP_LOGE("tcp", "send() errno=%d - closing & retrying", errno);
            lwip_close(sockfd);
            sockfd = -1;
            announced = false;
            id(tcp_ok) = false;
          } else if (!announced) {
            ESP_LOGI("tcp", "Streaming audio to relay...");
            announced = true;
          }
        }
```
on_data transport protocol is incompatible with the current relay parser.

This block sends raw bytes (`lwip_send(sockfd, x.data(), x.size(), 0)`), but `extras/havpe-relay/main.py` parses framed messages (`[type][length][payload]`). With the current relay, this will misinterpret the stream.

Please either:

- frame audio via the same protocol used in `extras/havpe-relay/firmware/chronicle-sdk/chronicle.h` (`chronicle::send_audio`), or
- add an explicit raw-stream mode in the relay and document which firmware uses which mode.
```cpp
int32_t* samples = (int32_t*)x.data();
int n_frames = x.size() / 8;  // 8 bytes per stereo frame
if (n_frames <= 0) return;

int16_t mono[n_frames];
for (int i = 0; i < n_frames; i++) {
  mono[i] = (int16_t)(samples[i * 2] >> 16);
}
```
🧩 Analysis chain
🌐 Web query:
Is Variable Length Array VLA valid in ISO C++17 standard
💡 Result:
No. Variable Length Arrays (VLAs) are not part of the ISO C++17 standard—in standard C++ the bound of an array with known size must be a compile-time constant expression, so code like int a[n]; (where n is only known at runtime) is ill-formed in ISO C++17. [3]
Some compilers accept VLAs in C++ as a non-standard extension (notably GCC and Clang), but that code is then non-portable and may be rejected under stricter/pedantic settings. GCC explicitly documents VLA-in-C++ support as an extension. [1] Clang likewise supports it as a GNU/C99-compatibility extension and warns that it’s not permitted in standard C++. [2]
Portable C++17 alternatives: std::vector<T> (dynamic size) or std::unique_ptr<T[]>.
Sources: [1] [2] [3]
🏁 Script executed:
```shell
# Locate and inspect the actual file
find . -name "voice-tcp.yaml" -type f
```
🏁 Script executed:
```shell
# Read the file to see the actual code context
cat -n ./extras/havpe-relay/firmware/voice-tcp.yaml | sed -n '180,200p'
```
🏁 Script executed:
```shell
# Check the file type and broader context
head -50 ./extras/havpe-relay/firmware/voice-tcp.yaml
```
Use a standards-safe conversion path for audio samples.
Lines 186–193 contain two significant portability and safety issues:

- **VLA (Variable Length Array):** `int16_t mono[n_frames];` is not valid ISO C++17; it is only a non-standard compiler extension. While GCC and Clang (used by esp-idf) tolerate it, this creates portability risks and may fail under stricter compiler settings or different optimization levels.
- **Unsafe reinterpret cast:** `(int32_t*)x.data()` casts from `uint8_t*` to `int32_t*`, which violates alignment guarantees. A `uint8_t` pointer has 1-byte alignment, but `int32_t` typically requires 4-byte alignment. This creates strict aliasing violations and risks undefined behavior on embedded systems, potentially causing memory corruption or crashes under optimization.

Replace with a standards-compliant, safe conversion using `std::vector` and explicit `memcpy` for byte-level reads:
🔧 Proposed fix
```diff
-int32_t* samples = (int32_t*)x.data();
-int n_frames = x.size() / 8;  // 8 bytes per stereo frame
-if (n_frames <= 0) return;
-
-int16_t mono[n_frames];
-for (int i = 0; i < n_frames; i++) {
-  mono[i] = (int16_t)(samples[i * 2] >> 16);
-}
-
-chronicle::send_audio((const uint8_t*)mono, n_frames * 2);
+const uint8_t* bytes = x.data();
+size_t n_frames = x.size() / 8;  // 8 bytes per stereo frame
+if (n_frames == 0) return;
+
+std::vector<int16_t> mono;
+mono.reserve(n_frames);
+for (size_t i = 0; i < n_frames; i++) {
+  int32_t left = 0;
+  memcpy(&left, bytes + (i * 8), sizeof(left));  // left channel
+  mono.push_back(static_cast<int16_t>(left >> 16));
+}
+
+chronicle::send_audio(
+    reinterpret_cast<const uint8_t*>(mono.data()),
+    mono.size() * sizeof(int16_t)
+);
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@extras/havpe-relay/firmware/voice-tcp.yaml` around lines 186 - 193, Replace
the VLA and unsafe cast with a std::vector and byte-safe reads: allocate mono as
std::vector<int16_t> mono(n_frames); use a uint8_t* (or auto ptr = x.data()) and
for each frame memcpy 4 bytes into a local int32_t temp (e.g. memcpy(&temp, ptr
+ i*8 + channel*4, 4)) then compute mono[i] = static_cast<int16_t>(temp >> 16);
this removes the (int32_t*)x.data() cast and the int16_t mono[n_frames] VLA
while preserving the same extraction logic from samples and n_frames.
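The extraction logic itself is easy to sanity-check off-device. A Python sketch of the same conversion, assuming little-endian 32-bit samples and 8-byte stereo frames as in the YAML above:

```python
import struct

def stereo32_to_mono16(data: bytes) -> bytes:
    """Keep the top 16 bits of the left channel from 32-bit little-endian stereo frames."""
    n_frames = len(data) // 8  # 8 bytes per stereo frame
    out = bytearray()
    for i in range(n_frames):
        (left,) = struct.unpack_from("<i", data, i * 8)  # left sample only
        out += struct.pack("<h", left >> 16)  # arithmetic shift keeps the sign
    return bytes(out)
```

This mirrors the `samples[i * 2] >> 16` step without any alignment-sensitive casting, which is exactly what the `memcpy`-based C++ fix achieves.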
```python
import os
import struct
```
URL-encode WebSocket query parameters before connect.
Interpolating token/device_name directly into the URI can break if values contain reserved characters.
🔧 Proposed fix
```diff
 import os
 import struct
+from urllib.parse import urlencode
@@
-    backend_uri = f"{BACKEND_WS_URL}/ws?codec=pcm&token={token}&device_name={DEVICE_NAME}"
+    query = urlencode({"codec": "pcm", "token": token, "device_name": DEVICE_NAME})
+    backend_uri = f"{BACKEND_WS_URL}/ws?{query}"
```

Also applies to: 70-70
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@extras/havpe-relay/main.py` around lines 14 - 16, The WebSocket URI is being
built by interpolating token and device_name directly, which will break on
reserved characters; before forming the ws URI used in the
connect/create_connection call (the code that uses token and device_name to
build the query string, referenced as token, device_name and the websocket
connect call around line ~70), URL-encode those parameters using
urllib.parse.quote_plus or urllib.parse.urlencode and then assemble the full URI
with the encoded query string so the websocket.connect/create_connection call
receives a valid percent-encoded URI.
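A quick illustration of what `urlencode` does with reserved characters (the token and device name values here are made up):

```python
from urllib.parse import urlencode

token = "a+b/c=="          # '+', '/', '=' are all reserved in query strings
device_name = "living room"  # contains a space

query = urlencode({"codec": "pcm", "token": token, "device_name": device_name})
print(query)  # → codec=pcm&token=a%2Bb%2Fc%3D%3D&device_name=living+room
```

Interpolated directly, the `+` and `=` in the token would be misparsed by the backend's query-string parser; after encoding, the values round-trip intact.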
```python
def start(self) -> None:
    self.bg.run_coro(self._start_server())

def stop(self) -> None:
    self.bg.run_coro(self._stop_server())
```
🧩 Analysis chain
🏁 Script executed:
```shell
# First, let's read the file to see the context and implementation
head -250 extras/havpe-relay/menu_relay.py | tail -100
```
🏁 Script executed:
```shell
# Also search for the run_coro method definition
rg "def run_coro" -A 10
```
🏁 Script executed:
```shell
# Find where self.bg is instantiated/defined to understand its type
rg "self\.bg\s*=" -B 2 -A 2 extras/havpe-relay/menu_relay.py
```
🌐 Web query:
asyncio.run_coroutine_threadsafe exception handling when future not awaited
💡 Result:
asyncio.run_coroutine_threadsafe(coro, loop) schedules coro on loop and returns a concurrent.futures.Future (not an asyncio.Task/Future). Exceptions raised by coro are captured and stored in that returned future, and are only re-raised if you retrieve them via future.result() / future.exception() (or equivalent). If you never do that, the exception typically won’t surface anywhere automatically. [1]
Practical patterns when you won’t await / block on the returned future
1) “Fire-and-forget, but don’t lose exceptions”: add a done-callback that consumes/logs
```python
import asyncio, logging

log = logging.getLogger(__name__)

def log_cf_future(f: "concurrent.futures.Future"):
    try:
        f.result()  # re-raises the coroutine exception here
    except Exception:
        log.exception("run_coroutine_threadsafe task failed")

cf = asyncio.run_coroutine_threadsafe(coro(), loop)
cf.add_done_callback(log_cf_future)
```

`add_done_callback()` is the standard way to ensure completion/exception handling without blocking. [1]
2) If you can wait sometimes, call result(timeout=...)
This both waits and propagates exceptions (or lets you cancel on timeout). [1]
Related gotcha
If instead of run_coroutine_threadsafe() you create background tasks with asyncio.create_task(), you must keep a strong reference (and typically attach callbacks) to avoid tasks “disappearing” mid-execution. [2]
Sources: Python docs on asyncio.run_coroutine_threadsafe() and its exception propagation via the returned concurrent.futures.Future [1], and asyncio.create_task() guidance on background tasks/callbacks [2].
[1] (docs.python.org)
[2] (docs.python.org)
Citations:
- 1: https://docs.python.org/3/library/asyncio-task.html
- 2: https://docs.python.org/3/library/asyncio-task.html
Surface exceptions from start() and stop() coroutines; don't drop the returned futures.
When run_coro() drops the returned concurrent.futures.Future, any exception raised by _start_server or _stop_server (e.g., port binding failures, network issues) will be silently hidden. Add a done-callback to catch and log these exceptions.
🔧 Proposed fix
```diff
     def start(self) -> None:
-        self.bg.run_coro(self._start_server())
+        fut = self.bg.run_coro(self._start_server())
+        fut.add_done_callback(self._on_bg_task_done)

     def stop(self) -> None:
-        self.bg.run_coro(self._stop_server())
+        fut = self.bg.run_coro(self._stop_server())
+        fut.add_done_callback(self._on_bg_task_done)
+
+    def _on_bg_task_done(self, fut) -> None:
+        try:
+            fut.result()
+        except Exception as e:
+            logger.exception("Relay background task failed")
+            self.state.update(status="error", error=str(e))
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@extras/havpe-relay/menu_relay.py` around lines 193 - 198, start and stop
currently call self.bg.run_coro(self._start_server()) and
self.bg.run_coro(self._stop_server()) and drop the returned
concurrent.futures.Future, which hides exceptions; change both start and stop to
capture the future returned from bg.run_coro(...) and attach a done callback
that checks fut.exception() and logs it (e.g., using self.logger.exception or
self.logger.error) so any errors raised by _start_server or _stop_server are
surfaced and recorded. Ensure the callback references the Future from
bg.run_coro and does not swallow exceptions.
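The pattern can be demonstrated end to end with a stand-in coroutine (the names here are hypothetical, not the actual `menu_relay.py` code): without the done-callback, the simulated bind failure would vanish silently; with it, the error is captured.

```python
import asyncio
import concurrent.futures
import threading
import time

def run_loop_in_thread() -> asyncio.AbstractEventLoop:
    loop = asyncio.new_event_loop()
    threading.Thread(target=loop.run_forever, daemon=True).start()
    return loop

async def failing_start_server() -> None:
    raise OSError("port already in use")  # simulated bind failure

errors: list[str] = []

def on_done(fut: concurrent.futures.Future) -> None:
    try:
        fut.result()  # re-raises the coroutine's exception here
    except Exception as e:
        errors.append(str(e))  # surfaced instead of silently dropped

loop = run_loop_in_thread()
fut = asyncio.run_coroutine_threadsafe(failing_start_server(), loop)
fut.add_done_callback(on_done)
concurrent.futures.wait([fut])
time.sleep(0.1)  # give the done-callback a moment to run
print(errors)
```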
```toml
"rumps>=0.4.0",
"python-dotenv>=1.0.0",
```
🧩 Analysis chain
🏁 Script executed:
```shell
#!/bin/bash
# Check if rumps is imported in the codebase
rg -n --type=py 'import rumps|from rumps'
```
🏁 Script executed:
```shell
# Also check the overall structure of the project
find . -name "*.py" -type f | head -20
```
🏁 Script executed:
```shell
# Check if there's a Dockerfile or similar that would be affected
find . -name "Dockerfile*" -o -name "*.dockerfile" -o -name "docker-compose*"
```
🌐 Web query:
rumps package pypi macOS platform compatibility
💡 Result:
rumps on PyPI is macOS-only (a library for building macOS status-bar/menu-bar apps) and is classified for MacOS X / Cocoa only, not Windows or Linux. It also requires PyObjC, so it depends on Apple’s Cocoa APIs and therefore won’t be usable on non-macOS platforms. The latest PyPI release shown is 0.4.0 (Oct 15, 2022). [1]
Source(s):
[1] PyPI project page for rumps (classifiers + requirements + release date) (pypi.org)
🏁 Script executed:
```shell
cat extras/havpe-relay/Dockerfile
```
🏁 Script executed:
```shell
cat extras/havpe-relay/pyproject.toml
```
🏁 Script executed:
```shell
# Check if pyproject.toml uses any platform markers for rumps
rg -A 5 -B 5 'rumps' extras/havpe-relay/pyproject.toml
```
🏁 Script executed:
```shell
# Check if rumps is actually used in menu_relay.py
cat -n extras/havpe-relay/menu_relay.py | head -50
```
🏁 Script executed:
```shell
# Check if there's conditional/optional import handling
rg -A 3 -B 3 'import rumps|from rumps' extras/havpe-relay/menu_relay.py
```
🏁 Script executed:
```shell
# Verify the esphome and aioesphomeapi versions exist
curl -s https://pypi.org/pypi/esphome/json | python3 -c "import sys, json; data=json.load(sys.stdin); print('esphome versions:', sorted([v for v in data['releases'].keys()], key=lambda x: tuple(map(int, x.split('.'))), reverse=True)[:5])"
```
🏁 Script executed:
```shell
curl -s https://pypi.org/pypi/aioesphomeapi/json | python3 -c "import sys, json; data=json.load(sys.stdin); print('aioesphomeapi versions:', sorted([v for v in data['releases'].keys()], key=lambda x: tuple(map(int, x.split('.'))), reverse=True)[:5])"
```
🏁 Script executed:
```shell
# Let me check if the Docker build actually needs menu_relay.py or if it's excluded
rg -n 'menu_relay' extras/havpe-relay/Dockerfile
```
🏁 Script executed:
```shell
# Check what main.py does (what the Docker CMD actually runs)
cat -n extras/havpe-relay/main.py | head -30
```
🏁 Script executed:
```shell
# Verify esphome version - try a different approach
curl -s 'https://pypi.org/pypi/esphome/json' 2>/dev/null | python3 << 'EOF'
import sys, json
try:
    data = json.load(sys.stdin)
    versions = sorted(data['releases'].keys())
    # Check if 2025.12.0 exists
    if '2025.12.0' in data['releases']:
        print("✓ esphome 2025.12.0 EXISTS")
    else:
        latest = versions[-1]
        print(f"✗ esphome 2025.12.0 NOT FOUND. Latest: {latest}")
        print(f"Recent versions: {versions[-5:]}")
except Exception as e:
    print(f"Error: {e}")
EOF
```
🏁 Script executed:
```shell
# Check if there are platform markers or conditional dependencies anywhere
cat extras/havpe-relay/pyproject.toml
```
🏁 Script executed:
```shell
# Confirm Docker build path - what does uv sync do with platform-specific packages?
# Check if uv.lock exists and how it handles platforms
head -50 extras/havpe-relay/uv.lock
```
🏁 Script executed:
```shell
# Final check: verify that uv will try to install rumps on Linux (check uv.lock for rumps)
rg -A 5 'name = "rumps"' extras/havpe-relay/uv.lock
```
🏁 Script executed:
```shell
# Check if esphome package exists at all on PyPI
curl -s 'https://pypi.org/pypi/esphome/json' 2>/dev/null | head -c 500
```
🏁 Script executed:
```shell
# Let me check the uv.lock more thoroughly to see if rumps has platform markers
rg -B 5 -A 10 'name = "rumps"' extras/havpe-relay/uv.lock | tail -20
```
🏁 Script executed:
```shell
# Double-check: does the uv.lock have a complete rumps entry with platforms?
sed -n '/^\[\[package\]\]$/,/^\[\[package\]\]$/p' extras/havpe-relay/uv.lock | grep -A 20 'name = "rumps"'
```
🏁 Script executed:
```shell
# Verify aioesphomeapi versions more carefully
curl -s 'https://pypi.org/pypi/aioesphomeapi/json' 2>/dev/null | python3 << 'EOF'
import sys, json
try:
    data = json.load(sys.stdin)
    versions = sorted(data['releases'].keys(), key=lambda x: tuple(map(int, x.split('.'))))
    print("Latest 10 aioesphomeapi versions:")
    for v in versions[-10:]:
        print(f"  {v}")
    if '30.2.0' in data['releases']:
        print("\n✓ aioesphomeapi 30.2.0 EXISTS")
    else:
        print("\n✗ aioesphomeapi 30.2.0 NOT FOUND")
        print(f"Min available: {versions[0]}")
except Exception as e:
    print(f"Error: {e}")
    import traceback
    traceback.print_exc()
EOF
```
🏁 Script executed:
```shell
# Verify esphome versions
curl -s 'https://pypi.org/pypi/esphome/json' 2>/dev/null | python3 << 'EOF'
import sys, json
try:
    data = json.load(sys.stdin)
    versions = sorted(data['releases'].keys(), key=lambda x: tuple(map(int, x.split('.')[:3] if len(x.split('.')) > 2 else x.split('.'))))
    print("Latest 10 esphome versions:")
    for v in versions[-10:]:
        print(f"  {v}")
    if '2025.12.0' in data['releases']:
        print("\n✓ esphome 2025.12.0 EXISTS")
    else:
        print("\n✗ esphome 2025.12.0 NOT FOUND")
except Exception as e:
    print(f"Error: {e}")
    import traceback
    traceback.print_exc()
EOF
```
🏁 Script executed:
```shell
# Check if uv will fail on Linux when trying to install rumps
# Let me see if there are any markers in uv.lock for rumps
cat extras/havpe-relay/uv.lock | grep -A 15 'name = "rumps"' | grep -i marker
```
🏁 Script executed:
```shell
# Let me also verify: does main.py actually get run in Docker, and does it need rumps?
grep -n "import rumps\|from rumps" extras/havpe-relay/main.py
```
rumps is macOS-only and will break the Docker build — move to an optional platform-specific dependency group.
rumps wraps macOS-specific PyObjC APIs. The uv.lock contains no platform markers for rumps, so uv sync will attempt to install it on all platforms, including Linux in the Docker container. Since main.py (the actual relay service) does not import rumps, and only the optional menu_relay.py macOS menu bar app uses it, rumps should be moved to an optional dependency group.
🔧 Proposed fix
```diff
 dependencies = [
     "httpx>=0.27.0",
     "websockets>=15.0.1",
-    "rumps>=0.4.0",
     "python-dotenv>=1.0.0",
 ]
+
+[project.optional-dependencies]
+macos = [
+    "rumps>=0.4.0",
+]
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```toml
dependencies = [
    "httpx>=0.27.0",
    "websockets>=15.0.1",
    "python-dotenv>=1.0.0",
]

[project.optional-dependencies]
macos = [
    "rumps>=0.4.0",
]
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@extras/havpe-relay/pyproject.toml` around lines 10 - 11, Remove "rumps" from
the main dependency list in pyproject.toml and declare it as an optional/extra
macOS-only dependency (use the project optional-dependencies or extras section
and a platform marker like sys_platform == "darwin") so it is not installed in
Linux Docker builds; ensure the macOS-only consumer file menu_relay.py is
documented or updated to require that extra (e.g., install package[rumps]) and
leave main.py unchanged so the relay service does not pull in rumps.
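An alternative to an extras group, if the dependency should stay in the main list, is a PEP 508 environment marker so installers skip it on non-macOS platforms. A sketch, not the committed file:

```toml
dependencies = [
    "httpx>=0.27.0",
    "websockets>=15.0.1",
    "python-dotenv>=1.0.0",
    "rumps>=0.4.0; sys_platform == 'darwin'",
]
```

With this marker, `uv sync` on Linux (including the Docker build) resolves the dependency but does not install it, while macOS users still get `rumps` without requesting an extra.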
```python
async def list_entities(client: aioesphomeapi.APIClient) -> None:
    """List all entities on the device."""
    entities, services = await client.list_entities_services()

    print("\n=== Device Entities ===")
    for entity in entities:
        entity_type = type(entity).__name__.replace("Info", "")
        print(f"  [{entity_type}] {entity.name or entity.object_id} (key={entity.key})")

    print(f"\n=== Services ({len(services)}) ===")
    for service in services:
        print(f"  {service.name}")

    media_players = [
        e for e in entities if isinstance(e, aioesphomeapi.MediaPlayerInfo)
    ]
    if media_players:
        print(f"\nFound {len(media_players)} media player(s):")
        for mp in media_players:
            print(f"  - {mp.name or mp.object_id} (key={mp.key})")
    else:
        print("\nNo media players found! Device needs firmware with media_player component.")

    return entities
```
Align list_entities return annotation with implementation.
The function is declared as -> None but returns entities at Line 167.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@extras/havpe-relay/test_audio_output.py` around lines 144 - 167, The function
list_entities is annotated as returning None but actually returns the entities
list; update the return type to match the implementation (e.g., change the
signature from async def list_entities(...) -> None to ->
list[aioesphomeapi.EntityInfo] or the appropriate Sequence/Iterable type) and
adjust any imports or typing references as needed; ensure the docstring or
callers expecting None are updated to handle the returned entities variable and
keep the function name list_entities and the final return entities unchanged.
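The fix is mechanical; a stripped-down sketch with a stub client (names here are hypothetical, not the real `aioesphomeapi` types) shows the corrected annotation:

```python
import asyncio
from typing import Any

class StubClient:
    """Hypothetical stand-in for aioesphomeapi.APIClient."""
    async def list_entities_services(self) -> tuple[list[Any], list[Any]]:
        return ["media_player_info"], []

async def list_entities(client: StubClient) -> list[Any]:  # was annotated -> None
    entities, _services = await client.list_entities_services()
    return entities

entities = asyncio.run(list_entities(StubClient()))
print(entities)  # → ['media_player_info']
```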
```python
        local_ip = get_local_ip()
        http_server = start_http_server(audio_data, port=args.http_port)

        audio_url = f"http://{local_ip}:{args.http_port}/audio.wav"
        logger.info("Audio URL: %s", audio_url)

        await play_audio(client, audio_url, announcement=not args.media)

        logger.info("Waiting %d seconds for playback to finish...", args.wait)
        await asyncio.sleep(args.wait)

        http_server.shutdown()

    finally:
        await client.disconnect()
```
Always tear down the temporary HTTP server in finally.
If an exception occurs after Line 248, the server may keep running because shutdown is not guaranteed.
🛠️ Proposed fix
```diff
 async def main(args: argparse.Namespace) -> None:
     """Main entry point."""
     client = await connect_to_device(
         device_ip=args.device_ip,
         port=args.port,
         password=args.password,
         noise_psk=args.noise_psk,
     )
+    http_server: HTTPServer | None = None
     try:
         if args.list_entities:
             await list_entities(client)
             return
@@
         local_ip = get_local_ip()
         http_server = start_http_server(audio_data, port=args.http_port)
@@
         logger.info("Waiting %d seconds for playback to finish...", args.wait)
         await asyncio.sleep(args.wait)
-
-        http_server.shutdown()
     finally:
+        if http_server is not None:
+            http_server.shutdown()
+            http_server.server_close()
         await client.disconnect()
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@extras/havpe-relay/test_audio_output.py` around lines 247 - 261, The
temporary HTTP server started by start_http_server is not guaranteed to be shut
down if an exception occurs after creation; ensure http_server is always torn
down by calling http_server.shutdown() in the outer finally block (or a
dedicated finally that runs before await client.disconnect()), guarding with a
presence check (e.g., if http_server is not None) so shutdown is safe; update
the code around start_http_server, the variable http_server, and the finally
section to guarantee shutdown even on errors (alternatively wrap
start_http_server in a context manager that ensures cleanup).
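The context-manager alternative mentioned at the end of the prompt could look roughly like this (a sketch using the stdlib `ThreadingHTTPServer`; `temp_http_server` is a hypothetical helper, not code from this PR):

```python
import contextlib
import threading
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

@contextlib.contextmanager
def temp_http_server(handler=BaseHTTPRequestHandler, port: int = 0):
    """Serve on a background thread and always tear down on exit."""
    server = ThreadingHTTPServer(("127.0.0.1", port), handler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    try:
        yield server
    finally:
        server.shutdown()      # stop the serve_forever loop
        server.server_close()  # release the listening socket

with temp_http_server() as srv:
    bound_port = srv.server_address[1]
    print(bound_port > 0)  # ephemeral port was assigned
```

Because the `finally` clause of the context manager runs even when playback raises, the caller no longer needs to remember the shutdown path.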
- Updated `audio_jobs.py` to dynamically read audio format settings (sample rate, sample width, channels) from Redis, improving flexibility in audio processing.
- Added error handling for Redis audio format retrieval to ensure defaults are used if the read fails, enhancing robustness.
- Modified `main.py` to allow sample rate, width, and channels to be set via environment variables, improving configurability.
- Updated firmware YAML files to change the sample rate from 16000 to 48000, aligning with new audio processing requirements.
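The Redis read-with-fallback described above might be sketched like this (the key name, defaults, and JSON encoding are assumptions, not the actual `audio_jobs.py` code):

```python
import json
import logging

DEFAULTS = {"sample_rate": 48000, "sample_width": 2, "channels": 1}

def get_audio_format(redis_client, key: str = "audio:format") -> dict:
    """Read audio format settings from Redis, falling back to DEFAULTS on any failure."""
    try:
        raw = redis_client.get(key)
        if raw is None:
            return dict(DEFAULTS)
        return {**DEFAULTS, **json.loads(raw)}  # stored values override defaults
    except Exception:
        logging.exception("Failed to read audio format from Redis; using defaults")
        return dict(DEFAULTS)

class StubRedis:
    """Stand-in for a redis client; returns a partial format override."""
    def get(self, key):
        return json.dumps({"sample_rate": 16000}).encode()

print(get_audio_format(StubRedis()))
```

Merging over a defaults dict means a partial record in Redis still yields a complete format, and any read or decode failure degrades to the defaults rather than crashing the job.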
| Metric | Count |
|---|---|
| ✅ Passed | 109 |
| ❌ Failed | 3 |
| 📊 Total | 112 |
📊 View Reports
GitHub Pages (Live Reports):
Download Artifacts:
- robot-test-reports-html-no-api - HTML reports
- robot-test-results-xml-no-api - XML output
- Introduced `init.py` for interactive configuration of the ESP32 Voice-PE TCP-to-WebSocket relay, allowing users to set backend URLs and authentication credentials.
- Added `flash.sh` for flashing ESPHome firmware to the ESP32, including error handling for missing secrets.
- Created `.gitignore` files for both the main project and firmware directories to exclude build artifacts and sensitive information.
- Updated `docker-compose.yml` to include environment file support and improved command structure for backend URL configuration.
- Enhanced `README.md` with setup instructions and architecture overview for better user guidance.
- Updated `pyproject.toml` to reflect new dependencies and versioning for the project.

This commit lays the groundwork for the HAVPE Relay project, ensuring a smooth setup and configuration process for users.