
Conversation

@ja7ad ja7ad (Collaborator) commented Aug 16, 2025

What does this PR do?

  • Fixes the chat integration test, modeling it on the JS SDK chat-workspaces tests:

https://github.com/meilisearch/meilisearch-js/blob/main/tests/chat-workspaces.test.ts

PR checklist

Please check if your PR fulfills the following requirements:

  • Does this PR fix an existing issue, or have you listed the changes applied in the PR description (and why they are needed)?
  • Have you read the contributing guidelines?
  • Have you made sure that the title is accurate and descriptive of the changes?

Thank you so much for contributing to Meilisearch!

Summary by CodeRabbit

  • Tests
    • Made the live chat streaming test environment-agnostic by removing the dependency on external credentials and simplifying assertions to verify only that the stream is established and can be iterated.
    • Added extensive decoder/stream unit tests covering error paths, fallback behavior, lifecycle semantics, and edge cases to improve reliability.
    • No user-facing changes.


coderabbitai bot commented Aug 16, 2025

Walkthrough

Testing updates: the integration chat streaming test no longer reads an environment-based API key and no longer asserts streamed content; new unit tests exercise stream/decoder behavior across many edge cases and error paths.

Changes

Cohort / File(s) — Summary of Changes

  • Integration test updates — integration/chat_test.go
    Removed the OPENAI_API_KEY usage and skip logic; UpdateChatWorkspace now sets OrgId, ApiVersion, DeploymentId, and Source; the streaming loop iterates without building or asserting streamed content; removed the unused os import. A hedged sketch of the resulting iteration pattern follows this list.
  • Stream/decoder unit tests — stream_test.go
    Added comprehensive tests for decoder and stream behaviors: registration fallback on an invalid content type, NewDecoder(nil) behavior, default SSE fallback for unregistered types, SSE parsing edge cases (leading empty lines, event field without a colon), Next/Close semantics, stream creation with a nil decoder, and propagation of decoder errors via a fakeDecoder. Also added the errors import and reused existing test helpers.
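For orientation, here is a minimal sketch of the iteration pattern the reworked integration test follows. The Next/Err/Current/Close methods come from the review diff further down; the package name, the chatStream interface, the drainStream helper, and the chunk counter are illustrative additions (the counter mirrors the reviewer's nitpick below), not code from the PR.

package integration

import (
	"testing"

	"github.com/stretchr/testify/require"
)

// chatStream captures only the methods the updated test relies on
// (Next, Err, Current, Close); the concrete chunk type is abstracted away.
type chatStream[T any] interface {
	Next() bool
	Err() error
	Current() T
	Close() error
}

// drainStream iterates a stream without accumulating or asserting its
// content, mirroring the reworked loop, and counts chunks so callers can
// assert that at least one arrived.
func drainStream[T any](t *testing.T, stream chatStream[T]) int {
	t.Helper()
	defer func() { require.NoError(t, stream.Close()) }()

	got := 0
	for stream.Next() {
		require.NoError(t, stream.Err())
		require.NotNil(t, stream.Current())
		got++
	}
	// Check the terminal error once iteration stops.
	require.NoError(t, stream.Err())
	return got
}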

Sequence Diagram(s)

sequenceDiagram
  participant T as Test_ChatCompletionStream
  participant C as Client
  participant S as Chat Service

  T->>C: UpdateChatWorkspace(OrgId, ApiVersion, DeploymentId, Source)
  C->>S: Configure workspace
  S-->>C: Ack
  T->>C: Start chat completion stream
  C->>S: Request stream
  loop Receive chunks
    S-->>C: StreamChunk
    C-->>T: Yield chunk (no accumulation/assertion)
  end
sequenceDiagram
  participant UT as Unit Test
  participant D as Decoder
  participant S as Stream wrapper

  UT->>D: RegisterDecoder / NewDecoder(...)
  UT->>D: Feed response bytes
  D-->>UT: Next()/Event()/Err() outcomes (including EOF/errors)
  UT->>S: NewStream(decoder)
  S->>D: Iteration (Next/Close)
  alt decoder error after iteration
    D-->>S: error
    S-->>UT: Propagate error via Err()
  end

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~20 minutes

Possibly related PRs

Suggested reviewers

  • curquiza
  • ElyarSadig

Poem

I twitch my whiskers at tests that run,
No keys, just wind and packets spun.
I nibble streams, ignore their song—
Count each hop, but not what's gone.
A rabbit cheers the simpler race, 🐇✨

@ja7ad ja7ad requested a review from ElyarSadig August 16, 2025 10:33

codecov bot commented Aug 16, 2025

Codecov Report

✅ All modified and coverable lines are covered by tests.
✅ Project coverage is 85.45%. Comparing base (9402597) to head (027440f).
⚠️ Report is 2 commits behind head on main.

Additional details and impacted files
@@            Coverage Diff             @@
##             main     #690      +/-   ##
==========================================
+ Coverage   83.80%   85.45%   +1.65%     
==========================================
  Files          20       20              
  Lines        3514     3514              
==========================================
+ Hits         2945     3003      +58     
+ Misses        420      366      -54     
+ Partials      149      145       -4     

☔ View full report in Codecov by Sentry.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🧹 Nitpick comments (1)
integration/chat_test.go (1)

316-321: Make the streaming test assertive: ensure at least one chunk and check final error.

As written, the test can pass even if the stream yields zero chunks (e.g., due to misconfiguration), since there is no post-loop assertion. Count chunks and assert the final stream.Err() is nil.

- for stream.Next() {
-     require.NoError(t, stream.Err())
-     chunk := stream.Current()
-     require.NotNil(t, chunk)
- }
+ gotChunks := 0
+ for stream.Next() {
+     require.NoError(t, stream.Err())
+     chunk := stream.Current()
+     require.NotNil(t, chunk)
+     gotChunks++
+ }
+ require.NoError(t, stream.Err())
+ require.Greater(t, gotChunks, 0, "expected at least one streamed chunk")

This will prevent false positives where the stream is established but produces no data.

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

💡 Knowledge Base configuration:

  • MCP integration is disabled by default for public repositories
  • Jira integration is disabled by default for public repositories
  • Linear integration is disabled by default for public repositories

You can enable these sources in your CodeRabbit configuration.

📥 Commits

Reviewing files that changed from the base of the PR and between 9402597 and 675dc9b.

📒 Files selected for processing (1)
  • integration/chat_test.go (1 hunks)
🧰 Additional context used
🧬 Code Graph Analysis (1)
integration/chat_test.go (1)
enum.go (1)
  • OpenaiChatSource (129-129)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (2)
  • GitHub Check: integration-tests (go current version)
  • GitHub Check: integration-tests (go latest version)


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

🧹 Nitpick comments (3)
stream_test.go (3)

225-239: Clean up the global decoder registry after the test

RegisterDecoder mutates the package-level decoderTypes map. Add a t.Cleanup to delete the test key to avoid cross-test coupling if future tests use the same content type or if tests run in parallel elsewhere.

Apply this diff:

 func TestRegisterDecoder_InvalidContentType(t *testing.T) {
   // Force the fallback branch in RegisterDecoder when mime.ParseMediaType fails
   key := "%%%INVALID%%%"
   called := false
   RegisterDecoder(key, func(rc io.ReadCloser) Decoder {
     called = true
     return newSSEDecoder(rc)
   })
+  t.Cleanup(func() {
+    delete(decoderTypes, strings.ToLower(key))
+  })
   // The decoderTypes map should now contain the raw lower-cased key
   _, ok := decoderTypes[strings.ToLower(key)]
   assert.True(t, ok, "expected decoderTypes to contain fallback key")
   // Just sanity: constructing a response with this invalid content-type won't match
   // (NewDecoder parsing fails and defaults to SSE) so we only validate registration here.
   assert.False(t, called, "decoder factory should not be invoked in this test")
 }

250-262: Make fallback-to-SSE test resilient to future registrations

Using "application/json" assumes no decoder is registered for it. If a JSON decoder is ever added, this test will start failing. Use a clearly unregistered content type to ensure stability.

Apply this diff:

- // Use an unregistered (but valid) content type so NewDecoder falls back to SSE.
- res := httpRespWithBody(t, data, "application/json")
+ // Use an obviously unregistered (but valid) content type so NewDecoder falls back to SSE.
+ res := httpRespWithBody(t, data, "application/x-meilisearch-test")

295-315: Add compile-time interface assertion for fakeDecoder

This guards against future interface changes to Decoder by catching mismatches at compile time.

Apply this diff:

 type fakeDecoder struct {
   events []Event
   idx    int
   err    error
   cur    Event
   closed bool
 }
 
+// Ensure fakeDecoder implements Decoder.
+var _ Decoder = (*fakeDecoder)(nil)
+
 func (f *fakeDecoder) Next() bool {
   if f.idx < len(f.events) {
     f.cur = f.events[f.idx]
     f.idx++
     return true
   }
   return false
 }
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

💡 Knowledge Base configuration:

  • MCP integration is disabled by default for public repositories
  • Jira integration is disabled by default for public repositories
  • Linear integration is disabled by default for public repositories

You can enable these sources in your CodeRabbit configuration.

📥 Commits

Reviewing files that changed from the base of the PR and between 675dc9b and 027440f.

📒 Files selected for processing (1)
  • stream_test.go (2 hunks)
🧰 Additional context used
🧬 Code Graph Analysis (1)
stream_test.go (1)
stream.go (5)
  • RegisterDecoder (17-24)
  • Decoder (26-31)
  • NewDecoder (38-53)
  • Event (33-36)
  • NewStream (197-205)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (2)
  • GitHub Check: integration-tests (go current version)
  • GitHub Check: integration-tests (go latest version)
🔇 Additional comments (7)
stream_test.go (7)

4-4: Import of errors is appropriate for sentinel error tests

Used in TestStream_DecoderErrorPropagatesAfterIteration; no issues.


264-273: Double close and post-close Next() semantics verified

Solid coverage of Close idempotency and Next() behavior after Close.


275-286: Correctly asserts behavior for 'event' line without colon

Matches SSE parsing rules: field name with empty value yields empty event type and no data.


288-293: Nil decoder path is well covered

Validates NewStream’s defensive behavior and Close() no-op with nil decoder.


316-323: Error propagation from underlying decoder is correctly asserted

Good use of sentinel error and ErrorIs for robustness.


325-337: Leading blank lines are correctly ignored

This test nicely captures a subtle but common SSE edge case.


241-248: errorReader helper is already defined

Verified in encoding_test.go (lines 27–31) that the errorReader type and its Read method exist, so TestDecoder_ReadErrorPropagates will compile successfully.
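For readers less familiar with the SSE wire format, the two parsing edge cases called out above boil down to inputs like the following. This is a generic illustration based on the SSE specification, not code lifted from stream_test.go:

// Raw SSE payloads corresponding to the edge cases discussed above.
const (
	// A field line without a colon is treated as a field name with an empty
	// value, so a bare "event" line yields an empty event type and no data.
	eventWithoutColon = "event\n\n"

	// Blank lines before the first field are dispatch boundaries with nothing
	// buffered to dispatch, so a compliant decoder simply skips them.
	leadingEmptyLines = "\n\n\ndata: hello\n\n"
)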

@ja7ad ja7ad added this pull request to the merge queue Aug 17, 2025
Merged via the queue into main with commit b6f1172 Aug 17, 2025
6 checks passed
@ja7ad ja7ad deleted the fix/integration-test-of-chat branch August 17, 2025 06:07
@ja7ad ja7ad added the enhancement New feature or request label Aug 17, 2025