Technical Best Practices


Don't forget to check for open issues and upcoming enhancements on Marqo's GitHub repository for the latest updates on features and capabilities.

Resolving Connection Issues with Docker Containers

When running the Marqo service on an M1 Mac, users may encounter issues where the Docker container cannot connect to the host to add documents, particularly when attempting to access images, and fails with a message like "cannot resolve the host."

If you're facing connectivity issues between your Docker container and the host, especially for image indexing with marqo:

  • Replace host.docker.internal with Localhost IP: Instead of using http://host.docker.internal:8222, use the host's localhost IP address so the container can access images served from the host. This often resolves the issue where the container cannot connect to the host.
  • Inspect Errors for Insights: If there are errors during document addition, inspect the error messages by indexing small batches and printing out the response. This will help you identify and troubleshoot specific issues with document indexing.
  • Consider Model Size: On M1 Macs, which lack GPU support for CUDA, opt for smaller models (like open_clip/ViT-B-32/laion400m_e32) to avoid performance issues during the indexing of images.
  • Update Docker Run Command: Modify the docker run command to include your gateway IP address, which you can find with docker network inspect bridge. This can improve the container's ability to communicate with the host.
  • Monitor Initial Batches: Be aware that the first batch may be slow because the model is downloaded on first use. Pre-download the model if possible to speed up the process.
  • Use Correct URLs: Verify the URLs you're using to access the images; they must be reachable from inside the container. An alternative such as http://localhost:8222/image.jpg might work if the direct IP doesn't.
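
The small-batch error inspection described above can be sketched as follows. This is a minimal sketch, not a definitive implementation: it assumes a running Marqo instance reachable through a `marqo.Client` (called `mq` here), and the `tensor_fields` value is a placeholder for your own schema.

```python
# Index documents in small batches and print per-document errors,
# so a single failing image URL is easy to spot.

def batched(items, batch_size=8):
    """Yield successive fixed-size slices of a list."""
    for start in range(0, len(items), batch_size):
        yield items[start:start + batch_size]

def index_in_batches(mq, index_name, docs, batch_size=8):
    """Add documents batch by batch, printing errors from each response.

    `mq` is assumed to be a `marqo.Client` instance pointed at your
    container, and the index is assumed to already exist; adjust
    `tensor_fields` to match your own schema.
    """
    for batch in batched(docs, batch_size):
        response = mq.index(index_name).add_documents(
            batch, tensor_fields=["image_url"]
        )
        for item in response.get("items", []):
            if item.get("error"):
                print(f"Document {item.get('_id')}: {item['error']}")
```

Printing the per-item errors from each small batch usually points directly at the unreachable URL or malformed document.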

EC2 Instance Storage Management

Users may encounter a 507 Insufficient Storage error on a Linux EC2 instance even though the instance appears to have ample space.


  • Verify the actual disk usage via the df -h command to understand how much space is truly available and what is being utilized.

  • Be aware that certain applications, like Docker, can consume significant disk space. Old Docker volumes, in particular, can accumulate and take up space.

  • Clean up or remove unnecessary data and volumes to free up space, especially when the root filesystem is reaching its capacity.
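
As a quick cross-check of what `df -h` reports, the standard library's `shutil.disk_usage` reads the same filesystem totals from Python; the helper name here is our own.

```python
import shutil

def disk_report(path="/"):
    """Report total, used, and free space (in GiB) for the filesystem
    containing `path`, mirroring what `df -h` shows for that mount."""
    usage = shutil.disk_usage(path)
    gib = 1024 ** 3
    return {
        "total_gib": round(usage.total / gib, 1),
        "used_gib": round(usage.used / gib, 1),
        "free_gib": round(usage.free / gib, 1),
    }

print(disk_report("/"))
```

If the free space here looks healthy but Marqo still returns 507, stale Docker volumes on another mount are a likely culprit.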

Image Search Issues in Marqo

Users may have difficulty performing image searches using local paths or URLs, resulting in MarqoWebError messages.


  • Ensure that Marqo has access to the images you're trying to use. If using a local path, Marqo may not recognize it due to access restrictions.

  • Host the image on an HTTP server and use the URL for the search query, as Marqo currently only supports image searching via URL.

  • For reverse image search, note that Marqo does not currently support image search via local paths in Docker. This functionality is on the roadmap for future releases, as tracked by an open GitHub issue. In the meantime, continue to use image URLs for search queries.
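
Since Marqo only accepts URLs here, a small pre-flight check can catch local paths before they ever reach the search call. This is plain Python with no Marqo dependency, and the helper name is our own.

```python
from urllib.parse import urlparse

def is_searchable_image_ref(ref):
    """Return True if `ref` looks like an HTTP(S) URL Marqo can fetch,
    False for local filesystem paths, which the Docker container
    cannot resolve."""
    scheme = urlparse(ref).scheme
    return scheme in ("http", "https")

# A local path is rejected, a hosted URL is accepted:
assert not is_searchable_image_ref("/home/user/images/shirt.jpg")
assert is_searchable_image_ref("http://localhost:8222/shirt.jpg")
```

References that pass this check can then be sent to your index's search call as usual.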

Error When Loading a Custom Model into Marqo

Loading a custom model from Hugging Face into Marqo can sometimes result in errors if the model isn't supported by the existing frameworks (open_clip or clip).

If you encounter an error when loading a custom model, like ValueError: You have to specify pixel_values, it may mean that Marqo can't load the model as-is. A workaround could be to convert the model's .bin file to a .pt (PyTorch) file and attempt to load it again using open_clip.

Remember to update your settings with the new file path and ensure treatUrlsAndPointersAsImages is set to True for image handling. If the model still isn't supported, stay tuned for future updates where more models may be integrated based on user requests.
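
The conversion workaround can be as simple as re-saving the checkpoint's state dict. This sketch assumes the `.bin` file is a plain PyTorch state dict (as Hugging Face `pytorch_model.bin` files typically are); the file paths and function name are placeholders.

```python
import torch

def convert_bin_to_pt(bin_path, pt_path):
    """Load a Hugging Face `.bin` checkpoint and re-save its state dict
    as a `.pt` file that open_clip-style loaders can consume."""
    state_dict = torch.load(bin_path, map_location="cpu")
    torch.save(state_dict, pt_path)
    return pt_path
```

After converting, point the model properties in your index settings at the new `.pt` file path.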

Structuring a Product Data Schema for Vector Search

When setting up a data schema for products, you may wonder which structure is more effective for vector search: a single key-value (k-v) pair holding multiple attributes, or a separate k-v pair for each attribute.


  • A single k-v pair with a string of comma-separated values (e.g., "Tags: blue, patterned, cotton, elegant") is more efficient for vector search.
  • Benefits:
    • Contextual Relevance: A single string provides more context, leading to better recall performance in search results.
    • Resource Efficiency: Only one tensor field and one vector are generated, which conserves RAM and can speed up search times.
  • Technical Note: Lists of strings are for non-tensor fields used only in filtering, not for vector search.
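
Concretely, the two document shapes look like this; the field names are illustrative. Only the single-string form is vectorised, while the list form can serve as a filter-only field.

```python
# Preferred: one tensor field holding a comma-separated string gives
# the model full context and produces a single vector.
doc_single_field = {
    "_id": "product-1",
    "Title": "Summer shirt",
    "Tags": "blue, patterned, cotton, elegant",  # tensor field
}

# Lists of strings are only usable as non-tensor fields for filtering
# (e.g. a filter on TagList:blue); they are not vectorised.
doc_with_filter_list = {
    "_id": "product-2",
    "Title": "Summer shirt",
    "TagList": ["blue", "patterned", "cotton", "elegant"],
}
```

When adding these documents, you would list only "Title" and "Tags" as tensor fields, leaving "TagList" out so it stays filter-only.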

Managing Document Deletion in Marqo

You've split a large document into smaller sub-documents in Marqo, each with an incremental ID based on the title, and now you need to delete all sub-documents associated with a specific title.

While Marqo allows for batch deletion by ID, this can be cumbersome when dealing with multiple sub-documents. Here are some tips to streamline the process:

  • Unique ID Generation: Instead of incremental IDs, consider creating a unique hash for each title. This makes tracking and deletion more straightforward, as every sub-document related to a title would share this unique identifier.
  • External Tracking: Keep an external record of document IDs. This offloads the complexity from Marqo and simplifies the deletion process, as you would have a ready list of IDs to remove.
  • Over-Deletion Method: If you know the maximum number of sub-documents, you can delete a range of IDs (e.g., title_0 to title_20). Marqo skips non-existent IDs, so there is no harm in overshooting the actual count. However, this method is less efficient for large numbers of documents or a very large index.
  • Post-Processing: Rely on Marqo's internal chunking. Use search highlights to identify the relevant section of the text and then manually extract the required context, such as adding sentences before and after the highlighted section. This approach is beneficial when you're dealing with larger chunks of text and need only a single match per document.
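
The unique-hash suggestion can be sketched like this (the helper names are our own): derive a stable prefix from the title, stamp every sub-document ID with it, and deletion later becomes a single batch call over a list you can regenerate or keep in your external record.

```python
import hashlib

def title_hash(title):
    """Stable short hash shared by all sub-documents of one title."""
    return hashlib.sha256(title.encode("utf-8")).hexdigest()[:12]

def sub_document_ids(title, n_chunks):
    """IDs for the sub-documents a title was split into."""
    prefix = title_hash(title)
    return [f"{prefix}_{i}" for i in range(n_chunks)]

ids = sub_document_ids("User Guide", 3)
# All three IDs share the title's hash prefix, so removing every
# sub-document is one call, e.g.:
#     mq.index("my-index").delete_documents(ids=ids)
```

Because the hash is deterministic, you can rebuild the ID list from the title alone as long as you know (or overshoot) the chunk count.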

Enhancing Product Categorization Accuracy with Marqo Search Parameters

Inconsistent search results when mapping products to categories.


When using Marqo for product mapping, you might find that increasing the limit parameter improves the search score. This is because Marqo’s limit is linked to the k parameter in vector search, which determines the number of nearest documents considered during the search process. A higher k means a better chance of finding the true closest match. If your initial search misses a more accurate document, increasing the limit may help you retrieve it. To enhance search stability and accuracy, try setting a higher limit, such as 100, and then filter out any unnecessary results afterward. This way, you ensure no potential matches are overlooked, leading to more precise categorization.
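
A sketch of that over-fetch-then-filter pattern is below. The score threshold, field names, and helper name are assumptions for illustration; `hits` stands in for the "hits" list of a Marqo search response, which a real call would produce with a high `limit`.

```python
def best_matches(hits, min_score=0.8, top_n=5):
    """Keep only confident category matches from an over-fetched result set.

    `hits` is assumed to be the score-ordered "hits" list of a Marqo
    search response; search with a high `limit` (e.g. 100), then trim.
    """
    confident = [h for h in hits if h.get("_score", 0.0) >= min_score]
    return confident[:top_n]

# Mocked hits for illustration; a real call would be something like:
#     hits = mq.index("products").search("red running shoes", limit=100)["hits"]
hits = [
    {"_id": "cat-shoes", "_score": 0.91},
    {"_id": "cat-apparel", "_score": 0.64},
]
print(best_matches(hits))
```

Over-fetching costs little, and discarding low-score hits afterward keeps the final category mapping stable.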

Optimizing Marqo Docker Images for Minimal Resource Usage

Users with internet connectivity issues or limited system resources need a lightweight Marqo Docker image that consumes minimal memory.


For those looking to deploy a text-encoding model with Marqo in a resource-constrained environment, we recommend the hf/e5-small-v2 model, which is quite lean at 134MB. If your requirements allow for even more lightweight solutions, consider using the 'random' model, which generates random vectors for text and significantly reduces the size and memory footprint.
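
With the Python client, choosing the lean model comes down to one setting at index creation. This sketch only builds the settings dictionary; the index name is a placeholder, and a live `marqo.Client` would be needed to actually create the index.

```python
# Settings for a lightweight index; swap "hf/e5-small-v2" for "random"
# if placeholder vectors are acceptable (e.g. while testing a pipeline).
light_index_settings = {
    "model": "hf/e5-small-v2",
}

# With a running Marqo instance, creation would look something like:
#     mq.create_index("my-light-index", settings_dict=light_index_settings)
print(light_index_settings["model"])
```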

To build a light Marqo Docker image locally, please refer to the instructions provided in our GitHub repository, under the section 'Option C: Build and run the Marqo as a Docker'.

For existing Marqo Docker container instances that seem memory-intensive (e.g., around 2.5 GB), you can reduce memory usage by excluding unnecessary components. To achieve this, ensure that Marqo starts with its preloaded models set to an empty list ([]). This configuration prevents loading image models or any other models that are not essential for your specific use case, thereby conserving memory. Detailed guidance on this configuration can be found in the 'Configuring Preloaded Models' section of our advanced usage documentation for version 1.4.0.
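
As a configuration sketch, assuming the standard marqoai/marqo image, the default port, and the MARQO_MODELS_TO_PRELOAD environment variable described in the version 1.4.0 configuration docs, the empty-preload startup looks like:

```shell
# Start Marqo without preloading any models; '[]' disables preloading
# so only models you actually use get loaded on demand.
docker run --name marqo -p 8882:8882 \
    -e MARQO_MODELS_TO_PRELOAD='[]' \
    marqoai/marqo:1.4.0
```

Models are then loaded lazily on first use, so the first request for each model will be slower, but the idle memory footprint stays small.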