# Results

Here's an example of the result of an analysis performed by Foyer Insight:

```json
{
  "classifications": [
    {
      "confidence": 0.9996496438980103,
      "name": "kitchen",
      "rank": 1
    },
    {
      "confidence": 0.9999802112579346,
      "name": "indoor",
      "rank": 2
    }
  ],
  "detections": [
    {
      "class": "dishwasher",
      "area": 0.055518,
      "boundingBox": [
        0.312353,
        0.156456,
        0.564683,
        0.356472
      ],
      "confidence": 0.8126423358917236,
      "attributes": [
        {
          "name": "tagpoint",
          "value": [0.045786, 0.49609375]
        },
        {
          "name": "is_stainless",
          "value": true
        }
      ],
      "segmentation": {"size": [512, 512], "counts": "`<`3`<000O10000O100O10000O100000000O10000O100O1000000O100000000O10000O10000O1000000O10000O10000O100O1000000O100000000O100O10000O100000000O100000000O100O10000O10000000000O10000O100O10000O1000000O10000O100O100O100000000O1000000O100O100O1000000O1000000O10000O100O10000O1000000O10000O100O1000000O1000000O10000O100O100O1000000O1000000O10000O10000O1000000O100000000O100O10000O100000000O10000O10000O10000O100000000O10000O10000O100O1000000O1000000O10000O10000O100000000O10000O10000O10000O1000000O1000000O10000O10000O10000O1000000O100O10000O1000000O1000000O1000000O10000O1000000O10000O10000O10000O1000000O1000000O10000000000001O0000001O000000001O00000000000000O10000O1O100O100000000O1O1N2N2O1O1O10000O100O1000000000000O100O10000O10000000000000000001O1O3`IlFn5d9O00001O00000000001O0000001O0000001O000000001O0000001O0000001O000000001O0000000000001O000000001O0000001O00000000001O00000000001O0000001O0000001O0000000000000000001O0000O10000N2M3G9N2O1O100O10000000000000000000000000000000000000000001O0000000000001O000000001O00000000001O000000000000"}
    },
    {
      "class": "floor",
      "area": 0.66851,
      "boundingBox": [
        0,
        3,
        540,
        271
      ],
      "confidence": 0.7137435674667358,
      "attributes": [
        {
          "name": "floor_type",
          "value": "hardwood",
          "confidence": 1
        },
        {
          "name": "tagpoint",
          "value": [0.3966845, 0.7948374]
        }
      ],
      "segmentation": {"size": [512, 512], "counts": "YWY62[`1OdoN0ZXd1"}
    }
    ...
  ]
}
```

Let's break down the sections we're seeing.

### Classifications

Each classification has three fields: `confidence`, `name`, and `rank`.

* **Confidence** is how certain Foyer Insight is of the classification. It is a number between 0 and 1, which can also be read as a percentage. In this case, Insight is almost certain we've provided an image of an indoor kitchen.
* **Name** is the full name of the image's classification.
* **Rank** is a number greater than zero. Classifications are ranked from highest to lowest confidence, with one exception: indoor and outdoor classifications are always ranked last.
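Because classifications are already ordered by rank, with indoor/outdoor always last, a client can read the scene label straight from the rank-1 entry. A minimal Python sketch (the `result` dict is hand-built from the example response above; in practice it would come from `json.loads` on the API response):

```python
# Read the scene label and indoor/outdoor placement from a parsed
# Foyer Insight response. `result` mirrors the example response above.
result = {
    "classifications": [
        {"confidence": 0.9996496438980103, "name": "kitchen", "rank": 1},
        {"confidence": 0.9999802112579346, "name": "indoor", "rank": 2},
    ]
}

# Sort defensively by rank; the rank-1 entry is the scene label and the
# last entry is the indoor/outdoor classification.
by_rank = sorted(result["classifications"], key=lambda c: c["rank"])
scene = by_rank[0]
placement = by_rank[-1]

print(scene["name"], placement["name"])  # kitchen indoor
```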

### Detections

Each detection has six fields: `area`, `attributes`, `boundingBox`, `class`, `confidence`, and `segmentation`. A detection is an individual object or group of similar objects that Insight found in the image.

* **Class** is the full name of the detection's class.
* **Area** is the percentage of pixels that a segmentation covers out of the entire image. For example, a floor segmentation with an area of 0.53 means that Insight has detected the floor covers 53% of the pixels in the image.
* **Attributes** are extra details about the detection, containing information pertinent to the segmentation. For example, all floor detections contain a `floor_type` attribute.

  Attributes always include the `tagpoint` attribute, a percent-based \[x, y] array that denotes a point within the segmentation.
* **BoundingBox** is the smallest box that contains every pixel of a segmentation. It is made up of four numbers, which form two (x, y) coordinate pairs representing the upper-left and bottom-right corners, respectively.
  * The coordinates treat the upper-left corner of the image as the origin.
  * As with all of Insight's size-based outputs, the points are percentage-based. Multiply the x positions by your image's final width and the y positions by your image's final height to get the pixel coordinates of a segmentation's bounding box.
  * For example, the values \[0.3, 0.156, 0.564, 0.356] represent corners at `(0.3 * w, 0.156 * h)` and `(0.564 * w, 0.356 * h)`.
* **Confidence** is how certain Insight is of the class for a detection. As with a classification's `confidence`, it is a number between 0 and 1. In our example, Insight is about 81% certain that its detection of a dishwasher is correct.
* **Segmentation** is the RLE-encoded binary mask of the detected object. It can be decoded with the `pycocotools` mask `decode` function, or any other COCO RLE decoder.
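Putting the percent-based conventions together, here is a small Python sketch that scales a bounding box and a tagpoint to pixel coordinates. The helper names and the 1024×768 image size are illustrative, not part of the API; the `detection` dict is trimmed from the example response above:

```python
# Convert Insight's percent-based outputs to pixel coordinates.

def bbox_to_pixels(bounding_box, width, height):
    """Scale a percent-based [x1, y1, x2, y2] box to pixel corners."""
    x1, y1, x2, y2 = bounding_box
    return (round(x1 * width), round(y1 * height),
            round(x2 * width), round(y2 * height))

def tagpoint_to_pixels(detection, width, height):
    """Find the tagpoint attribute and scale it to a pixel coordinate."""
    for attr in detection["attributes"]:
        if attr["name"] == "tagpoint":
            x, y = attr["value"]
            return (round(x * width), round(y * height))
    return None

# Trimmed from the dishwasher detection in the example response.
detection = {
    "class": "dishwasher",
    "boundingBox": [0.312353, 0.156456, 0.564683, 0.356472],
    "attributes": [{"name": "tagpoint", "value": [0.045786, 0.49609375]}],
}

print(bbox_to_pixels(detection["boundingBox"], 1024, 768))  # (320, 120, 578, 274)
print(tagpoint_to_pixels(detection, 1024, 768))             # (47, 381)
```

For the `segmentation` field, `pycocotools.mask.decode` expects the RLE dict with `counts` given as bytes (e.g. `counts.encode("utf-8")` after parsing the JSON) and returns a binary mask with the dimensions in `size`.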

