ImageAnnotator_ListImageAnalyses

Path Parameters
  • user string required

    The user id.

  • album string required

    The album id.

  • image string required

    The image id.

Query Parameters
  • pageSize int32

    The maximum number of results to return. If unspecified, the server will pick an appropriate default.

  • pageToken string

    The nextPageToken value returned from a previous call, used to request the next page of results (see the pagination sketch below).
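
For orientation, here is a minimal pagination sketch in TypeScript. The base URL, route layout, and bearer-token authentication are assumptions (this page lists only the parameters, not the full route), so substitute the values documented for your deployment.

```typescript
// Minimal ListImageAnalyses pagination sketch.
// BASE_URL, the route layout, and the Authorization header are assumptions.
const BASE_URL = "https://api.example.com"; // hypothetical base URL

interface ListImageAnalysesResponse {
  analyses?: unknown[];    // see the response schema below
  nextPageToken?: string;  // absent or empty when there are no more pages
}

async function listAllImageAnalyses(
  user: string,
  album: string,
  image: string,
  accessToken: string,
): Promise<unknown[]> {
  const all: unknown[] = [];
  let pageToken: string | undefined;

  do {
    // Assumed route shape built from the path parameters above.
    const url = new URL(
      `${BASE_URL}/users/${user}/albums/${album}/images/${image}/analyses`,
    );
    url.searchParams.set("pageSize", "50");
    if (pageToken) url.searchParams.set("pageToken", pageToken);

    const response = await fetch(url, {
      headers: { Authorization: `Bearer ${accessToken}` }, // assumed auth scheme
    });
    if (!response.ok) {
      throw new Error(`ListImageAnalyses failed: ${response.status}`);
    }

    const body = (await response.json()) as ListImageAnalysesResponse;
    all.push(...(body.analyses ?? []));
    pageToken = body.nextPageToken;
  } while (pageToken);

  return all;
}
```

The loop keeps passing the returned nextPageToken back as pageToken and stops when the token comes back empty, which is the conventional end-of-results signal for this pagination style.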

Responses

OK

Schema
  • analyses object[]

    The list of image analyses.

    • name string

      The name of the image analysis.

    • properties object
      • dominantColors object[]

        Set of dominant colors and their corresponding scores.

        • color object

          Represents a color in the RGBA color space. This representation is designed for simplicity of conversion to and from color representations in various languages, rather than for compactness. For example, the fields of this representation can be trivially provided to the constructor of java.awt.Color in Java; they can also be trivially provided to UIColor's +colorWithRed:green:blue:alpha method in iOS; and, with just a little work, they can be easily formatted into a CSS rgba() string in JavaScript.

          This reference page doesn't carry information about the absolute color space that should be used to interpret the RGB value (e.g. sRGB, Adobe RGB, DCI-P3, BT.2020, etc.). By default, applications should assume the sRGB color space.

          When color equality needs to be decided, implementations, unless documented otherwise, treat two colors as equal if all their red, green, blue, and alpha values each differ by at most 1e-5.

          Example (Java):

          ```java
          import com.google.type.Color;

          // ...
          public static java.awt.Color fromProto(Color protocolor) {
            float alpha = protocolor.hasAlpha()
                ? protocolor.getAlpha().getValue()
                : 1.0f;
            return new java.awt.Color(
                protocolor.getRed(),
                protocolor.getGreen(),
                protocolor.getBlue(),
                alpha);
          }

          public static Color toProto(java.awt.Color color) {
            float red = (float) color.getRed();
            float green = (float) color.getGreen();
            float blue = (float) color.getBlue();
            float denominator = 255.0f;
            Color.Builder resultBuilder = Color.newBuilder()
                .setRed(red / denominator)
                .setGreen(green / denominator)
                .setBlue(blue / denominator);
            int alpha = color.getAlpha();
            if (alpha != 255) {
              resultBuilder.setAlpha(
                  FloatValue.newBuilder()
                      .setValue(((float) alpha) / denominator)
                      .build());
            }
            return resultBuilder.build();
          }
          // ...
          ```

          Example (iOS / Obj-C):

          ```objectivec
          // ...
          static UIColor* fromProto(Color* protocolor) {
            float red = [protocolor red];
            float green = [protocolor green];
            float blue = [protocolor blue];
            FloatValue* alpha_wrapper = [protocolor alpha];
            float alpha = 1.0;
            if (alpha_wrapper != nil) {
              alpha = [alpha_wrapper value];
            }
            return [UIColor colorWithRed:red green:green blue:blue alpha:alpha];
          }

          static Color* toProto(UIColor* color) {
            CGFloat red, green, blue, alpha;
            if (![color getRed:&red green:&green blue:&blue alpha:&alpha]) {
              return nil;
            }
            Color* result = [[Color alloc] init];
            [result setRed:red];
            [result setGreen:green];
            [result setBlue:blue];
            if (alpha <= 0.9999) {
              [result setAlpha:floatWrapperWithValue(alpha)];
            }
            [result autorelease];
            return result;
          }
          // ...
          ```

          Example (JavaScript):

          ```javascript
          // ...
          var protoToCssColor = function(rgb_color) {
            var redFrac = rgb_color.red || 0.0;
            var greenFrac = rgb_color.green || 0.0;
            var blueFrac = rgb_color.blue || 0.0;
            var red = Math.floor(redFrac * 255);
            var green = Math.floor(greenFrac * 255);
            var blue = Math.floor(blueFrac * 255);

            if (!('alpha' in rgb_color)) {
              return rgbToCssColor(red, green, blue);
            }

            var alphaFrac = rgb_color.alpha.value || 0.0;
            var rgbParams = [red, green, blue].join(',');
            return ['rgba(', rgbParams, ',', alphaFrac, ')'].join('');
          };

          var rgbToCssColor = function(red, green, blue) {
            var rgbNumber = new Number((red << 16) | (green << 8) | blue);
            var hexString = rgbNumber.toString(16);
            var missingZeros = 6 - hexString.length;
            var resultBuilder = ['#'];
            for (var i = 0; i < missingZeros; i++) {
              resultBuilder.push('0');
            }
            resultBuilder.push(hexString);
            return resultBuilder.join('');
          };
          // ...
          ```

          • red float

            The amount of red in the color as a value in the interval [0, 1].

          • green float

            The amount of green in the color as a value in the interval [0, 1].

          • blue float

            The amount of blue in the color as a value in the interval [0, 1].

          • alpha object

            Wrapper message for float. The JSON representation for FloatValue is a JSON number.

            • value float

              The float value.

        • score float

          Image-specific score for this color. Value in range [0, 1].

        • pixelFraction float

          The fraction of pixels the color occupies in the image. Value in range [0, 1].

      • fingerprints object[]

        The fingerprints of the image.

        • content bytes

          The fingerprint of the image in binary representation.

        • algorithm string

          The algorithm used to generate the fingerprint.

    • annotations object
      • textAnnotations object[]

        The texts detected in the image.

        • property object

          Additional information detected on the structural component.

          • languages object[]

            A list of detected languages together with confidence.

            • languageCode string

              The BCP-47 language code, such as "en-US" or "sr-Latn".

            • confidence float

              Confidence of the detected language. Range [0, 1].

        • boundingBox object

          A bounding polygon for the detected image annotation.

          • vertices object[]

            The bounding polygon vertices.

            • x int32

              X coordinate.

            • y int32

              Y coordinate.

        • text string

          UTF-8 text detected by the OCR.

        • confidence float

          Confidence of the OCR result for the detected text. Range [0, 1].

      • labelAnnotations object[]

        The labels detected in the image.

        • name string

          The label resource name.

        • score float

          Overall score of the result. Range [0, 1].

        • topicality float

          The relevance of the label to the image. Range [0, 1].

      • entityAnnotations object[]

        The entities detected in the image.

        • name string

          The entity resource name.

        • score float

          Overall score of the result. Range [0, 1].

        • boundingBox object

          A bounding polygon for the detected image annotation.

          • vertices object[]

            The bounding polygon vertices.

            • x int32

              X coordinate.

            • y int32

              Y coordinate.

      • knowledgeGraphAnnotations object[]

        The Animeshon Graph Knowledge-Base resources detected in the image.

        • resource string

          The Animeshon Graph Knowledge-Base resource name.

        • score float

          Overall score of the result. Range [0, 1].

        • boundingBox object

          A bounding polygon for the detected image annotation.

          • vertices object[]

            The bounding polygon vertices.

            • x int32

              X coordinate.

            • y int32

              Y coordinate.

      • webSearchAnnotations object[]

        The WebSearch resources (pages and images) detected in the image.

        • url string

          The result image URL.

        • score float

          Overall relevancy score for the image.

      • safeSearchAnnotation object
        • adult enum

          Possible values: [UNKNOWN, VERY_UNLIKELY, UNLIKELY, POSSIBLE, LIKELY, VERY_LIKELY]

          Represents the adult content likelihood for the image. Adult content may contain elements such as nudity, pornographic images or cartoons, or sexual activities.

        • racy enum

          Possible values: [UNKNOWN, VERY_UNLIKELY, UNLIKELY, POSSIBLE, LIKELY, VERY_LIKELY]

          Likelihood that the request image contains racy content. Racy content may include (but is not limited to) skimpy or sheer clothing, strategically covered nudity, lewd or provocative poses, or close-ups of sensitive body areas.

        • violence enum

          Possible values: [UNKNOWN, VERY_UNLIKELY, UNLIKELY, POSSIBLE, LIKELY, VERY_LIKELY]

          Likelihood that this image contains violent content.

        • medical enum

          Possible values: [UNKNOWN, VERY_UNLIKELY, UNLIKELY, POSSIBLE, LIKELY, VERY_LIKELY]

          Likelihood that this is a medical image.

        • juvenile enum

          Possible values: [UNKNOWN, VERY_UNLIKELY, UNLIKELY, POSSIBLE, LIKELY, VERY_LIKELY]

          Likelihood that the request image contains one or more individuals depicted as juveniles. Juvenile content may contain elements such as school-aged children, preschoolers, toddlers, infants, and newborns. The target age range considered juvenile is approximately 0 to 14-16 years old.

    • createTime date-time

      The time at which this revision of the analysis was created.

  • nextPageToken string

    A token to retrieve the next page of results. Pass it as the pageToken query parameter in a subsequent call.
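
To make the nested response schema easier to consume from typed client code, here is a hedged TypeScript sketch of the payload shape, plus a small example policy check built on the safeSearchAnnotation likelihood values. The type names, field optionality, and the likelihood ordering used for the threshold are assumptions layered on top of the field list above.

```typescript
// TypeScript model of the ListImageAnalyses response, transcribed from the
// schema above (some nested detail, e.g. detected text languages, is omitted
// for brevity).
type Likelihood =
  | "UNKNOWN"
  | "VERY_UNLIKELY"
  | "UNLIKELY"
  | "POSSIBLE"
  | "LIKELY"
  | "VERY_LIKELY";

interface Vertex { x?: number; y?: number; }
interface BoundingBox { vertices?: Vertex[]; }

interface SafeSearchAnnotation {
  adult?: Likelihood;
  racy?: Likelihood;
  violence?: Likelihood;
  medical?: Likelihood;
  juvenile?: Likelihood;
}

interface ImageAnalysis {
  name?: string;
  properties?: {
    dominantColors?: {
      color?: { red?: number; green?: number; blue?: number; alpha?: { value?: number } };
      score?: number;
      pixelFraction?: number;
    }[];
    fingerprints?: { content?: string; algorithm?: string }[]; // bytes assumed base64-encoded in JSON
  };
  annotations?: {
    textAnnotations?: { text?: string; confidence?: number; boundingBox?: BoundingBox }[];
    labelAnnotations?: { name?: string; score?: number; topicality?: number }[];
    entityAnnotations?: { name?: string; score?: number; boundingBox?: BoundingBox }[];
    knowledgeGraphAnnotations?: { resource?: string; score?: number; boundingBox?: BoundingBox }[];
    webSearchAnnotations?: { url?: string; score?: number }[];
    safeSearchAnnotation?: SafeSearchAnnotation;
  };
  createTime?: string; // RFC 3339 timestamp (assumption)
}

interface ListImageAnalysesResponse {
  analyses?: ImageAnalysis[];
  nextPageToken?: string;
}

// Example policy check: treat LIKELY or stronger as flagged. The enum order is
// taken from the value list above; the chosen categories and threshold are
// illustrative only.
const LIKELIHOOD_ORDER: Likelihood[] = [
  "UNKNOWN", "VERY_UNLIKELY", "UNLIKELY", "POSSIBLE", "LIKELY", "VERY_LIKELY",
];

function atLeast(value: Likelihood | undefined, threshold: Likelihood): boolean {
  if (!value) return false;
  return LIKELIHOOD_ORDER.indexOf(value) >= LIKELIHOOD_ORDER.indexOf(threshold);
}

function isFlagged(analysis: ImageAnalysis): boolean {
  const safe = analysis.annotations?.safeSearchAnnotation;
  return (
    atLeast(safe?.adult, "LIKELY") ||
    atLeast(safe?.racy, "LIKELY") ||
    atLeast(safe?.violence, "LIKELY")
  );
}
```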