ibm_watson.visual_recognition_v4 module

Provide images to the IBM Watson™ Visual Recognition service for analysis. The service detects objects based on a set of images with training data.

class VisualRecognitionV4(version: str, authenticator: ibm_cloud_sdk_core.authenticators.authenticator.Authenticator = None, service_name: str = 'visual_recognition')[source]

Bases: ibm_cloud_sdk_core.base_service.BaseService

The Visual Recognition V4 service.

DEFAULT_SERVICE_URL = 'https://api.us-south.visual-recognition.watson.cloud.ibm.com'
DEFAULT_SERVICE_NAME = 'visual_recognition'
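
A minimal client setup sketch, assuming IAM authentication; the API key, version date, and service URL below are placeholders for the credentials of your own service instance, and the later sketches on this page reuse this visual_recognition client:

  from ibm_cloud_sdk_core.authenticators import IAMAuthenticator
  from ibm_watson import VisualRecognitionV4

  # Placeholder credentials; substitute the API key and URL from your instance.
  authenticator = IAMAuthenticator('YOUR_APIKEY')
  visual_recognition = VisualRecognitionV4(
      version='2019-02-11',  # service version date
      authenticator=authenticator)
  visual_recognition.set_service_url(
      'https://api.us-south.visual-recognition.watson.cloud.ibm.com')
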
analyze(collection_ids: List[str], features: List[str], *, images_file: List[FileWithMetadata] = None, image_url: List[str] = None, threshold: float = None, **kwargs) → ibm_cloud_sdk_core.detailed_response.DetailedResponse[source]

Analyze images.

Analyze images by URL, by file, or both against your own collection. Make sure that training_status.objects.ready is true for the feature before you use a collection to analyze images. Encode the image and .zip file names in UTF-8 if they contain non-ASCII characters. The service assumes UTF-8 encoding if it encounters non-ASCII characters.

Parameters
  • collection_ids (List[str]) – The IDs of the collections to analyze.

  • features (List[str]) – The features to analyze.

  • images_file (List[FileWithMetadata]) – (optional) An array of image files (.jpg or .png) or .zip files with images. - Include a maximum of 20 images in a request. - Limit the .zip file to 100 MB. - Limit each image file to 10 MB. You can also include an image with the image_url parameter.

  • image_url (List[str]) – (optional) An array of URLs of image files (.jpg or .png). - Include a maximum of 20 images in a request. - Limit each image file to 10 MB. - Minimum width and height is 30 pixels, but the service tends to perform better with images that are at least 300 x 300 pixels. Maximum is 5400 pixels for either height or width. You can also include images with the images_file parameter.

  • threshold (float) – (optional) The minimum score a feature must have to be returned.

  • headers (dict) – A dict containing the request headers

Returns

A DetailedResponse containing the result, headers and HTTP status code.

Return type

DetailedResponse
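
A sketch of analyzing one local image against a collection, assuming the visual_recognition client from the setup sketch above; the collection ID and file name are placeholders, and the file is wrapped in the FileWithMetadata model documented later on this page:

  import json
  from ibm_watson.visual_recognition_v4 import FileWithMetadata

  with open('fruit.jpg', 'rb') as image_file:
      result = visual_recognition.analyze(
          collection_ids=['my-collection-id'],
          features=['objects'],
          images_file=[FileWithMetadata(image_file, filename='fruit.jpg')],
          threshold=0.15).get_result()
  print(json.dumps(result, indent=2))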

create_collection(*, name: str = None, description: str = None, **kwargs) → ibm_cloud_sdk_core.detailed_response.DetailedResponse[source]

Create a collection.

Create a collection that can be used to store images. To create a collection without specifying a name and description, include an empty JSON object in the request body. Encode the name and description in UTF-8 if they contain non-ASCII characters. The service assumes UTF-8 encoding if it encounters non-ASCII characters.

Parameters
  • name (str) – (optional) The name of the collection. The name can contain alphanumeric, underscore, hyphen, and dot characters. It cannot begin with the reserved prefix sys-.

  • description (str) – (optional) The description of the collection.

  • headers (dict) – A dict containing the request headers

Returns

A DetailedResponse containing the result, headers and HTTP status code.

Return type

DetailedResponse
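
A sketch of creating a collection, assuming the client from the setup sketch above; the name and description are illustrative:

  collection = visual_recognition.create_collection(
      name='my-collection',
      description='Sample images for object detection').get_result()
  print(collection['collection_id'])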

list_collections(**kwargs) → ibm_cloud_sdk_core.detailed_response.DetailedResponse[source]

List collections.

Retrieves a list of collections for the service instance.

Parameters

headers (dict) – A dict containing the request headers

Returns

A DetailedResponse containing the result, headers and HTTP status code.

Return type

DetailedResponse
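
A sketch of listing collections and checking whether each one is ready for analysis, assuming the client from the setup sketch above:

  collections = visual_recognition.list_collections().get_result()
  for collection in collections['collections']:
      ready = collection['training_status']['objects']['ready']
      print(collection['collection_id'], collection['name'], ready)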

get_collection(collection_id: str, **kwargs) → ibm_cloud_sdk_core.detailed_response.DetailedResponse[source]

Get collection details.

Get details of one collection.

Parameters
  • collection_id (str) – The identifier of the collection.

  • headers (dict) – A dict containing the request headers

Returns

A DetailedResponse containing the result, headers and HTTP status code.

Return type

DetailedResponse

update_collection(collection_id: str, *, name: str = None, description: str = None, **kwargs) → ibm_cloud_sdk_core.detailed_response.DetailedResponse[source]

Update a collection.

Update the name or description of a collection. Encode the name and description in UTF-8 if they contain non-ASCII characters. The service assumes UTF-8 encoding if it encounters non-ASCII characters.

Parameters
  • collection_id (str) – The identifier of the collection.

  • name (str) – (optional) The name of the collection. The name can contain alphanumeric, underscore, hyphen, and dot characters. It cannot begin with the reserved prefix sys-.

  • description (str) – (optional) The description of the collection.

  • headers (dict) – A dict containing the request headers

Returns

A DetailedResponse containing the result, headers and HTTP status code.

Return type

DetailedResponse

delete_collection(collection_id: str, **kwargs) → ibm_cloud_sdk_core.detailed_response.DetailedResponse[source]

Delete a collection.

Delete a collection from the service instance.

Parameters
  • collection_id (str) – The identifier of the collection.

  • headers (dict) – A dict containing the request headers

Returns

A DetailedResponse containing the result, headers and HTTP status code.

Return type

DetailedResponse

get_model_file(collection_id: str, feature: str, model_format: str, **kwargs) → ibm_cloud_sdk_core.detailed_response.DetailedResponse[source]

Get a model.

Download a model that you can deploy to detect objects in images. The collection must include a generated model, which is indicated in the response for the collection details as “rscnn_ready”: true. If the value is false, train or retrain the collection to generate the model. Currently, the model format is specific to Android apps. For more information about how to deploy the model to your app, see the [Watson Visual Recognition on Android](https://github.com/matt-ny/rscnn) project in GitHub.

Parameters
  • collection_id (str) – The identifier of the collection.

  • feature (str) – The feature for the model.

  • model_format (str) – The format of the returned model.

  • headers (dict) – A dict containing the request headers

Returns

A DetailedResponse containing the result, headers and HTTP status code.

Return type

DetailedResponse
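
A sketch of downloading the generated model, assuming the client from the setup sketch above; the collection ID and output file name are placeholders, and the result content is written out as raw bytes:

  model = visual_recognition.get_model_file(
      collection_id='my-collection-id',
      feature='objects',
      model_format='rscnn').get_result()
  with open('rscnn_model.zip', 'wb') as model_file:
      model_file.write(model.content)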

add_images(collection_id: str, *, images_file: List[FileWithMetadata] = None, image_url: List[str] = None, training_data: str = None, **kwargs) → ibm_cloud_sdk_core.detailed_response.DetailedResponse[source]

Add images.

Add images to a collection by URL, by file, or both. Encode the image and .zip file names in UTF-8 if they contain non-ASCII characters. The service assumes UTF-8 encoding if it encounters non-ASCII characters.

Parameters
  • collection_id (str) – The identifier of the collection.

  • images_file (List[FileWithMetadata]) – (optional) An array of image files (.jpg or .png) or .zip files with images. - Include a maximum of 20 images in a request. - Limit the .zip file to 100 MB. - Limit each image file to 10 MB. You can also include an image with the image_url parameter.

  • image_url (List[str]) – (optional) The array of URLs of image files (.jpg or .png). - Include a maximum of 20 images in a request. - Limit each image file to 10 MB. - Minimum width and height is 30 pixels, but the service tends to perform better with images that are at least 300 x 300 pixels. Maximum is 5400 pixels for either height or width. You can also include images with the images_file parameter.

  • training_data (str) – (optional) Training data for a single image. Include training data only if you add one image with the request. The object property can contain alphanumeric, underscore, hyphen, space, and dot characters. It cannot begin with the reserved prefix sys- and must be no longer than 32 characters.

  • headers (dict) – A dict containing the request headers

Returns

A DetailedResponse containing the result, headers and HTTP status code.

Return type

DetailedResponse
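
A sketch of adding a single image by URL together with its training data, assuming the client from the setup sketch above; the collection ID, image URL, object name, and coordinates are placeholders, and training_data is a JSON string whose structure mirrors the TrainingDataObjects model documented later on this page:

  import json

  training_data = json.dumps({'objects': [{
      'object': 'apple',
      'location': {'top': 10, 'left': 10, 'width': 100, 'height': 100}}]})
  added = visual_recognition.add_images(
      collection_id='my-collection-id',
      image_url=['https://example.com/images/apple.jpg'],
      training_data=training_data).get_result()
  for image in added['images']:
      print(image['image_id'])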

list_images(collection_id: str, **kwargs) → ibm_cloud_sdk_core.detailed_response.DetailedResponse[source]

List images.

Retrieves a list of images in a collection.

Parameters
  • collection_id (str) – The identifier of the collection.

  • headers (dict) – A dict containing the request headers

Returns

A DetailedResponse containing the result, headers and HTTP status code.

Return type

DetailedResponse
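
A sketch of listing the images in a collection, assuming the client from the setup sketch above and a placeholder collection ID:

  images = visual_recognition.list_images(
      collection_id='my-collection-id').get_result()
  for image in images['images']:
      print(image['image_id'], image['updated'])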

get_image_details(collection_id: str, image_id: str, **kwargs) → ibm_cloud_sdk_core.detailed_response.DetailedResponse[source]

Get image details.

Get the details of an image in a collection.

Parameters
  • collection_id (str) – The identifier of the collection.

  • image_id (str) – The identifier of the image.

  • headers (dict) – A dict containing the request headers

Returns

A DetailedResponse containing the result, headers and HTTP status code.

Return type

DetailedResponse

delete_image(collection_id: str, image_id: str, **kwargs) → ibm_cloud_sdk_core.detailed_response.DetailedResponse[source]

Delete an image.

Delete one image from a collection.

Parameters
  • collection_id (str) – The identifier of the collection.

  • image_id (str) – The identifier of the image.

  • headers (dict) – A dict containing the request headers

Returns

A DetailedResponse containing the result, headers and HTTP status code.

Return type

DetailedResponse

get_jpeg_image(collection_id: str, image_id: str, *, size: str = None, **kwargs) → ibm_cloud_sdk_core.detailed_response.DetailedResponse[source]

Get a JPEG file of an image.

Download a JPEG representation of an image.

Parameters
  • collection_id (str) – The identifier of the collection.

  • image_id (str) – The identifier of the image.

  • size (str) – (optional) The image size. Specify thumbnail to return a version that maintains the original aspect ratio but is no larger than 200 pixels in the larger dimension. For example, an original 800 x 1000 image is resized to 160 x 200 pixels.

  • headers (dict) – A dict containing the request headers

Returns

A DetailedResponse containing the result, headers and HTTP status code.

Return type

DetailedResponse
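
A sketch of downloading a thumbnail of a stored image, assuming the client from the setup sketch above; the identifiers and output file name are placeholders, and the result content is written out as raw bytes:

  jpeg = visual_recognition.get_jpeg_image(
      collection_id='my-collection-id',
      image_id='my-image-id',
      size='thumbnail').get_result()
  with open('thumbnail.jpg', 'wb') as jpeg_file:
      jpeg_file.write(jpeg.content)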

list_object_metadata(collection_id: str, **kwargs) → ibm_cloud_sdk_core.detailed_response.DetailedResponse[source]

List object metadata.

Retrieves a list of object names in a collection.

Parameters
  • collection_id (str) – The identifier of the collection.

  • headers (dict) – A dict containing the request headers

Returns

A DetailedResponse containing the result, headers and HTTP status code.

Return type

DetailedResponse

update_object_metadata(collection_id: str, object: str, new_object: str, **kwargs) → ibm_cloud_sdk_core.detailed_response.DetailedResponse[source]

Update an object name.

Update the name of an object. A successful request updates the training data for all images that use the object.

Parameters
  • collection_id (str) – The identifier of the collection.

  • object (str) – The name of the object.

  • new_object (str) – The updated name of the object. The name can contain alphanumeric, underscore, hyphen, space, and dot characters. It cannot begin with the reserved prefix sys-.

  • headers (dict) – A dict containing the request headers

Returns

A DetailedResponse containing the result, headers and HTTP status code.

Return type

DetailedResponse
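
A sketch of renaming an object across the training data of a collection, assuming the client from the setup sketch above; the collection ID and both object names are placeholders:

  updated = visual_recognition.update_object_metadata(
      collection_id='my-collection-id',
      object='aple',  # existing (misspelled) object name
      new_object='apple').get_result()
  print(updated['object'], updated['count'])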

get_object_metadata(collection_id: str, object: str, **kwargs) → ibm_cloud_sdk_core.detailed_response.DetailedResponse[source]

Get object metadata.

Get the number of bounding boxes for a single object in a collection.

Parameters
  • collection_id (str) – The identifier of the collection.

  • object (str) – The name of the object.

  • headers (dict) – A dict containing the request headers

Returns

A DetailedResponse containing the result, headers and HTTP status code.

Return type

DetailedResponse

delete_object(collection_id: str, object: str, **kwargs) → ibm_cloud_sdk_core.detailed_response.DetailedResponse[source]

Delete an object.

Delete one object from a collection. A successful request deletes the training data from all images that use the object.

Parameters
  • collection_id (str) – The identifier of the collection.

  • object (str) – The name of the object.

  • headers (dict) – A dict containing the request headers

Returns

A DetailedResponse containing the result, headers and HTTP status code.

Return type

DetailedResponse

train(collection_id: str, **kwargs) → ibm_cloud_sdk_core.detailed_response.DetailedResponse[source]

Train a collection.

Start training on images in a collection. The collection must have enough training data and untrained data (training_status.objects.data_changed is true). If training is in progress, the request queues the next training job.

Parameters
  • collection_id (str) – The identifier of the collection.

  • headers (dict) – A dict containing the request headers

Returns

A DetailedResponse containing the result, headers and HTTP status code.

Return type

DetailedResponse
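
A sketch of starting a training job, assuming the client from the setup sketch above and a placeholder collection ID:

  import json

  result = visual_recognition.train(
      collection_id='my-collection-id').get_result()
  print(json.dumps(result, indent=2))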

add_image_training_data(collection_id: str, image_id: str, *, objects: List[TrainingDataObject] = None, **kwargs) → ibm_cloud_sdk_core.detailed_response.DetailedResponse[source]

Add training data to an image.

Add, update, or delete training data for an image. Encode the object name in UTF-8 if it contains non-ASCII characters. The service assumes UTF-8 encoding if it encounters non-ASCII characters. Elements in the request replace the existing elements. - To update the training data, provide both the unchanged and the new or changed values. - To delete the training data, provide an empty value for the training data.

Parameters
  • collection_id (str) – The identifier of the collection.

  • image_id (str) – The identifier of the image.

  • objects (List[TrainingDataObject]) – (optional) Training data for specific objects.

  • headers (dict) – A dict containing the request headers

Returns

A DetailedResponse containing the result, headers and HTTP status code.

Return type

DetailedResponse
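
A sketch of adding one bounding box for one object to an image, assuming the client from the setup sketch above; the identifiers, object name, and coordinates are placeholders, and the TrainingDataObject and Location models are documented later on this page:

  from ibm_watson.visual_recognition_v4 import Location, TrainingDataObject

  result = visual_recognition.add_image_training_data(
      collection_id='my-collection-id',
      image_id='my-image-id',
      objects=[TrainingDataObject(
          object='apple',
          location=Location(top=10, left=10, width=100, height=100))]
  ).get_result()
  print(result)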

get_training_usage(*, start_time: str = None, end_time: str = None, **kwargs) → ibm_cloud_sdk_core.detailed_response.DetailedResponse[source]

Get training usage.

Get information about completed training events. You can use this information to determine how close you are to the training limits for the month.

Parameters
  • start_time (str) – (optional) The earliest day to include training events. Specify dates in YYYY-MM-DD format. If empty or not specified, the earliest training event is included.

  • end_time (str) – (optional) The most recent day to include training events. Specify dates in YYYY-MM-DD format. All events for the day are included. If empty or not specified, the current day is used. Specify the same value as start_time to request events for a single day.

  • headers (dict) – A dict containing the request headers

Returns

A DetailedResponse containing the result, headers and HTTP status code.

Return type

DetailedResponse
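
A sketch of checking training usage for a date range, assuming the client from the setup sketch above; the dates are placeholders in YYYY-MM-DD format:

  usage = visual_recognition.get_training_usage(
      start_time='2021-01-01',
      end_time='2021-01-31').get_result()
  print(usage.get('completed_events'), usage.get('trained_images'))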

delete_user_data(customer_id: str, **kwargs) → ibm_cloud_sdk_core.detailed_response.DetailedResponse[source]

Delete labeled data.

Deletes all data associated with a specified customer ID. The method has no effect if no data is associated with the customer ID. You associate a customer ID with data by passing the X-Watson-Metadata header with a request that passes data. For more information about personal data and customer IDs, see [Information security](https://cloud.ibm.com/docs/visual-recognition?topic=visual-recognition-information-security).

Parameters
  • customer_id (str) – The customer ID for which all data is to be deleted.

  • headers (dict) – A dict containing the request headers

Returns

A DetailedResponse containing the result, headers and HTTP status code.

Return type

DetailedResponse
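
A sketch of deleting all data for one customer ID, assuming the client from the setup sketch above; the customer ID is a placeholder matching the X-Watson-Metadata value sent with earlier requests:

  response = visual_recognition.delete_user_data(
      customer_id='my_customer_id')
  print(response.get_status_code())  # HTTP status code of the deletion request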

class AnalyzeEnums[source]

Bases: object

class Features(value)[source]

Bases: enum.Enum

The features to analyze.

OBJECTS = 'objects'
class GetModelFileEnums[source]

Bases: object

class Feature(value)[source]

Bases: enum.Enum

The feature for the model.

OBJECTS = 'objects'
class ModelFormat(value)[source]

Bases: enum.Enum

The format of the returned model.

RSCNN = 'rscnn'
class GetJpegImageEnums[source]

Bases: object

class Size(value)[source]

Bases: enum.Enum

The image size. Specify thumbnail to return a version that maintains the original aspect ratio but is no larger than 200 pixels in the larger dimension. For example, an original 800 x 1000 image is resized to 160 x 200 pixels.

FULL = 'full'
THUMBNAIL = 'thumbnail'
class AnalyzeResponse(images: List[Image], *, warnings: List[Warning] = None, trace: str = None)[source]

Bases: object

Results for all images.

Attr List[Image] images

Analyzed images.

Attr List[Warning] warnings

(optional) Information about what might cause less than optimal output.

Attr str trace

(optional) A unique identifier of the request. Included only when an error or warning is returned.

classmethod from_dict(_dict: Dict) → ibm_watson.visual_recognition_v4.AnalyzeResponse[source]

Initialize an AnalyzeResponse object from a json dictionary.

to_dict() → Dict[source]

Return a json dictionary representing this model.
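
A sketch of converting the raw dictionary returned by analyze() into typed model objects, assuming a result dictionary such as the one produced in the analyze sketch earlier on this page:

  from ibm_watson.visual_recognition_v4 import AnalyzeResponse

  analysis = AnalyzeResponse.from_dict(result)  # 'result' from analyze().get_result()
  for image in analysis.images:
      for collection in image.objects.collections or []:
          for detected in collection.objects:
              print(detected.object, detected.score,
                    detected.location.top, detected.location.left)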

class Collection(collection_id: str, name: str, description: str, created: datetime.datetime, updated: datetime.datetime, image_count: int, training_status: ibm_watson.visual_recognition_v4.TrainingStatus)[source]

Bases: object

Details about a collection.

Attr str collection_id

The identifier of the collection.

Attr str name

The name of the collection.

Attr str description

The description of the collection.

Attr datetime created

Date and time in Coordinated Universal Time (UTC) that the collection was created.

Attr datetime updated

Date and time in Coordinated Universal Time (UTC) that the collection was most recently updated.

Attr int image_count

Number of images in the collection.

Attr TrainingStatus training_status

Training status information for the collection.

classmethod from_dict(_dict: Dict) → ibm_watson.visual_recognition_v4.Collection[source]

Initialize a Collection object from a json dictionary.

to_dict() → Dict[source]

Return a json dictionary representing this model.

class CollectionObjects(collection_id: str, objects: List[ObjectDetail])[source]

Bases: object

The objects in a collection that are detected in an image.

Attr str collection_id

The identifier of the collection.

Attr List[ObjectDetail] objects

The identified objects in a collection.

classmethod from_dict(_dict: Dict) → ibm_watson.visual_recognition_v4.CollectionObjects[source]

Initialize a CollectionObjects object from a json dictionary.

to_dict() → Dict[source]

Return a json dictionary representing this model.

class CollectionsList(collections: List[Collection])[source]

Bases: object

A container for the list of collections.

Attr List[Collection] collections

The collections in this service instance.

classmethod from_dict(_dict: Dict) → ibm_watson.visual_recognition_v4.CollectionsList[source]

Initialize a CollectionsList object from a json dictionary.

to_dict() → Dict[source]

Return a json dictionary representing this model.

class DetectedObjects(*, collections: List[CollectionObjects] = None)[source]

Bases: object

Container for the list of collections that have objects detected in an image.

Attr List[CollectionObjects] collections

(optional) The collections with identified objects.

classmethod from_dict(_dict: Dict) → ibm_watson.visual_recognition_v4.DetectedObjects[source]

Initialize a DetectedObjects object from a json dictionary.

to_dict() → Dict[source]

Return a json dictionary representing this model.

class Error(code: str, message: str, *, more_info: str = None, target: Optional[ibm_watson.visual_recognition_v4.ErrorTarget] = None)[source]

Bases: object

Details about an error.

Attr str code

Identifier of the problem.

Attr str message

An explanation of the problem with possible solutions.

Attr str more_info

(optional) A URL for more information about the solution.

Attr ErrorTarget target

(optional) Details about the specific area of the problem.

classmethod from_dict(_dict: Dict) → ibm_watson.visual_recognition_v4.Error[source]

Initialize an Error object from a json dictionary.

to_dict() → Dict[source]

Return a json dictionary representing this model.

class CodeEnum(value)[source]

Bases: enum.Enum

Identifier of the problem.

INVALID_FIELD = 'invalid_field'
INVALID_HEADER = 'invalid_header'
INVALID_METHOD = 'invalid_method'
MISSING_FIELD = 'missing_field'
SERVER_ERROR = 'server_error'
class ErrorTarget(type: str, name: str)[source]

Bases: object

Details about the specific area of the problem.

Attr str type

The parameter or property that is the focus of the problem.

Attr str name

The property that is identified with the problem.

classmethod from_dict(_dict: Dict) → ibm_watson.visual_recognition_v4.ErrorTarget[source]

Initialize an ErrorTarget object from a json dictionary.

to_dict() → Dict[source]

Return a json dictionary representing this model.

class TypeEnum(value)[source]

Bases: enum.Enum

The parameter or property that is the focus of the problem.

FIELD = 'field'
PARAMETER = 'parameter'
HEADER = 'header'
class Image(source: ibm_watson.visual_recognition_v4.ImageSource, dimensions: ibm_watson.visual_recognition_v4.ImageDimensions, objects: ibm_watson.visual_recognition_v4.DetectedObjects, *, errors: List[Error] = None)[source]

Bases: object

Details about an image.

Attr ImageSource source

The source type of the image.

Attr ImageDimensions dimensions

Height and width of an image.

Attr DetectedObjects objects

Container for the list of collections that have objects detected in an image.

Attr List[Error] errors

(optional) A container for the problems in the request.

classmethod from_dict(_dict: Dict) → ibm_watson.visual_recognition_v4.Image[source]

Initialize an Image object from a json dictionary.

to_dict() → Dict[source]

Return a json dictionary representing this model.

class ImageDetails(source: ibm_watson.visual_recognition_v4.ImageSource, *, image_id: str = None, updated: datetime.datetime = None, created: datetime.datetime = None, dimensions: Optional[ibm_watson.visual_recognition_v4.ImageDimensions] = None, errors: List[Error] = None, training_data: Optional[ibm_watson.visual_recognition_v4.TrainingDataObjects] = None)[source]

Bases: object

Details about an image.

Attr str image_id

(optional) The identifier of the image.

Attr datetime updated

(optional) Date and time in Coordinated Universal Time (UTC) that the image was most recently updated.

Attr datetime created

(optional) Date and time in Coordinated Universal Time (UTC) that the image was created.

Attr ImageSource source

The source type of the image.

Attr ImageDimensions dimensions

(optional) Height and width of an image.

Attr List[Error] errors

(optional) Details about the errors.

Attr TrainingDataObjects training_data

(optional) Training data for all objects.

classmethod from_dict(_dict: Dict) → ibm_watson.visual_recognition_v4.ImageDetails[source]

Initialize an ImageDetails object from a json dictionary.

to_dict() → Dict[source]

Return a json dictionary representing this model.

class ImageDetailsList(*, images: List[ImageDetails] = None, warnings: List[Warning] = None, trace: str = None)[source]

Bases: object

List of information about the images.

Attr List[ImageDetails] images

(optional) The images in the collection.

Attr List[Warning] warnings

(optional) Information about what might cause less than optimal output.

Attr str trace

(optional) A unique identifier of the request. Included only when an error or warning is returned.

classmethod from_dict(_dict: Dict) → ibm_watson.visual_recognition_v4.ImageDetailsList[source]

Initialize an ImageDetailsList object from a json dictionary.

to_dict() → Dict[source]

Return a json dictionary representing this model.

class ImageDimensions(*, height: int = None, width: int = None)[source]

Bases: object

Height and width of an image.

Attr int height

(optional) Height in pixels of the image.

Attr int width

(optional) Width in pixels of the image.

classmethod from_dict(_dict: Dict) → ibm_watson.visual_recognition_v4.ImageDimensions[source]

Initialize an ImageDimensions object from a json dictionary.

to_dict() → Dict[source]

Return a json dictionary representing this model.

class ImageSource(type: str, *, filename: str = None, archive_filename: str = None, source_url: str = None, resolved_url: str = None)[source]

Bases: object

The source type of the image.

Attr str type

The source type of the image.

Attr str filename

(optional) Name of the image file if uploaded. Not returned when the image is passed by URL.

Attr str archive_filename

(optional) Name of the .zip file of images if uploaded. Not returned when the image is passed directly or by URL.

Attr str source_url

(optional) Source of the image before any redirects. Not returned when the image is uploaded.

Attr str resolved_url

(optional) Fully resolved URL of the image after redirects are followed. Not returned when the image is uploaded.

classmethod from_dict(_dict: Dict) → ibm_watson.visual_recognition_v4.ImageSource[source]

Initialize an ImageSource object from a json dictionary.

to_dict() → Dict[source]

Return a json dictionary representing this model.

class TypeEnum(value)[source]

Bases: enum.Enum

The source type of the image.

FILE = 'file'
URL = 'url'
class ImageSummary(*, image_id: str = None, updated: datetime.datetime = None)[source]

Bases: object

Basic information about an image.

Attr str image_id

(optional) The identifier of the image.

Attr datetime updated

(optional) Date and time in Coordinated Universal Time (UTC) that the image was most recently updated.

classmethod from_dict(_dict: Dict) → ibm_watson.visual_recognition_v4.ImageSummary[source]

Initialize an ImageSummary object from a json dictionary.

to_dict() → Dict[source]

Return a json dictionary representing this model.

class ImageSummaryList(images: List[ImageSummary])[source]

Bases: object

List of images.

Attr List[ImageSummary] images

The images in the collection.

classmethod from_dict(_dict: Dict) → ibm_watson.visual_recognition_v4.ImageSummaryList[source]

Initialize an ImageSummaryList object from a json dictionary.

to_dict() → Dict[source]

Return a json dictionary representing this model.

class Location(top: int, left: int, width: int, height: int)[source]

Bases: object

Defines the location of the bounding box around the object.

Attr int top

Y-position of top-left pixel of the bounding box.

Attr int left

X-position of top-left pixel of the bounding box.

Attr int width

Width in pixels of the bounding box.

Attr int height

Height in pixels of the bounding box.

classmethod from_dict(_dict: Dict) → ibm_watson.visual_recognition_v4.Location[source]

Initialize a Location object from a json dictionary.

to_dict() → Dict[source]

Return a json dictionary representing this model.

class ObjectDetail(object: str, location: ibm_watson.visual_recognition_v4.Location, score: float)[source]

Bases: object

Details about an object in the collection.

Attr str object

The label for the object.

Attr Location location

Defines the location of the bounding box around the object.

Attr float score

Confidence score for the object in the range of 0 to 1. A higher score indicates greater likelihood that the object is depicted at this location in the image.

classmethod from_dict(_dict: Dict) → ibm_watson.visual_recognition_v4.ObjectDetail[source]

Initialize an ObjectDetail object from a json dictionary.

to_dict() → Dict[source]

Return a json dictionary representing this model.

class ObjectMetadata(*, object: str = None, count: int = None)[source]

Bases: object

Basic information about an object.

Attr str object

(optional) The name of the object.

Attr int count

(optional) Number of bounding boxes with this object name in the collection.

classmethod from_dict(_dict: Dict) → ibm_watson.visual_recognition_v4.ObjectMetadata[source]

Initialize an ObjectMetadata object from a json dictionary.

to_dict() → Dict[source]

Return a json dictionary representing this model.

class ObjectMetadataList(object_count: int, *, objects: List[ObjectMetadata] = None)[source]

Bases: object

List of objects.

Attr int object_count

Number of unique named objects in the collection.

Attr List[ObjectMetadata] objects

(optional) The objects in the collection.

classmethod from_dict(_dict: Dict) → ibm_watson.visual_recognition_v4.ObjectMetadataList[source]

Initialize an ObjectMetadataList object from a json dictionary.

to_dict() → Dict[source]

Return a json dictionary representing this model.

class ObjectTrainingStatus(ready: bool, in_progress: bool, data_changed: bool, latest_failed: bool, rscnn_ready: bool, description: str)[source]

Bases: object

Training status for the objects in the collection.

Attr bool ready

Whether you can analyze images in the collection with the objects feature.

Attr bool in_progress

Whether training is in progress.

Attr bool data_changed

Whether there are changes to the training data since the most recent training.

Attr bool latest_failed

Whether the most recent training failed.

Attr bool rscnn_ready

Whether the model can be downloaded after the training status is ready.

Attr str description

Details about the training. If training is in progress, includes information about the status. If training is not in progress, includes a success message or information about why training failed.

classmethod from_dict(_dict: Dict) → ibm_watson.visual_recognition_v4.ObjectTrainingStatus[source]

Initialize an ObjectTrainingStatus object from a json dictionary.

to_dict() → Dict[source]

Return a json dictionary representing this model.

class TrainingDataObject(*, object: str = None, location: Optional[ibm_watson.visual_recognition_v4.Location] = None)[source]

Bases: object

Details about the training data.

Attr str object

(optional) The name of the object.

Attr Location location

(optional) Defines the location of the bounding box around the object.

classmethod from_dict(_dict: Dict) → ibm_watson.visual_recognition_v4.TrainingDataObject[source]

Initialize a TrainingDataObject object from a json dictionary.

to_dict() → Dict[source]

Return a json dictionary representing this model.

class TrainingDataObjects(*, objects: List[TrainingDataObject] = None)[source]

Bases: object

Training data for all objects.

Attr List[TrainingDataObject] objects

(optional) Training data for specific objects.

classmethod from_dict(_dict: Dict) → ibm_watson.visual_recognition_v4.TrainingDataObjects[source]

Initialize a TrainingDataObjects object from a json dictionary.

to_dict() → Dict[source]

Return a json dictionary representing this model.

class TrainingEvent(*, type: str = None, collection_id: str = None, completion_time: datetime.datetime = None, status: str = None, image_count: int = None)[source]

Bases: object

Details about the training event.

Attr str type

(optional) Trained object type. Only objects is currently supported.

Attr str collection_id

(optional) Identifier of the trained collection.

Attr datetime completion_time

(optional) Date and time in Coordinated Universal Time (UTC) that training on the collection finished.

Attr str status

(optional) Training status of the training event.

Attr int image_count

(optional) The total number of images that were used in training for this training event.

classmethod from_dict(_dict: Dict) → ibm_watson.visual_recognition_v4.TrainingEvent[source]

Initialize a TrainingEvent object from a json dictionary.

to_dict() → Dict[source]

Return a json dictionary representing this model.

class TypeEnum(value)[source]

Bases: enum.Enum

Trained object type. Only objects is currently supported.

OBJECTS = 'objects'
class StatusEnum(value)[source]

Bases: enum.Enum

Training status of the training event.

FAILED = 'failed'
SUCCEEDED = 'succeeded'
class TrainingEvents(*, start_time: datetime.datetime = None, end_time: datetime.datetime = None, completed_events: int = None, trained_images: int = None, events: List[TrainingEvent] = None)[source]

Bases: object

Details about the training events.

Attr datetime start_time

(optional) The starting day for the returned training events in Coordinated Universal Time (UTC). If not specified in the request, it identifies the earliest training event.

Attr datetime end_time

(optional) The ending day for the returned training events in Coordinated Universal Time (UTC). If not specified in the request, the current day is used.

Attr int completed_events

(optional) The total number of training events in the response for the start and end times.

Attr int trained_images

(optional) The total number of images that were used in training for the start and end times.

Attr List[TrainingEvent] events

(optional) The completed training events for the start and end time.

classmethod from_dict(_dict: Dict) → ibm_watson.visual_recognition_v4.TrainingEvents[source]

Initialize a TrainingEvents object from a json dictionary.

to_dict() → Dict[source]

Return a json dictionary representing this model.

class TrainingStatus(objects: ibm_watson.visual_recognition_v4.ObjectTrainingStatus)[source]

Bases: object

Training status information for the collection.

Attr ObjectTrainingStatus objects

Training status for the objects in the collection.

classmethod from_dict(_dict: Dict) → ibm_watson.visual_recognition_v4.TrainingStatus[source]

Initialize a TrainingStatus object from a json dictionary.

to_dict() → Dict[source]

Return a json dictionary representing this model.

class UpdateObjectMetadata(object: str, count: int)[source]

Bases: object

Basic information about an updated object.

Attr str object

The updated name of the object. The name can contain alphanumeric, underscore, hyphen, space, and dot characters. It cannot begin with the reserved prefix sys-.

Attr int count

Number of bounding boxes in the collection with the updated object name.

classmethod from_dict(_dict: Dict) → ibm_watson.visual_recognition_v4.UpdateObjectMetadata[source]

Initialize an UpdateObjectMetadata object from a json dictionary.

to_dict() → Dict[source]

Return a json dictionary representing this model.

class Warning(code: str, message: str, *, more_info: str = None)[source]

Bases: object

Details about a problem.

Attr str code

Identifier of the problem.

Attr str message

An explanation of the problem with possible solutions.

Attr str more_info

(optional) A URL for more information about the solution.

classmethod from_dict(_dict: Dict) → ibm_watson.visual_recognition_v4.Warning[source]

Initialize a Warning object from a json dictionary.

to_dict() → Dict[source]

Return a json dictionary representing this model.

class CodeEnum(value)[source]

Bases: enum.Enum

Identifier of the problem.

INVALID_FIELD = 'invalid_field'
INVALID_HEADER = 'invalid_header'
INVALID_METHOD = 'invalid_method'
MISSING_FIELD = 'missing_field'
SERVER_ERROR = 'server_error'
class FileWithMetadata(data: BinaryIO, *, filename: str = None, content_type: str = None)[source]

Bases: object

A file with its associated metadata.

Attr BinaryIO data

The data / content for the file.

Attr str filename

(optional) The filename of the file.

Attr str content_type

(optional) The content type of the file.

classmethod from_dict(_dict: Dict) → ibm_watson.visual_recognition_v4.FileWithMetadata[source]

Initialize a FileWithMetadata object from a json dictionary.

to_dict() → Dict[source]

Return a json dictionary representing this model.