Class AudioMetricsDetails
- All Implemented Interfaces:
com.ibm.cloud.sdk.core.service.model.ObjectModel
public class AudioMetricsDetails
extends com.ibm.cloud.sdk.core.service.model.GenericModel
-
Constructor Summary
| Constructor | Description |
| --- | --- |
| `AudioMetricsDetails()` | |
-
Method Summary
| Modifier and Type | Method | Description |
| --- | --- | --- |
| `List<AudioMetricsHistogramBin>` | `getClippingRate()` | Gets the clippingRate. |
| `List<AudioMetricsHistogramBin>` | `getDirectCurrentOffset()` | Gets the directCurrentOffset. |
| `Float` | `getEndTime()` | Gets the endTime. |
| `Float` | `getHighFrequencyLoss()` | Gets the highFrequencyLoss. |
| `List<AudioMetricsHistogramBin>` | `getNonSpeechLevel()` | Gets the nonSpeechLevel. |
| `Float` | `getSignalToNoiseRatio()` | Gets the signalToNoiseRatio. |
| `List<AudioMetricsHistogramBin>` | `getSpeechLevel()` | Gets the speechLevel. |
| `Float` | `getSpeechRatio()` | Gets the speechRatio. |
| `Boolean` | `isXFinal()` | Gets the xFinal. |

Methods inherited from class com.ibm.cloud.sdk.core.service.model.GenericModel
equals, hashCode, toString
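For orientation, the sketch below shows how the getters on this class might be read together. It assumes the standard Watson Java SDK package layout (`com.ibm.watson.speech_to_text.v1.model`) and that an `AudioMetricsDetails` instance has already been obtained, for example from the audio metrics of a recognition response; neither of those details is defined by this class itself.

```java
import java.util.List;

import com.ibm.watson.speech_to_text.v1.model.AudioMetricsDetails;
import com.ibm.watson.speech_to_text.v1.model.AudioMetricsHistogramBin;

public class AudioMetricsDetailsExample {

  // Prints the scalar metrics of an already-obtained AudioMetricsDetails.
  // The fields are boxed types and print as "null" when the service omits them.
  static void printScalarMetrics(AudioMetricsDetails details) {
    System.out.println("end time (s):        " + details.getEndTime());
    System.out.println("speech ratio:        " + details.getSpeechRatio());
    System.out.println("signal-to-noise:     " + details.getSignalToNoiseRatio());
    System.out.println("high-frequency loss: " + details.getHighFrequencyLoss());
    System.out.println("final metrics:       " + details.isXFinal());
  }
}
```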
-
Constructor Details
-
AudioMetricsDetails
public AudioMetricsDetails()
-
Method Details
-
isXFinal
public Boolean isXFinal()
Gets the xFinal.
If `true`, indicates the end of the audio stream, meaning that transcription is complete. Currently, the field is always `true`. The service returns metrics just once per audio stream. The results provide aggregated audio metrics that pertain to the complete audio stream.
- Returns:
- the xFinal
-
getEndTime
public Float getEndTime()
Gets the endTime.
The end time in seconds of the block of audio to which the metrics apply.
- Returns:
- the endTime
-
getSignalToNoiseRatio
public Float getSignalToNoiseRatio()
Gets the signalToNoiseRatio.
The signal-to-noise ratio (SNR) for the audio signal. The value indicates the ratio of speech to noise in the audio. A valid value lies in the range of 0 to 100 decibels (dB). The service omits the field if it cannot compute the SNR for the audio.
- Returns:
- the signalToNoiseRatio
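Because the service omits this field when it cannot compute the SNR, the getter can return `null`. A small, illustrative check, continuing the earlier sketch with `details` an `AudioMetricsDetails` instance:

```java
Float snr = details.getSignalToNoiseRatio();
if (snr == null) {
  // The service could not compute the SNR for this audio.
  System.out.println("SNR not available");
} else {
  System.out.printf("SNR: %.1f dB%n", snr);
}
```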
-
getSpeechRatio
public Float getSpeechRatio()
Gets the speechRatio.
The ratio of speech to non-speech segments in the audio signal. The value lies in the range of 0.0 to 1.0.
- Returns:
- the speechRatio
-
getHighFrequencyLoss
public Float getHighFrequencyLoss()
Gets the highFrequencyLoss.
The probability that the audio signal is missing the upper half of its frequency content.
* A value close to 1.0 typically indicates artificially up-sampled audio, which negatively impacts the accuracy of the transcription results.
* A value at or near 0.0 indicates that the audio signal is good and has a full spectrum.
* A value around 0.5 means that detection of the frequency content is unreliable or not available.
- Returns:
- the highFrequencyLoss
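One illustrative way to act on this value is sketched below; the 0.1 and 0.9 cutoffs are arbitrary example thresholds, not values defined by the service.

```java
Float loss = details.getHighFrequencyLoss();
if (loss != null) {
  if (loss >= 0.9f) {
    // Likely artificially up-sampled audio; transcription accuracy may suffer.
    System.out.println("audio appears up-sampled");
  } else if (loss <= 0.1f) {
    // Full-spectrum audio.
    System.out.println("audio has a full spectrum");
  } else {
    // Detection of the frequency content is unreliable or inconclusive.
    System.out.println("frequency-content detection inconclusive");
  }
}
```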
-
getDirectCurrentOffset
public List<AudioMetricsHistogramBin> getDirectCurrentOffset()
Gets the directCurrentOffset.
An array of `AudioMetricsHistogramBin` objects that defines a histogram of the cumulative direct current (DC) component of the audio signal.
- Returns:
- the directCurrentOffset
-
getClippingRate
public List<AudioMetricsHistogramBin> getClippingRate()
Gets the clippingRate.
An array of `AudioMetricsHistogramBin` objects that defines a histogram of the clipping rate for the audio segments. The clipping rate is defined as the fraction of samples in the segment that reach the maximum or minimum value that is offered by the audio quantization range. The service auto-detects either a 16-bit Pulse-Code Modulation (PCM) audio range (-32768 to +32767) or a unit range (-1.0 to +1.0). The clipping rate is between 0.0 and 1.0, with higher values indicating possible degradation of speech recognition.
- Returns:
- the clippingRate
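A sketch of walking the clipping-rate histogram is shown below. It continues the earlier example and assumes that `AudioMetricsHistogramBin` exposes `getBegin()`, `getEnd()`, and `getCount()` accessors for the bin boundaries and the number of segments in the bin.

```java
List<AudioMetricsHistogramBin> clipping = details.getClippingRate();
if (clipping != null) {
  for (AudioMetricsHistogramBin bin : clipping) {
    // Each bin covers a sub-range of the 0.0-1.0 clipping rate and counts
    // the audio segments whose clipping rate falls into that range.
    System.out.printf("clipping rate %.2f-%.2f: %s segments%n",
        bin.getBegin(), bin.getEnd(), bin.getCount());
  }
}
```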
-
getSpeechLevel
public List<AudioMetricsHistogramBin> getSpeechLevel()
Gets the speechLevel.
An array of `AudioMetricsHistogramBin` objects that defines a histogram of the signal level in segments of the audio that contain speech. The signal level is computed as the Root-Mean-Square (RMS) value in a decibel (dB) scale normalized to the range 0.0 (minimum level) to 1.0 (maximum level).
- Returns:
- the speechLevel
-
getNonSpeechLevel
public List<AudioMetricsHistogramBin> getNonSpeechLevel()
Gets the nonSpeechLevel.
An array of `AudioMetricsHistogramBin` objects that defines a histogram of the signal level in segments of the audio that do not contain speech. The signal level is computed as the Root-Mean-Square (RMS) value in a decibel (dB) scale normalized to the range 0.0 (minimum level) to 1.0 (maximum level).
- Returns:
- the nonSpeechLevel
-
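As one illustration of interpreting these histograms, the helper below (continuing the same hypothetical example class, and again assuming the `getBegin()`/`getEnd()`/`getCount()` accessors on `AudioMetricsHistogramBin`) returns the midpoint of the most populated bin. Comparing its result for `getSpeechLevel()` and `getNonSpeechLevel()` gives a rough sense of how far speech sits above the background level on the normalized 0.0 to 1.0 scale.

```java
// Midpoint of the most populated bin of a level histogram, or null if the
// histogram is null or empty.
static Float modalLevel(List<AudioMetricsHistogramBin> histogram) {
  if (histogram == null) {
    return null;
  }
  AudioMetricsHistogramBin busiest = null;
  for (AudioMetricsHistogramBin bin : histogram) {
    if (busiest == null || bin.getCount() > busiest.getCount()) {
      busiest = bin;
    }
  }
  return busiest == null ? null : (busiest.getBegin() + busiest.getEnd()) / 2f;
}
```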