Node.js client library to use the Watson APIs.
On 9 August 2021, IBM announced the deprecation of the Natural Language Classifier service. The service will no longer be available from 8 August 2022. As of 9 September 2021, you will not be able to create new instances. Existing instances will be supported until 8 August 2022. Any instance that still exists on that date will be deleted.
As an alternative, we encourage you to consider migrating to the Natural Language Understanding service on IBM Cloud that uses deep learning to extract data and insights from text such as keywords, categories, sentiment, emotion, and syntax, along with advanced multi-label text classification capabilities, to provide even richer insights for your business or industry. For more information, see Migrating to Natural Language Understanding.
Version v6.0.0 of the SDK has been released and includes a number of breaking changes - see what's changed in the migration guide.
Watson API endpoint URLs at watsonplatform.net are changing and will not work after 26 May 2021. Update your calls to use the newer endpoint URLs. For more information, see https://cloud.ibm.com/docs/watson?topic=watson-endpoint-change.
IBM Watson™ Personality Insights is discontinued. For a period of one year from 1 December 2020, you will still be able to use Watson Personality Insights. However, as of 1 December 2021, the offering will no longer be available.
As an alternative, we encourage you to consider migrating to IBM Watson™ Natural Language Understanding, a service on IBM Cloud® that uses deep learning to extract data and insights from text such as keywords, categories, sentiment, emotion, and syntax to provide insights for your business or industry. For more information, see About Natural Language Understanding.
IBM Watson™ Visual Recognition is discontinued. Existing instances are supported until 1 December 2021, but as of 7 January 2021, you can't create new instances. Any instance that still exists on 1 December 2021 will be deleted.
IBM Watson™ Compare and Comply is discontinued. Existing instances are supported until 30 November 2021, but as of 1 December 2020, you can't create instances. Any instance that exists on 30 November 2021 will be deleted. Consider migrating to Watson Discovery Premium on IBM Cloud for your Compare and Comply use cases. To start the migration process, visit https://ibm.biz/contact-wdc-premium.
The SDK will no longer be tested with Node versions 6 and 8. Support will be officially dropped in v5.
This package has been moved under the name ibm-watson. The package is still available at watson-developer-cloud, but that name will no longer receive updates. Use ibm-watson to stay up to date.
npm install ibm-watson
import DiscoveryV1 from 'ibm-watson/discovery/v1';
import { IamAuthenticator } from 'ibm-watson/auth';
const discoveryClient = new DiscoveryV1({
authenticator: new IamAuthenticator({ apikey: '{apikey}' }),
version: '{version}',
});
// ...
The examples folder has basic and advanced examples. The examples within each service assume that you already have service credentials.
Starting with v5.0.0, the SDK should work in the browser, out of the box, with most bundlers.
See the examples/ folder for Browserify and Webpack client-side SDK examples (with server-side generation of auth tokens).
Note: not all services currently support CORS, and therefore not all services can be used client-side. Of those that do, most require an auth token to be generated server-side via the Authorization Service.
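For illustration only, one way to mint short-lived IAM access tokens server-side is with the IamTokenManager exported from ibm-watson/auth (this is a sketch assuming an Express server and a placeholder API key, not the Authorization Service mentioned above):
const express = require('express');
const { IamTokenManager } = require('ibm-watson/auth');

const app = express();
// Keep the API key on the server; hand out only short-lived access tokens to the browser
const tokenManager = new IamTokenManager({ apikey: '<apikey>' });

app.get('/api/token', (req, res) => {
  tokenManager
    .getToken()
    .then(token => res.json({ accessToken: token }))
    .catch(err => res.status(500).send(err.message));
});

app.listen(3000);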
Watson services are migrating to token-based Identity and Access Management (IAM) authentication.
Authentication is accomplished using dedicated Authenticators for each authentication scheme. Import authenticators from ibm-watson/auth, or rely on externally configured credentials, which will be read from a credentials file or environment variables.
To learn more about the Authenticators and how to use them with your services, see the detailed documentation.
To find out which authentication to use, view the service credentials. You find the service credentials for authentication the same way for all Watson services: open your service instance in the IBM Cloud dashboard and go to its Manage page. On this page, you should be able to see your credentials for accessing your service instance.
In your code, you can use these values in the service constructor or with a method call after instantiating your service.
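For example, here is a minimal sketch (the service, version date, and placeholder values are illustrative; substitute the credentials from your own instance):
const DiscoveryV1 = require('ibm-watson/discovery/v1');
const { IamAuthenticator } = require('ibm-watson/auth');

// Pass the credentials to the constructor...
const discovery = new DiscoveryV1({
  version: '2019-04-30',
  authenticator: new IamAuthenticator({ apikey: '<apikey>' }),
  serviceUrl: '<url>',
});

// ...or set/override a value with a method call after instantiating the service
discovery.setServiceUrl('<url>');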
There are two ways to supply the credentials you found above to the SDK for authentication:
With a credentials file, you just need to put the file in the right place and the SDK will do the work of parsing it and authenticating. You can get this file by clicking the Download button for the credentials in the Manage tab of your service instance.
The file downloaded will be called ibm-credentials.env. This is the name the SDK will search for, and it must be preserved unless you want to configure the file path (more on that later). The SDK will look for your ibm-credentials.env file in a few standard locations (in order); you can also point the SDK at an explicit location with the IBM_CREDENTIALS_FILE environment variable described below.
As long as you set that up correctly, you don't have to worry about setting any authentication options in your code. So, for example, if you created and downloaded the credential file for your Discovery instance, you just need to do the following:
const DiscoveryV1 = require('ibm-watson/discovery/v1');
const discovery = new DiscoveryV1({ version: '2019-02-01' });
And that's it!
If you're using more than one service at a time in your code and get two different ibm-credentials.env files, just put the contents together in one ibm-credentials.env file and the SDK will handle assigning credentials to their appropriate services.
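For illustration, a combined file might look roughly like the following; the actual variable names come from the files you download, and the values here are placeholders:
DISCOVERY_AUTH_TYPE=iam
DISCOVERY_APIKEY=<discovery-apikey>
DISCOVERY_URL=<discovery-url>
ASSISTANT_AUTH_TYPE=iam
ASSISTANT_APIKEY=<assistant-apikey>
ASSISTANT_URL=<assistant-url>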
Special note: due to legacy issues in Assistant V1 and V2 as well as Visual Recognition V3 and V4, the serviceName parameter must be added when creating the service object:
const AssistantV1 = require('ibm-watson/assistant/v1');
const assistant = new AssistantV1({
version: '2020-04-01',
serviceName: 'assistant',
});
const VisualRecognitionV3 = require('ibm-watson/visual-recognition/v3');
const visualRecognition = new VisualRecognitionV3({
version: '2018-03-19',
serviceName: 'visual-recognition',
});
It is worth noting that if you are planning to rely on VCAP_SERVICES for authentication, the serviceName parameter MUST be removed; otherwise VCAP_SERVICES will not be able to authenticate you (see the sketch below). See Cloud Authentication Prioritization for more details.
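For example, a service object that relies on VCAP_SERVICES might be constructed like this (a sketch; the version date is illustrative):
const AssistantV1 = require('ibm-watson/assistant/v1');

// No serviceName and no explicit authenticator: credentials are read from VCAP_SERVICES
const assistant = new AssistantV1({
  version: '2020-04-01',
});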
If you would like to configure the location/name of your credential file, you can set an environment variable called IBM_CREDENTIALS_FILE. This will take precedence over the locations specified above. Here's how you can do that:
export IBM_CREDENTIALS_FILE="<path>"
where <path> is something like /home/user/Downloads/<file_name>.env. If you just provide a path to a directory, the SDK will look for a file called ibm-credentials.env in that directory.
The SDK also supports setting credentials manually in your code, using an Authenticator.
Some services use token-based Identity and Access Management (IAM) authentication. IAM authentication uses a service API key to get an access token that is passed with the call. Access tokens are valid for approximately one hour and must be regenerated.
To use IAM authentication, you must use either an IamAuthenticator or a BearerTokenAuthenticator.
- Use the IamAuthenticator to have the SDK manage the lifecycle of the access token. The SDK requests an access token, ensures that the access token is valid, and refreshes it if necessary.
- Use the BearerTokenAuthenticator if you want to manage the lifecycle yourself (see the sketch after this list). For details, see Authenticating with IAM tokens. If you want to switch your authenticator, you must override the authenticator property directly.
To use the SDK in a Cloud Pak, use the CloudPakForDataAuthenticator. This will require a username, password, and URL.
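As a rough sketch of both options (the version date, placeholders, and the Cloud Pak URL below are illustrative):
const AssistantV1 = require('ibm-watson/assistant/v1');
const { BearerTokenAuthenticator, CloudPakForDataAuthenticator } = require('ibm-watson/auth');

// Manage the IAM access token yourself and hand it to the SDK
const assistantWithToken = new AssistantV1({
  version: '2020-04-01',
  authenticator: new BearerTokenAuthenticator({ bearerToken: '<your-access-token>' }),
});

// Authenticate against a Cloud Pak for Data cluster with a username, password, and URL
const assistantOnCp4d = new AssistantV1({
  version: '2020-04-01',
  authenticator: new CloudPakForDataAuthenticator({
    username: '<username>',
    password: '<password>',
    url: '<cp4d-token-service-url>', // the token-exchange URL for your cluster
  }),
  serviceUrl: '<service-url>',
});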
When uploading your application to IBM Cloud, there is a certain priority Watson services will use when looking for proper credentials. The order is as follows:
- Credentials set programmatically in your code (for example, an Authenticator passed to the constructor)
- Credentials read from a credentials file or from environment variables
- Credentials read from VCAP_SERVICES
You can set or reset the base URL after constructing the client instance using the setServiceUrl method:
const DiscoveryV1 = require('ibm-watson/discovery/v1');
const discovery = new DiscoveryV1({
/* authenticator, version, etc... */
});
discovery.setServiceUrl('<new url>');
All SDK methods are asynchronous, as they are making network requests to Watson services. To handle receiving the data from these requests, the SDK offers support with Promises.
const DiscoveryV1 = require('ibm-watson/discovery/v1');
const discovery = new DiscoveryV1({
/* authenticator, version, serviceUrl, etc... */
});
// using Promises
discovery.listEnvironments()
.then(body => {
console.log(JSON.stringify(body, null, 2));
})
.catch(err => {
console.log(err);
});
// using Promises provides the ability to use async / await
async function callDiscovery() { // note that callDiscovery also returns a Promise
const body = await discovery.listEnvironments();
}
Custom headers can be passed with any request. Each method has an optional headers parameter, which can be used to pass in custom headers; these can override the default headers the SDK sets.
For example, this is how you can pass custom headers to the Watson Assistant service. In this example, the 'custom' value for 'Accept-Language' will override the default 'Accept-Language' header, and 'Custom-Header', while not overriding any default header, will additionally be sent with the request.
const watson = require('ibm-watson');
const assistant = new watson.AssistantV1({
/* authenticator, version, serviceUrl, etc... */
});
assistant.message({
workspaceId: 'something',
input: {'text': 'Hello'},
headers: {
'Custom-Header': 'custom',
'Accept-Language': 'custom'
}
})
.then(response => {
console.log(JSON.stringify(response.result, null, 2));
})
.catch(err => {
console.log('error: ', err);
});
The SDK now returns the full HTTP response by default for each method.
Here is an example of how to access the response headers for Watson Assistant:
const assistant = new AssistantV1({
/* authenticator, version, serviceUrl, etc... */
});
assistant.message(params).then(
response => {
console.log(response.headers);
},
err => {
console.log(err);
/*
`err` is an Error object. It will always have a `message` field
and depending on the type of error, it may also have the following fields:
- body
- headers
- name
- code
*/
}
);
Every SDK call returns a response with a transaction ID in the X-Global-Transaction-Id header. Together with the service instance region, this ID helps support teams troubleshoot issues from relevant logs.
const assistant = new AssistantV1({
/* authenticator, version, serviceUrl, etc... */
});
assistant.message(params).then(
response => {
console.log(response.headers['x-global-transaction-id']); // response header keys are lower-cased
},
err => {
console.log(err);
}
);
const speechToText = new SpeechToTextV1({
/* authenticator, version, serviceUrl, etc... */
});
const recognizeStream = speechToText.recognizeUsingWebSocket(params);
// getTransactionId returns a Promise that resolves to the ID
recognizeStream.getTransactionId().then(
globalTransactionId => console.log(globalTransactionId),
err => console.log(err),
);
However, the transaction ID isn't available when the API doesn't return a response for some reason. In that case, you can set your own transaction ID in the request. For example, replace <my-unique-transaction-id> in the following example with a unique transaction ID.
const assistant = new AssistantV1({
/* authenticator, version, serviceUrl, etc... */
});
assistant.message({
workspaceId: 'something',
input: {'text': 'Hello'},
headers: {
'X-Global-Transaction-Id': '<my-unique-transaction-id>'
}
}).then(
response => {
console.log(response.headers['x-global-transaction-id']);
},
err => {
console.log(err);
}
);
By default, all requests are logged. This can be disabled by setting the X-Watson-Learning-Opt-Out header when creating the service instance:
const myInstance = new watson.WhateverServiceV1({
/* authenticator, version, serviceUrl, etc... */
headers: {
"X-Watson-Learning-Opt-Out": true
}
});
The SDK provides the user with full control over the HTTPS Agent used to make requests. This is available for both the service client and the authenticators that make network requests (e.g. IamAuthenticator). Outlined below are a couple of different scenarios where this capability is needed. Note that this functionality is for Node environments only; these configurations will have no effect in the browser.
To use the SDK (which makes HTTPS requests) behind an HTTP proxy, a special tunneling agent must be used. Use the package tunnel for this. Configure this agent with your proxy information, and pass it in as the HTTPS agent in the service constructor. Additionally, you must set proxy to false in the service constructor. If using an Authenticator that makes network requests (IAM or CP4D), you must set these fields in the Authenticator constructor as well.
See this example configuration:
const tunnel = require('tunnel');
const AssistantV1 = require('ibm-watson/assistant/v1');
const { IamAuthenticator } = require('ibm-watson/auth');
const httpsAgent = tunnel.httpsOverHttp({
proxy: {
host: 'some.host.org',
port: 1234,
},
});
const assistant = new AssistantV1({
authenticator: new IamAuthenticator({
apikey: 'fakekey-1234',
httpsAgent, // not necessary if using Basic or BearerToken authentication
proxy: false,
}),
version: '2020-01-28',
httpsAgent,
proxy: false,
});
To send custom certificates as a security measure in your request, use the cert, key, and/or ca properties of the HTTPS Agent. See this documentation for more information about the options. Note that the entire contents of the file must be provided, not just the file name.
const fs = require('fs');
const https = require('https');
const AssistantV1 = require('ibm-watson/assistant/v1');
const { IamAuthenticator } = require('ibm-watson/auth');
const certFile = fs.readFileSync('./my-cert.pem');
const keyFile = fs.readFileSync('./my-key.pem');
const assistant = new AssistantV1({
authenticator: new IamAuthenticator({
apikey: 'fakekey-1234',
httpsAgent: new https.Agent({
key: keyFile,
cert: certFile,
})
}),
version: '2019-02-28',
httpsAgent: new https.Agent({
key: keyFile,
cert: certFile,
}),
});
The HTTP client can be configured to disable SSL verification. Note that this has serious security implications - only do this if you really mean to! ⚠️
To do this, set disableSslVerification to true in the service constructor and/or authenticator constructor, like below:
const DiscoveryV1 = require('ibm-watson/discovery/v1');
const { IamAuthenticator } = require('ibm-watson/auth');
const discovery = new DiscoveryV1({
serviceUrl: '<service_url>',
version: '<version-date>',
authenticator: new IamAuthenticator({ apikey: '<apikey>', disableSslVerification: true }), // this will disable SSL verification for requests to the token endpoint
disableSslVerification: true, // this will disable SSL verification for any request made with this client instance
});
You can find links to the documentation at https://cloud.ibm.com/developer/watson/documentation. Find the service that you're interested in, click API reference, and then select the Node tab.
There are also auto-generated JSDocs available at http://watson-developer-cloud.github.io/node-sdk/master/
If you have issues with the APIs or have a question about the Watson services, see Stack Overflow.
Use the Assistant service to determine the intent of a message.
Note: You must first create a workspace via IBM Cloud. See the documentation for details.
const AssistantV2 = require('ibm-watson/assistant/v2');
const { IamAuthenticator } = require('ibm-watson/auth');
const assistant = new AssistantV2({
authenticator: new IamAuthenticator({ apikey: '<apikey>' }),
serviceUrl: 'https://api.us-south.assistant.watson.cloud.ibm.com',
version: '2018-09-19'
});
assistant.message(
{
input: { text: "What's the weather?" },
assistantId: '<assistant id>',
sessionId: '<session id>',
})
.then(response => {
console.log(JSON.stringify(response.result, null, 2));
})
.catch(err => {
console.log(err);
});
Use the Assistant service to determine the intent of a message.
Note: You must first create a workspace via IBM Cloud. See the documentation for details.
const AssistantV1 = require('ibm-watson/assistant/v1');
const { IamAuthenticator } = require('ibm-watson/auth');
const assistant = new AssistantV1({
authenticator: new IamAuthenticator({ apikey: '<apikey>' }),
serviceUrl: 'https://api.us-south.assistant.watson.cloud.ibm.com',
version: '2018-02-16'
});
assistant.message(
{
input: { text: "What's the weather?" },
workspaceId: '<workspace id>'
})
.then(response => {
console.log(JSON.stringify(response.result, null, 2));
})
.catch(err => {
console.log(err);
});
IBM Watson™ Compare and Comply is discontinued. Existing instances are supported until 30 November 2021, but as of 1 December 2020, you can't create instances. Any instance that exists on 30 November 2021 will be deleted. Consider migrating to Watson Discovery Premium on IBM Cloud for your Compare and Comply use cases. To start the migration process, visit https://ibm.biz/contact-wdc-premium.
Use the Compare Comply service to compare and classify documents.
const fs = require('fs');
const CompareComplyV1 = require('ibm-watson/compare-comply/v1');
const { IamAuthenticator } = require('ibm-watson/auth');
const compareComply = new CompareComplyV1({
authenticator: new IamAuthenticator({ apikey: '<apikey>' }),
serviceUrl: 'https://api.us-south.compare-comply.watson.cloud.ibm.com',
version: '2018-12-06'
});
compareComply.compareDocuments(
{
file1: fs.createReadStream('<path-to-file-1>'),
file1Filename: '<filename-1>',
file1Label: 'file-1',
file2: fs.createReadStream('<path-to-file-2>'),
file2Filename: '<filename-2>',
file2Label: 'file-2',
})
.then(response => {
console.log(JSON.stringify(response.result, null, 2));
})
.catch(err => {
console.log(err);
});
Use the Discovery Service to search and analyze structured and unstructured data.
const DiscoveryV2 = require('ibm-watson/discovery/v2');
const { IamAuthenticator } = require('ibm-watson/auth');
const discovery = new DiscoveryV2({
authenticator: new IamAuthenticator({ apikey: '<apikey>' }),
serviceUrl: 'https://api.us-south.discovery.watson.cloud.ibm.com',
version: '2019-11-22'
});
discovery.query(
{
projectId: '<project_id>',
collectionId: '<collection_id>',
query: 'my_query'
})
.then(response => {
console.log(JSON.stringify(response.result, null, 2));
})
.catch(err => {
console.log(err);
});
Use the Discovery Service to search and analyze structured and unstructured data.
const DiscoveryV1 = require('ibm-watson/discovery/v1');
const { IamAuthenticator } = require('ibm-watson/auth');
const discovery = new DiscoveryV1({
authenticator: new IamAuthenticator({ apikey: '<apikey>' }),
serviceUrl: 'https://api.us-south.discovery.watson.cloud.ibm.com',
version: '2017-09-01'
});
discovery.query(
{
environmentId: '<environment_id>',
collectionId: '<collection_id>',
query: 'my_query'
})
.then(response => {
console.log(JSON.stringify(response.result, null, 2));
})
.catch(err => {
console.log(err);
});
Translate text from one language to another or identify a language using the Language Translator service.
const LanguageTranslatorV3 = require('ibm-watson/language-translator/v3');
const { IamAuthenticator } = require('ibm-watson/auth');
const languageTranslator = new LanguageTranslatorV3({
authenticator: new IamAuthenticator({ apikey: '<apikey>' }),
serviceUrl: 'https://api.us-south.language-translator.watson.cloud.ibm.com',
version: 'YYYY-MM-DD',
});
languageTranslator.translate(
{
text: 'A sentence must have a verb',
source: 'en',
target: 'es'
})
.then(response => {
console.log(JSON.stringify(response.result, null, 2));
})
.catch(err => {
console.log('error: ', err);
});
languageTranslator.identify(
{
text:
'The language translator service takes text input and identifies the language used.'
})
.then(response => {
console.log(JSON.stringify(response.result, null, 2));
})
.catch(err => {
console.log('error: ', err);
});
Use the Natural Language Classifier service to create a classifier instance by providing a set of representative strings and a set of one or more correct classes for each as training. Then use the trained classifier to classify your new question for the best matching answers or to retrieve next actions for your application.
const NaturalLanguageClassifierV1 = require('ibm-watson/natural-language-classifier/v1');
const { IamAuthenticator } = require('ibm-watson/auth');
const classifier = new NaturalLanguageClassifierV1({
authenticator: new IamAuthenticator({ apikey: '<apikey>' }),
serviceUrl: 'https://api.us-south.natural-language-classifier.watson.cloud.ibm.com'
});
classifier.classify(
{
text: 'Is it sunny?',
classifierId: '<classifier-id>'
})
.then(response => {
console.log(JSON.stringify(response.result, null, 2));
})
.catch(err => {
console.log('error: ', err);
});
See this example to learn how to create a classifier.
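As a rough sketch, creating a classifier looks something like this (file paths are placeholders, and the training metadata and CSV must follow the format described in the service documentation):
const fs = require('fs');
const NaturalLanguageClassifierV1 = require('ibm-watson/natural-language-classifier/v1');
const { IamAuthenticator } = require('ibm-watson/auth');

const classifier = new NaturalLanguageClassifierV1({
  authenticator: new IamAuthenticator({ apikey: '<apikey>' }),
  serviceUrl: 'https://api.us-south.natural-language-classifier.watson.cloud.ibm.com'
});

classifier.createClassifier(
  {
    trainingMetadata: fs.createReadStream('<path-to-metadata-json>'), // e.g. {"language": "en", "name": "My Classifier"}
    trainingData: fs.createReadStream('<path-to-training-csv>'), // rows of "text,class"
  })
  .then(response => {
    console.log(JSON.stringify(response.result, null, 2));
  })
  .catch(err => {
    console.log('error: ', err);
  });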
Natural Language Understanding is a collection of natural language processing APIs that help you understand sentiment, keywords, entities, high-level concepts and more.
const fs = require('fs');
// read in the HTML to analyze (placeholder path)
const file_data = fs.readFileSync('<path-to-html-file>');
const NaturalLanguageUnderstandingV1 = require('ibm-watson/natural-language-understanding/v1');
const { IamAuthenticator } = require('ibm-watson/auth');
const nlu = new NaturalLanguageUnderstandingV1({
authenticator: new IamAuthenticator({ apikey: '<apikey>' }),
version: '2018-04-05',
serviceUrl: 'https://api.us-south.natural-language-understanding.watson.cloud.ibm.com'
});
nlu.analyze(
{
html: file_data, // Buffer or String
features: {
concepts: {},
keywords: {}
}
})
.then(response => {
console.log(JSON.stringify(response.result, null, 2));
})
.catch(err => {
console.log('error: ', err);
});
On 1 December 2021, Personality Insights will no longer be available. Consider migrating to Watson Natural Language Understanding. For more information, see https://github.com/watson-developer-cloud/node-sdk/tree/master#personality-insights-deprecation
Analyze text in English and get a personality profile by using the Personality Insights service.
const PersonalityInsightsV3 = require('ibm-watson/personality-insights/v3');
const { IamAuthenticator } = require('ibm-watson/auth');
const personalityInsights = new PersonalityInsightsV3({
authenticator: new IamAuthenticator({ apikey: '<apikey>' }),
version: '2016-10-19',
serviceUrl: 'https://api.us-south.personality-insights.watson.cloud.ibm.com'
});
personalityInsights.profile(
{
content: 'Enter more than 100 unique words here...',
contentType: 'text/plain',
consumptionPreferences: true
})
.then(response => {
console.log(JSON.stringify(response.result, null, 2));
})
.catch(err => {
console.log('error: ', err);
});
Use the Speech to Text service to recognize the text from a .wav file.
const fs = require('fs');
const SpeechToTextV1 = require('ibm-watson/speech-to-text/v1');
const { IamAuthenticator } = require('ibm-watson/auth');
const speechToText = new SpeechToTextV1({
authenticator: new IamAuthenticator({ apikey: '<apikey>' }),
serviceUrl: 'https://api.us-south.speech-to-text.watson.cloud.ibm.com'
});
const params = {
// From file
audio: fs.createReadStream('./resources/speech.wav'),
contentType: 'audio/l16; rate=44100'
};
speechToText.recognize(params)
.then(response => {
console.log(JSON.stringify(response.result, null, 2));
})
.catch(err => {
console.log(err);
});
// or streaming
fs.createReadStream('./resources/speech.wav')
.pipe(speechToText.recognizeUsingWebSocket({ contentType: 'audio/l16; rate=44100' }))
.pipe(fs.createWriteStream('./transcription.txt'));
Use the Text to Speech service to synthesize text into an audio file.
const fs = require('fs');
const TextToSpeechV1 = require('ibm-watson/text-to-speech/v1');
const { IamAuthenticator } = require('ibm-watson/auth');
const textToSpeech = new TextToSpeechV1({
authenticator: new IamAuthenticator({ apikey: '<apikey>' }),
serviceUrl: 'https://api.us-south.text-to-speech.watson.cloud.ibm.com'
});
const params = {
text: 'Hello from IBM Watson',
voice: 'en-US_AllisonVoice', // Optional voice
accept: 'audio/wav'
};
// Synthesize speech, correct the wav header, then save to disk
// (wav header requires a file length, but this is unknown until after the header is already generated and sent)
// note that `repairWavHeaderStream` will read the whole stream into memory in order to process it.
// the method returns a Promise that resolves with the repaired buffer
textToSpeech
.synthesize(params)
.then(response => {
const audio = response.result;
return textToSpeech.repairWavHeaderStream(audio);
})
.then(repairedFile => {
fs.writeFileSync('audio.wav', repairedFile);
console.log('audio.wav written with a corrected wav header');
})
.catch(err => {
console.log(err);
});
// or, using WebSockets
const synthStream = textToSpeech.synthesizeUsingWebSocket(params);
synthStream.pipe(fs.createWriteStream('./audio.ogg'));
// see more information in examples/text_to_speech_websocket.js
Use the Tone Analyzer service to analyze the emotion, writing and social tones of a text.
const ToneAnalyzerV3 = require('ibm-watson/tone-analyzer/v3');
const { IamAuthenticator } = require('ibm-watson/auth');
const toneAnalyzer = new ToneAnalyzerV3({
authenticator: new IamAuthenticator({ apikey: '<apikey>' }),
version: '2016-05-19',
serviceUrl: 'https://api.us-south.tone-analyzer.watson.cloud.ibm.com'
});
toneAnalyzer.tone(
{
toneInput: 'Greetings from Watson Developer Cloud!',
contentType: 'text/plain'
})
.then(response => {
console.log(JSON.stringify(response.result, null, 2));
})
.catch(err => {
console.log(err);
});
IBM Watson™ Visual Recognition is discontinued. Existing instances are supported until 1 December 2021, but as of 7 January 2021, you can't create new instances. Any instance that still exists on 1 December 2021 will be deleted.
Use the Visual Recognition service to detect objects in an image.
const VisualRecognitionV4 = require('ibm-watson/visual-recognition/v4');
const { IamAuthenticator } = require('ibm-watson/auth');
const visualRecognition = new VisualRecognitionV4({
serviceUrl: 'https://api.us-south.visual-recognition.watson.cloud.ibm.com',
version: '2019-02-11',
authenticator: new IamAuthenticator({ apikey: '<apikey>' }),
});
const params = {
imageUrl: ['<image-url>'], // analysis requires image files and/or image URLs
collectionIds: ['<collectionId1>', '<collectionId2>', '<collectionId3>'],
features: ['objects']
};
visualRecognition.analyze(params)
.then(response => {
const image = response.result.images[0]
const detectedObjects = image.objects.collections[0].objects
console.log(detectedObjects)
})
.catch(err => {
console.log(err);
});
Use the Visual Recognition service to classify an image (for example, ./resources/car.png).
const fs = require('fs');
const VisualRecognitionV3 = require('ibm-watson/visual-recognition/v3');
const { IamAuthenticator } = require('ibm-watson/auth');
const visualRecognition = new VisualRecognitionV3({
serviceUrl: 'https://api.us-south.visual-recognition.watson.cloud.ibm.com',
version: '2018-03-19',
authenticator: new IamAuthenticator({ apikey: '<apikey>' }),
});
const params = {
imagesFile: fs.createReadStream('./resources/car.png')
};
visualRecognition.classify(params)
.then(response => {
console.log(JSON.stringify(response.result, null, 2));
})
.catch(err => {
console.log(err);
});
The SDK always expects an authenticator to be passed in. To make an unauthenticated request, use the NoAuthAuthenticator.
const watson = require('ibm-watson');
const { NoAuthAuthenticator } = require('ibm-watson/auth');
const assistant = new watson.AssistantV1({
authenticator: new NoAuthAuthenticator(),
});
This library relies on the axios npm module to call the Watson services. To debug your app, add 'axios' to the NODE_DEBUG environment variable:
$ NODE_DEBUG='axios' node app.js
where app.js is your Node.js file.
Running all the tests:
$ npm test
Running a specific test:
$ jest '<path to test>'
Find more open source projects on the IBM Github Page.
See CONTRIBUTING.
We love to highlight cool open-source projects that use this SDK! If you'd like to get your project added to the list, feel free to make an issue linking us to it.
This library is licensed under Apache 2.0. Full license text is available in COPYING.