Hello friends! With the growing demand for smarter technology, Apple has embraced machine learning with frameworks that support features like face tracking, face detection, landmark detection, text detection, object tracking, image registration, and much more. So, let’s take a ride through how to build a machine learning app; we hope it will be very helpful for iOS developers.

To analyze images, objects, and more, Apple provides ready-to-use models such as MobileNet, SqueezeNet, Places205-GoogLeNet, Inception v3, and VGG16.

On the developer’s side, one can also create custom trained models using tools like Turi Create and IBM Watson Services. Let’s focus on IBM Watson Services, which Apple and IBM introduced on March 20, 2018.

With IBM Watson Services, one can quickly analyze images and get a dynamic, accurate, and deep understanding of faces, colors, food, and more. One can even train one’s own custom models.

1. First, log in to IBM Watson Studio (make sure you have registered for IBM Cloud first):

https://dataplatform.ibm.com/registration/stepone?target=watson_vision_combined&context=wdp&apps=watson_studio&cm_sp=WatsonPlatform-WatsonPlatform-_-OnPageNavCTA-IBMWatson_VisualRecognition-_-CoreMLGithub

2. Create a New Project

3. Name your project. You’ll then see the new project screen.

Select your storage service. If you have a Lite account, an object-storage instance will be created for you, and then you will be able to move further.

4. After the object storage is created, you will see a dashboard where you can upload .zip files under the Assets tab. Upload a minimum of 10 photos via the data panel of the ‘Find and add data’ tab in the Load section.

The .zip file will then be stored in object storage and shown under Data Assets.

5. After uploading the data assets, create a model, name it, and then click Train Model.

6. Copy the Model ID of the classifier from the Visual Recognition instance overview page in Watson Studio. Click the Credentials tab, then click View credentials, and copy the api_key of the service. (The API key is a token used to call the Watson HTTP APIs; API keys are assigned roles that grant them authorization to call certain sets of APIs. You can also find the API key by going to https://console.ng.bluemix.net/dashboard/services, clicking the Cloud Object Storage service, clicking Service credentials in the left pane, and then clicking View credentials in the Actions column of the Service credentials table. Copy the value of api_key.)

7. Add those keys to your Xcode project, in the place where we dismiss the picker view and create the service object.
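As a rough sketch (assuming the Watson Swift SDK’s VisualRecognitionV3 framework, which IBM shipped alongside the Core ML announcement; initializer and method signatures vary between SDK versions), the keys from step 6 might be wired up like this:

```swift
import VisualRecognitionV3

// Placeholders for the values copied from Watson Studio in step 6.
let apiKey = "YOUR_API_KEY"
let classifierID = "YOUR_MODEL_ID"
let version = "2018-03-19"  // API version date

// Create the Visual Recognition service object.
let visualRecognition = VisualRecognition(apiKey: apiKey, version: version)

// Download (or refresh) the local Core ML copy of the custom classifier.
visualRecognition.updateLocalModel(classifierID: classifierID)
```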

8. After that, to classify an image, create a handler and a VNCoreMLRequest object, which uses the Core ML model to process the image, as sketched below.
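Here is a minimal sketch using Apple’s Vision framework; `WatsonClassifier` is a stand-in name for whatever class Xcode generates from your downloaded .mlmodel file:

```swift
import UIKit
import CoreML
import Vision

// `WatsonClassifier` is a hypothetical name for the Xcode-generated model class.
func classify(image: UIImage) {
    guard let ciImage = CIImage(image: image),
          let model = try? VNCoreMLModel(for: WatsonClassifier().model) else { return }

    // The request runs the Core ML model and returns classification observations.
    let request = VNCoreMLRequest(model: model) { request, _ in
        guard let top = (request.results as? [VNClassificationObservation])?.first else { return }
        print("Classified as \(top.identifier) with confidence \(top.confidence)")
    }

    // The handler feeds the picked image through the request, off the main thread.
    let handler = VNImageRequestHandler(ciImage: ciImage)
    DispatchQueue.global(qos: .userInitiated).async {
        try? handler.perform([request])
    }
}
```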

9. Call that handler function from another function, passing in the image picked via UIImagePickerController, for example from the picker’s delegate method:
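A sketch of that delegate method (assuming your view controller adopts UIImagePickerControllerDelegate and UINavigationControllerDelegate, using the modern InfoKey API):

```swift
// Dismiss the picker, then hand the chosen image to classify(image:) from step 8.
func imagePickerController(_ picker: UIImagePickerController,
                           didFinishPickingMediaWithInfo info: [UIImagePickerController.InfoKey: Any]) {
    picker.dismiss(animated: true)
    if let image = info[.originalImage] as? UIImage {
        classify(image: image)
    }
}
```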

10. Add a label on which you want to display the message for the classified image.
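For example, with a hypothetical classificationLabel outlet, you can update it on the main queue from inside the VNCoreMLRequest completion handler shown in step 8:

```swift
@IBOutlet weak var classificationLabel: UILabel!

// Inside the VNCoreMLRequest completion handler, replacing the print statement:
DispatchQueue.main.async {
    self.classificationLabel.text = "This looks like: \(top.identifier)"
}
```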

Here is the result: the app displays the picked image along with its classification.

That’s it! Enjoy building your own apps with custom machine learning models.

Conclusion

IBM Watson Services provides not only Visual Recognition but also many other services, such as Natural Language Processing, Text to Speech, Tone Analyzer, and more.

How can Let’s Nurture help with building Machine Learning applications?

We at Let’s Nurture Infotech provide custom iOS mobile app development, Android app development, website design and development, IoT-based solutions, and more software solutions to our clients globally. We have delivered more than 200 iPhone app development projects thanks to our expertise and our adoption of the latest technologies and skill sets.

If you want to know more about custom mobile app development, or have an idea you want to implement for your business, get a FREE consultation now.

We would be happy to help you!

Want to work with us? We're hiring!