Choose the best-fitting model based on error estimates

Image by author.

This post aims to present the bias-variance trade-off through a practical example in Python.

The bias-variance trade-off refers to the balance between two competing properties of machine learning models.

The goal of supervised machine learning problems is to find the mathematical representation (f) that explains the relationship between input predictors (x) and an observed outcome (y):

y = f(x) + ε

where ε denotes the irreducible error.
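As a minimal sketch of the idea (assuming scikit-learn and NumPy; this is not necessarily the full post's own example), one can fit polynomial models of increasing degree to noisy data and compare training and test error to see the trade-off in action:

```python
# Sketch: bias-variance trade-off via polynomial fits of increasing complexity.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(42)
x = rng.uniform(0, 1, 200).reshape(-1, 1)
y = np.sin(2 * np.pi * x).ravel() + rng.normal(0, 0.3, 200)  # y = f(x) + noise

x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.3, random_state=0)

for degree in (1, 3, 15):
    model = make_pipeline(PolynomialFeatures(degree), LinearRegression())
    model.fit(x_train, y_train)
    train_mse = mean_squared_error(y_train, model.predict(x_train))
    test_mse = mean_squared_error(y_test, model.predict(x_test))
    # Low degree: high bias (underfits); high degree: high variance (overfits).
    print(f"degree={degree:2d}  train MSE={train_mse:.3f}  test MSE={test_mse:.3f}")
```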

And how the improved performance of IBM Watson Assistant tackles the most common pain points in the adoption of a conversational solution

Photo by Alex Knight on Unsplash

Introduction

Today, virtual assistants (or chatbots) represent a globally recognised booming trend:

With regards to chatbots, which are in many ways the most recognisable form of AI, 80% of sales and marketing leaders say they already use these in their CX or plan to do so by 2020¹.

25 percent of customer service operations will use virtual customer assistants by 2020².

By 2025, customer service organizations that embed AI in their multichannel customer engagement platform will elevate operational efficiency by 25%³.

It is clear that chatbots are here to stay.

Their limitless potential in terms of business versatility — is there…


And serving it as a REST API

An example of spell-checking functionality from Google.

Introduction

This post describes how to build a simple multilingual spell-checker service in Python.

Spell-checkers are a common utility in a variety of everyday software, imbuing search engines, text messaging applications and virtual assistants with unparalleled support for the end user.

In this post, we leverage handy APIs and write a wrapper around them, without diving into the inner mechanisms of the spell-checking process.

An insightful explanation of such mechanisms can be found, for example, in this post by Peter Norvig, which also inspired implementations and libraries such as pyspellchecker.
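Purely as a rough illustration of this kind of wrapper (a sketch that assumes the pyspellchecker and Flask packages, with a hypothetical /spellcheck endpoint; it is not the post's actual implementation), a minimal service could look like this:

```python
# Sketch: a tiny multilingual spell-checking service exposed as a REST API.
from flask import Flask, request, jsonify
from spellchecker import SpellChecker

app = Flask(__name__)

@app.route("/spellcheck", methods=["POST"])
def spellcheck():
    payload = request.get_json()
    language = payload.get("language", "en")   # e.g. "en", "es", "fr", "de"
    words = payload.get("text", "").split()

    spell = SpellChecker(language=language)
    misspelled = spell.unknown(words)
    # Map each unrecognised word to its most likely correction.
    corrections = {word: spell.correction(word) for word in misspelled}
    return jsonify(corrections)

if __name__ == "__main__":
    app.run(port=5000)
```

A POST to this hypothetical endpoint with a JSON body such as {"text": "somethng went wrnog", "language": "en"} would then return the suggested corrections for the misspelled words.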

Approach

Given an input sentence…


How to select the number of principal components and apply PCA to new observations

Photo by Volodymyr Hryshchenko on Unsplash

Introduction

Principal Component Analysis (PCA) is an unsupervised technique for dimensionality reduction.

What is dimensionality reduction?

Let us start with an example. In a tabular data set, each column represents a feature, or dimension. A tabular data set with many columns/features is notoriously difficult to manipulate, especially when there are more columns than observations.

Given a linearly modelable problem with p = 40 features, the best subset approach would fit about a trillion (2^p - 1) possible models and submodels, making their computation extremely onerous.

How does PCA come to aid?

PCA…
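As a hedged illustration of the workflow named in the title (assuming scikit-learn, which this excerpt does not confirm the full post uses), one could select the number of components from the cumulative explained variance and then project new observations with the fitted model:

```python
# Sketch: choose the number of principal components by cumulative explained
# variance, then apply the fitted PCA to new observations.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 40))        # 100 observations, p = 40 features

# Standardize first: PCA is sensitive to the scale of the features.
scaler = StandardScaler().fit(X)
X_scaled = scaler.transform(X)

pca = PCA().fit(X_scaled)
cumulative = np.cumsum(pca.explained_variance_ratio_)
n_components = int(np.argmax(cumulative >= 0.90) + 1)   # smallest k explaining 90% of variance
print(f"Components retained: {n_components}")

# Refit with the chosen number of components and transform new observations.
pca = PCA(n_components=n_components).fit(X_scaled)
X_new = rng.normal(size=(5, 40))
X_new_scores = pca.transform(scaler.transform(X_new))
print(X_new_scores.shape)             # (5, n_components)
```

The 90% variance threshold is a common rule of thumb; inspecting a scree plot of explained_variance_ratio_ is another frequent way to pick the cut-off.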

Nicolo Cosimo Albanese

Data Scientist & philomath.
