At the beginning of the post, we fabricated synthetic data of the form y = sin(πx) + ε, where ε is normally distributed noise generated by calling np.random.normal(). To always obtain the same ε (and hence the same data and results) across multiple runs, one should call np.random.seed(42) before generating ε, in the same code snippet. I have added it to the original post as well to support reproducibility.
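A minimal sketch of this setup; the input range, sample size, and noise scale are assumptions, since the post does not pin them down here:

```python
import numpy as np

# Fix the seed BEFORE generating the noise, so epsilon (and thus y)
# is identical on every run.
np.random.seed(42)

n = 100                                   # assumed sample size
x = np.random.uniform(0, 1, n)            # assumed input range [0, 1]
epsilon = np.random.normal(0, 0.1, n)     # assumed noise scale 0.1
y = np.sin(np.pi * x) + epsilon
```

Re-running the snippet from the seed call onward reproduces the exact same x, epsilon, and y arrays.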
Note: the split into train and test sets may also lead to inconsistent outcomes between different runs. For this reason, we specified a random_state in the sklearn.model_selection.train_test_split function.
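A short illustration of pinning the split; the toy data and test_size here are assumptions for demonstration only:

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Hypothetical toy data, just to demonstrate the split.
X = np.arange(20).reshape(-1, 1)
y = np.arange(20)

# random_state fixes the shuffle, so the same rows land in the
# train and test sets on every run.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=42
)
```

Without random_state, train_test_split shuffles differently on each call, so metrics computed on the test set can vary from run to run.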