We present a self-supervised pre-training scheme for single image denoising based on a novel pretext task. Our work is inspired by the success of self-supervised learning (SSL) methods in transfer learning, which have proven highly effective for pre-training models that are subsequently fine-tuned on small datasets. As a pretext task, we propose to train a denoising network on patches of the downsampled input image, which we treat as pseudo-clean image patches, together with an adaptive noise estimator that learns the specific noise distribution of the input image. By carrying out the pre-training on the single input image, rather than on a separate dataset, we avoid the well-known noise distribution gap between images in the training dataset and the single input image seen at test time. We evaluate our SSL method for single image denoising through extensive experiments on both synthetic and real-world noisy image datasets. By transferring our pre-training to IDR, we achieve state-of-the-art (SotA) results compared to existing unsupervised denoising methods, showing that SSL pre-training is also a promising framework for image denoising.
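The pretext task can be illustrated with a minimal sketch. The snippet below is an illustrative assumption of the described pipeline, not the paper's exact formulation: all function names (`extract_patches`, `pretrain_on_single_image`), the patch size, the averaging-based noise estimate standing in for the adaptive noise estimator, and the Gaussian re-noising model are hypothetical choices for exposition. It shows the core idea of downsampling the noisy input to obtain pseudo-clean patches, re-noising them at an estimated level, and pre-training a denoiser on that single image.

```python
import torch
import torch.nn.functional as F

def extract_patches(img, patch_size=32, stride=32):
    # img: (1, C, H, W) -> (N, C, patch_size, patch_size)
    patches = img.unfold(2, patch_size, stride).unfold(3, patch_size, stride)
    c = img.shape[1]
    return patches.permute(0, 2, 3, 1, 4, 5).reshape(-1, c, patch_size, patch_size)

def pretrain_on_single_image(denoiser, noisy_img, steps=1000, lr=1e-4):
    # Downsample the noisy input: under (roughly) i.i.d. noise, averaging
    # pixels shrinks the noise variance, so we treat the result as pseudo-clean.
    pseudo_clean = F.avg_pool2d(noisy_img, kernel_size=2)
    patches = extract_patches(pseudo_clean)

    # Crude global noise-level estimate (a hypothetical stand-in for the
    # paper's adaptive noise estimator): residual between the image and a
    # locally smoothed copy of it.
    sigma = (noisy_img - F.avg_pool2d(noisy_img, 3, 1, 1)).std()

    opt = torch.optim.Adam(denoiser.parameters(), lr=lr)
    for _ in range(steps):
        idx = torch.randint(0, patches.shape[0], (16,))
        clean = patches[idx]
        # Re-noise pseudo-clean patches at the estimated level (Gaussian
        # model assumed here purely for illustration).
        noisy = clean + sigma * torch.randn_like(clean)
        loss = F.mse_loss(denoiser(noisy), clean)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return denoiser
```

In this sketch, pre-training never touches an external dataset: both the pseudo-clean targets and the noise statistics come from the single test-time input, which is what avoids the train/test noise distribution gap described above.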