Algorithms play an ever larger role in our daily lives, whether as decision-support tools (recommendation or scoring algorithms) or as autonomous systems embedded in intelligent machines such as self-driving vehicles. Deployed across many sectors and industries for their efficiency, their results are increasingly discussed and disputed. In particular, they are accused of being black boxes and of leading to discriminatory practices based on gender or ethnic origin. This article describes the biases that can affect such algorithms and outlines ways to address them. We are particularly interested in how algorithmic results measure up against fairness objectives, and in their consequences in terms of discrimination. Three questions motivate this article: Through which mechanisms can algorithmic biases arise? Can they be avoided? And, finally, can they be corrected or at least limited? In the first part, we describe how a statistical learning algorithm works. In the second part, we examine the origins of these biases, which can be cognitive, statistical, or economic in nature. In the third part, we present some promising statistical or algorithmic approaches to correcting biases. We conclude by discussing the main societal issues raised by statistical learning algorithms, such as interpretability, explainability, transparency, and accountability.