Per-fragment lighting

I am reading the Learning Modern 3D Graphics Programming tutorial and I have finished reading this page:
http://www.arcsynthesis.org/gltut/Illumination/Tut10%20Fragment%20Lighting.html

Essentially, the section demonstrates the following technique for per-fragment lighting with a point light:

CPU:
Transforms the world-space position of the point light into model space, and sends it to the fragment shader as a uniform variable.

Vertex shader:
Forwards the model-space position (a vertex attribute) to the fragment shader.
Forwards the vertex normal (a vertex attribute) to the fragment shader.
Declares those outputs with the implied smooth interpolation qualifier, so the rasterizer interpolates them across each triangle (nothing fancy here, just “out vec3 modelSpacePosition;”).
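Taken together, the vertex-shader steps above might look like this in GLSL. This is a sketch, not the tutorial's exact code; the attribute locations and the uniform name are my own assumptions.

```glsl
#version 330

layout(location = 0) in vec3 position;  // model-space position attribute
layout(location = 2) in vec3 normal;    // model-space normal attribute

// "smooth" is the default interpolation qualifier; each fragment
// receives a linearly interpolated value of these outputs.
smooth out vec3 modelSpacePosition;
smooth out vec3 vertexNormal;

uniform mat4 modelToClipMatrix;  // hypothetical name for the full transform

void main()
{
    gl_Position = modelToClipMatrix * vec4(position, 1.0);
    modelSpacePosition = position;  // forward model-space coordinates
    vertexNormal = normal;          // forward the vertex normal
}
```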

Fragment shader:
Uses the interpolated model-space coordinate and the model-space light position to get a “direction to light” vector.
Does diffuse lighting with that vector and the interpolated vertex normal. (To be specific: dots that vector with the normal to get the cosine of the angle of incidence between the normal and the light direction, and multiplies that by the diffuse color.)
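The fragment-shader steps above could be sketched like this; again the uniform names are assumptions, and the clamp keeps surfaces facing away from the light black rather than negatively lit:

```glsl
#version 330

smooth in vec3 modelSpacePosition;
smooth in vec3 vertexNormal;

uniform vec3 modelSpaceLightPos;  // uploaded by the CPU each frame
uniform vec4 diffuseColor;
uniform vec4 lightIntensity;

out vec4 outputColor;

void main()
{
    // Direction from this fragment to the light, in model space.
    vec3 dirToLight = normalize(modelSpaceLightPos - modelSpacePosition);

    // Cosine of the angle of incidence; interpolation can denormalize
    // the normal, so renormalize it before the dot product.
    float cosAngIncidence =
        clamp(dot(normalize(vertexNormal), dirToLight), 0.0, 1.0);

    outputColor = diffuseColor * lightIntensity * cosAngIncidence;
}
```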

But if you go to the section titled “Gradient Matters”, apparently this technique can reveal the triangles of the underlying mesh. You can see it here: http://www.arcsynthesis.org/gltut/Illumination/Cylinder%20Close%20Light.png
There are slightly noticeable dark lines running up and down the cylinder. The reason this happens is explained in the link.

Personally, I find these boundaries that “pop out” at you really subtle and unimportant, but I would still like to know how to get rid of them. The author mentions the correct solution in the quote below, but I am not exactly sure what that solution specifically is. How do I “encode the normal for a surface at many points, rather than simply interpolating vertex normals”?

Just as a color texture encodes a color value for many points on a surface, a normal map does the same for normals. You might want to look up “normal mapping”.
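As a rough illustration, a fragment shader using a tangent-space normal map could look like the sketch below. This assumes the vertex shader builds and forwards a tangent-to-model basis (the TBN matrix) and texture coordinates; all names here are hypothetical.

```glsl
#version 330

smooth in vec3 modelSpacePosition;
smooth in vec2 texCoord;
smooth in mat3 tangentToModel;  // TBN basis built per-vertex, interpolated

uniform sampler2D normalMap;
uniform vec3 modelSpaceLightPos;
uniform vec4 diffuseColor;

out vec4 outputColor;

void main()
{
    // Unpack the [0,1] texel values into a [-1,1] tangent-space normal,
    // then rotate it into model space and renormalize.
    vec3 tangentNormal = texture(normalMap, texCoord).xyz * 2.0 - 1.0;
    vec3 normal = normalize(tangentToModel * tangentNormal);

    vec3 dirToLight = normalize(modelSpaceLightPos - modelSpacePosition);
    float cosAngIncidence = clamp(dot(normal, dirToLight), 0.0, 1.0);

    outputColor = diffuseColor * cosAngIncidence;
}
```

Because the normal now comes from a texture sampled per fragment rather than from linear interpolation of vertex normals, it can vary arbitrarily within a triangle, which removes the gradient discontinuities at triangle edges.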