The trend of acquiring equipment and obtaining high-resolution remotely sensed images with Unmanned Aerial Vehicles (UAVs) has been followed by sugarcane producers in Brazil, given its low cost. Images taken over fields have been used to retrieve information such as Digital Terrain Models (DTMs), derived from stereoscopy of overlapping images, and the spatial variability of biomass. In sugarcane production, driving deviations occur during planting due to manual steering inaccuracy, sideways sliding of machines on terrain slopes, side offset between planter and tractor along curves, and GNSS errors. Given the accuracy with which vegetation can be identified in such images, stakeholders in the sugarcane sector have requested the identification and extraction of vectorized lines along the plant rows. These lines must be accurate enough to serve as guidance references, keeping machines confined to tracks and avoiding damage to adjacent rows (strict Controlled Traffic Farming, CTF). In addition, it is important to retrieve gaps along the rows where plants are absent, in order to estimate yield impacts and identify possible interventions (re-planting). A methodology was developed and implemented to: (1) compute an approximate vegetation index (VI) from an RGB image and apply a filter to identify pixel values at the centers of the plant rows; (2) reconstruct lines (straight and curved) through a procedure that builds line segments along points, supporting multiple parallel lines and adjustable offsets; (3) obtain a local thresholding classification from the VI values and fragment the lines according to the classified pixel values. The methodology was implemented with free software, through three small applications developed on an open-source programming platform.
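The abstract does not name the specific vegetation index or row-center filter used in the applications. The sketch below is only a minimal illustration of step (1) under two assumptions: that an Excess Green (ExG) style index is an acceptable stand-in for the "approximate vegetation index" computable from RGB alone, and that a simple per-column peak search over the smoothed index is one plausible filter for locating plant-row centers. The function names, parameters, and synthetic test image are hypothetical.

```python
# Minimal sketch of step (1): RGB-only vegetation index plus a row-center filter.
# ExG and the per-column peak filter are illustrative assumptions, not the
# authors' documented choices.
import numpy as np
from scipy.ndimage import gaussian_filter1d  # assumed smoothing choice


def excess_green(rgb: np.ndarray) -> np.ndarray:
    """Compute ExG = 2g - r - b from chromatic (normalized) RGB channels."""
    rgb = rgb.astype(np.float64)
    total = rgb.sum(axis=2) + 1e-9                 # avoid division by zero
    r, g, b = (rgb[..., i] / total for i in range(3))
    return 2.0 * g - r - b


def row_center_candidates(vi: np.ndarray, sigma: float = 3.0):
    """Return (row, col) pixels where the smoothed VI peaks in each column.

    Assumes rows run roughly horizontally in the image; a real pipeline would
    account for arbitrary row orientation and curvature.
    """
    smoothed = gaussian_filter1d(vi, sigma=sigma, axis=0)
    peak_rows = smoothed.argmax(axis=0)
    return [(int(row), col) for col, row in enumerate(peak_rows)]


if __name__ == "__main__":
    # Synthetic 100x100 RGB image with a greener horizontal band standing in
    # for a plant row; a real run would load a UAV orthomosaic instead.
    img = np.full((100, 100, 3), 80, dtype=np.uint8)
    img[45:55, :, 1] = 180                         # boost green in the "row"
    vi = excess_green(img)
    centers = row_center_candidates(vi)
    print(centers[:5])                             # candidate row-center pixels
```

The candidate points produced this way would then feed the line-building procedure of step (2), with step (3) fragmenting the resulting lines wherever the locally thresholded VI indicates absent plants.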