stingund wrote: ↑Mon May 03, 2021 12:43 am
In the end, it doesn't matter how the underlying hardware works. The only thing which matters is the contract that the API offers. And for a 2D platform such as Love2D, specified pixel locations should *absolutely* be accurate because they should directly map to an underlying surface, for example an 800x600 grid (discounting the scaling that the OS later does as it blits the surface, as I want to cover things that are in Love's realm of control).
Having established this, if you specify (2,2) for drawing a primitive of any type, be it a point, line or triangle, then pixel [2,2] in the surface should be unconditionally targeted. This should certainly hold whenever drawing axis-aligned primitives. When drawing oblique lines, the rasterization process could change things slightly, but I would still expect the starting point of a line to be exactly at the pixel you've specified.
Again, Love2D is a 2D platform, like pico-8. If you specify you want a point or a line at (3,3), that's where it should be. This is the API contract that should be presented to the user. And from reading the description of the various functions in the API, this is what seems to be implied.
I am confused to see you justify how there are essentially different behaviours for the 3 primitives you mentioned, and further differentiation depending on whether we are talking about horizontal or vertical lines. We shouldn't even get there.
I hope some day to find some time to peek at the implementation to understand what is causing this behaviour. For now, I'm able to get consistent behaviour by taking this into account, but a user of the API should certainly not have to do that. Behaviour should be consistent and predictable.
Behaviour is consistent and predictable, to the extent that floating-point math allows it. If you pass coordinates that are unambiguous for the task at hand, you'll get predictable results.
The problem seems to be that you're used to an integer coordinate system where each pixel is assigned an X and Y coordinate number, and that assumption is wrong in the case of OpenGL and, by extension, in the case of Löve. Pixels have areas and lengths; coordinates are floats and can fall anywhere within a pixel. In this system, integer coordinates actually lie *between* pixels. You need to get rid of that prejudice of a perfect grid where each pixel has a unique integer coordinate; that's not how it works. It's how simpler systems like Pico8, or the programming languages of many old computers, work; but it's not how more modern systems work. In Pico8 you can't smoothly move an image in 0.1-pixel increments, for example, while in Löve you can. On Android, by default, the coordinates are not even pixels; instead they are arbitrary units whose size is set by the vendor.
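To make the sub-pixel point concrete, here's a minimal sketch of what I mean (the image file name and speed are just placeholder examples, not from any real project):

```lua
-- Minimal sketch: sub-pixel movement in Löve.
-- "sprite.png" and the speed of 10 px/s are assumed example values.
local img, x

function love.load()
  img = love.graphics.newImage("sprite.png")
  x = 0
end

function love.update(dt)
  -- x accumulates fractional values; nothing snaps it to integers
  x = x + 10 * dt
end

function love.draw()
  -- Non-integer coordinates are perfectly valid; with linear filtering
  -- the image is interpolated between pixel positions instead of
  -- jumping a whole pixel at a time.
  love.graphics.draw(img, x, 100)
end
```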
Strictly speaking, you can't draw a line. A line is an infinitely thin mathematical object, and therefore it can't be visible. Bresenham's algorithm and similar ones work by drawing the pixels that are closest to the centre of the line; that actually results in a variable average thickness depending on angle - for example, a 45° line has an average thickness of sqrt(2)/2 ≅ 0.7071 pixels, while a horizontal or vertical line has a thickness of 1 pixel. That's what Pico8 does. In contrast, Löve tries to draw lines with uniform thickness, the one given in love.graphics.setLineWidth. That's closer to what vector drawing programs like Inkscape, and the more sophisticated bitmap drawing programs like GIMP, actually do. You seem to expect the behaviour of classic pixel-drawing programs like Deluxe Paint or Autodesk Animator, and that's not how things work on today's hardware.
Rectangles in Löve work just as you expect. The border of the rectangle is aligned with the border of the pixel, and the borders of pixels lie in integer coordinates, therefore passing an integer coordinate to a rectangle causes no ambiguity.
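For example (a minimal sketch; the numbers are arbitrary):

```lua
-- With integer coordinates, a filled rectangle covers whole pixels:
-- this fills exactly the 100x50 block of pixels whose top-left
-- pixel is (10, 20), with no ambiguity at the edges.
love.graphics.rectangle("fill", 10, 20, 100, 50)
```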
Lines are different. They start and end at the given endpoints, and extend half the given line width to each side (i.e. 0.5 pixels to each side by default), forming a rectangle (Löve does not support different line caps, as some drawing programs do; only different joins). Conceptually, a "rough" line and a "smooth" line are treated the same; the difference is that "rough" forces pixel alpha to always be 1 or 0. Both are drawn using OpenGL quads, not pixel by pixel, and not with Bresenham's algorithm. If you draw a horizontal line of width 1 using all integer coordinates, you're creating a rectangle that extends horizontally from one exact coordinate you've specified to the other, and vertically from halfway into the pixel row above to halfway into the pixel row below. That is ambiguous when using the rough line style. However, if you offset the vertical coordinate by 0.5 pixels, the rectangle exactly covers a single row of pixels, and the ambiguity disappears. (The mistake I made before was believing that this also applied to the horizontal coordinate, but that's not the case, because of the aforementioned fact that lines are not capped.) For vertical lines the situation is mirrored: the line extends 0.5 pixels to the left and right of the x coordinate you specify, while the top and bottom are unambiguous when the coordinates are integral.
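In code, the rule for width-1 rough lines looks like this (a sketch; the particular pixel coordinates are just examples):

```lua
love.graphics.setLineStyle("rough")
love.graphics.setLineWidth(1)

-- Horizontal line covering pixel row 5, columns 10..19:
-- the x endpoints stay integral (pixel borders), and y gets +0.5
-- so the 1-pixel-thick quad lands exactly on one row.
love.graphics.line(10, 5.5, 20, 5.5)

-- Vertical line covering pixel column 10, rows 5..14:
-- now the y endpoints stay integral and x gets +0.5.
love.graphics.line(10.5, 5, 10.5, 15)
```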
Points are similar to lines. The conceptual object has no width or height, and OpenGL (and therefore Löve) creates a rectangle around it extending 0.5 pixels in every direction, to give it a width and height of 1 pixel. If you draw it at an intersection of 4 pixels, as you do when you use integral coordinates, the point overlaps all four surrounding pixels; I believe some graphics drivers actually do that. To get a single pixel drawn, you need to place the point at the centre of a pixel; that's why you need to add 0.5 to both coordinates.
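For points, that means (again a sketch with example coordinates):

```lua
love.graphics.setPointSize(1)

-- Unambiguous: targets the centre of pixel (3, 3),
-- so exactly that pixel is drawn.
love.graphics.points(3.5, 3.5)

-- Ambiguous: lands on the corner shared by pixels
-- (2,2), (3,2), (2,3) and (3,3); which pixel lights up
-- is left to the driver.
--love.graphics.points(3, 3)
```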
Finally, about hardware abstraction. Löve is a thin layer on top of OpenGL, by design, because that's how you do accelerated graphics and stay competitive. It's not a "let's do things the way the user expects at all costs, even if that means giving up hardware acceleration" kind of engine. It makes compromises. And if you feed it ambiguous data, it won't always make the same choice, because it leaves the resolution of the ambiguity in the hands of OpenGL, which is to say in the hands of the OpenGL driver writers. But with an understanding of how the coordinates work, you can feed it unambiguous data that will behave the same on all systems.