Image vs Texture confusion #968
Comments
I've discussed this with Leeor on a few occasions. They really should be two separate classes. However, splitting them does create certain API challenges. We never had a solution to that problem, so we never attempted to tackle this design change.
You can generalize the interface via a factory on the Renderer. You use the Image class to generate the blob (if loading from a file; memory textures only use an array of color data) and ask the Renderer to create the Texture for you: see my Renderer::Create1DTexture and Renderer::Create2DTexture. After validating and cleaning up the file path, Create2DTexture constructs an instance of the Image class that loads the data from disk and parses it into its various parts: bpp, dimensions, and data, which is then fed into the DirectX-specific process to create a texture. (I have a texture database on the back end because DirectX texture IDs are not simple integers.) Memory textures (if not being stored for later use) don't need this information, so they can be created with just an array and dimensions. (See the various CreateDefault*Texture functions for examples of that.)
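A minimal sketch of that shape, under my own naming assumptions (Image, TextureId, create2DTexture, loadImageFromDisk, and uploadTexture are illustrative stand-ins, not the actual API described above):

```cpp
#include <cstdint>
#include <string>
#include <vector>

// Pure pixel data; no GPU state. Loaded from disk or built in memory.
struct Image
{
    int width = 0;
    int height = 0;
    int bytesPerPixel = 0;
    std::vector<uint8_t> pixels; // raw blob: width * height * bytesPerPixel bytes
};

// Opaque handle. The Renderer maps it to whatever the backend actually uses
// (a GLuint, a D3D resource pointer, ...) in its own internal texture table.
struct TextureId
{
    uint32_t value = 0;
};

class Renderer
{
public:
    // File-backed path: file -> Image (blob + bpp + dimensions) -> backend texture.
    TextureId create2DTexture(const std::string& filePath)
    {
        const Image image = loadImageFromDisk(filePath);
        return uploadTexture(image.pixels.data(), image.width, image.height, image.bytesPerPixel);
    }

    // Memory textures skip the Image step: just an array of color data and dimensions.
    TextureId create2DTexture(const uint8_t* pixels, int width, int height, int bytesPerPixel)
    {
        return uploadTexture(pixels, width, height, bytesPerPixel);
    }

private:
    // Stubs standing in for real file decoding and backend-specific upload code.
    Image loadImageFromDisk(const std::string& /*filePath*/) { return {}; }
    TextureId uploadTexture(const uint8_t*, int, int, int) { return {}; }
};
```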
This has problems. You can create a "generic" Texture interface, but nothing stops a caller from handing one backend's Texture to a different backend's Renderer. Example:

```cpp
class Texture {};
class TextureOpenGL : public Texture {};
class TextureOther : public Texture {};

class Renderer {};
class RendererOpenGL : public Renderer {};
class RendererOther : public Renderer {};

// ...
// Assume base class method: Renderer::draw(Point, const Texture&)
rendererOpenGl.draw(position, textureOther); // Error (no compile-time checking possible)
rendererOther.draw(position, textureOpenGL); // Error (no compile-time checking possible)
```

It would seem we can't have both a generic Texture interface and compile-time backend safety.
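Since both calls satisfy the base-class signature Renderer::draw(Point, const Texture&), the compiler can't reject them; the mismatch can only surface at run time. One common mitigation (a sketch of mine, not anything from this codebase) is to tag each Texture with its backend and check before downcasting:

```cpp
#include <cassert>

enum class Backend { OpenGL, Other };

struct Point { int x = 0; int y = 0; };

class Texture
{
public:
    virtual ~Texture() = default;
    virtual Backend backend() const = 0;
};

class TextureOpenGL : public Texture
{
public:
    Backend backend() const override { return Backend::OpenGL; }
    // ... GLuint handle, etc. ...
};

class RendererOpenGL
{
public:
    void draw(Point, const Texture& texture)
    {
        // The wrong-backend error from the example above surfaces here,
        // at run time, rather than as a compile error.
        assert(texture.backend() == Backend::OpenGL);
        const auto& glTexture = static_cast<const TextureOpenGL&>(texture);
        (void)glTexture; // ... issue OpenGL draw calls using glTexture ...
    }
};
```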
I'll have to look for the specific code somewhere in the 11,000+ source files, but Unreal Engine manages to do it just fine. :P
One possibility might be to abandon the abstract Renderer base class and commit to a single renderer implementation.
To be clear, we would need buy-in from Leeor before considering such a change, and I don't think that will be readily forthcoming. I mention it because most projects probably assume just a single type of renderer. Having a single renderer would simplify the problem and avoid the above-mentioned issues with splitting Image and Texture.
Mine is a single-type renderer and splits it up. :>
I've thought about this and I don't really have a good solution. Gonna try to keep this as short as possible, so bear with me.

The idea behind Image was to provide a high-level interface so that the user can instantiate an Image class and use it in a Renderer without worrying about internal formats and what the Renderer expects. Basically: I want to load this image file and I want it to be displayed on the screen at Point(x, y). Very simple. No thought given to how the hardware handles it, how to optimize memory for it, etc.

This worked well in the early code that relied almost exclusively on SDL 1.2 and software-based rendering (in fact, Renderer used to be called a blitter... SDL_Blitter). We eventually added the OpenGL renderer (OGL_Blitter at the time) because some of the effects I wanted to use (image rotation, tinting, blending, etc.) were very slow in software mode. The SDL renderer was left in for a time as a sort of 'safe fallback'. Eventually the idea of a software renderer was dumped in favor of an OpenGL-only renderer, since virtually all hardware that users had would have some form of an OpenGL implementation.

The Image class went through a number of changes for handling the OpenGL texture IDs that were generated from the raw image data. Originally it stored a field internally that would map to a texture ID (seems that's still the case). This was later changed to the Renderer handling the mapping of Images to TextureIDs. This was eventually moved back to the Image class. So here we are, literally 13 years later, and the Image class does two things.

Ultimately I want the interface to have the concept of an Image class which represents the image data itself and also allows for a number of image operations (e.g., changing color channels, desaturation, pixel plotting, reading from/writing to disk, etc.). These are all things that make sense for an Image. I would still like the Renderer class to take the Image as part of its interface. However, the Image will need to be converted to an appropriate Texture that the Renderer can handle appropriately. I don't think textures deriving from the Renderer class is necessary -- I think the Texture class can handle this on its own, or at least provide the fields needed for a Renderer to make whatever decisions it needs to make (e.g., a void pointer or whatever that an OpenGL renderer or a DirectX renderer or a Vulkan renderer can make use of). Not sure that's the best way to handle it.

Then there's the question of what object is responsible for handling Image->Texture mappings. That's the real challenge here, because I really want this to be transparent to the user. The user should only really be worrying about the high-level concept of an Image. How a Renderer needs image data to be represented should be irrelevant to the user.
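One way to make that mapping transparent is for the Renderer to keep a private cache from Image identity to backend texture handle, creating textures lazily on first draw so the user only ever touches Image. A sketch under those assumptions (all names here are hypothetical, not existing code):

```cpp
#include <cstdint>
#include <unordered_map>
#include <vector>

struct Point { int x = 0; int y = 0; };

// Stand-in for the high-level Image described above: pixel data plus image operations.
struct Image
{
    int width = 0;
    int height = 0;
    std::vector<uint8_t> pixels;
};

class Renderer
{
public:
    // The only call the user needs: draw this Image at this Point.
    void drawImage(const Image& image, Point position)
    {
        drawTexture(textureFor(image), position);
    }

private:
    // Lazily create and cache the backend texture for each Image.
    // Keyed by address for brevity; a real version would need a stable
    // identity and invalidation when an Image's pixels change.
    uint32_t textureFor(const Image& image)
    {
        auto it = mTextureCache.find(&image);
        if (it == mTextureCache.end())
        {
            it = mTextureCache.emplace(&image, uploadToGpu(image)).first;
        }
        return it->second;
    }

    // Stubs for the backend-specific parts (OpenGL, DirectX, Vulkan, ...).
    uint32_t uploadToGpu(const Image& /*image*/) { return 0; }
    void drawTexture(uint32_t /*textureId*/, Point /*position*/) {}

    std::unordered_map<const Image*, uint32_t> mTextureCache;
};
```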
There is a difference between an Image and a Texture.
An Image is just a blob of data representing the pixels, the bits per pixel, and the intended dimensions of the data, all of which can be freely read from and written to disk and/or packed into a binary format for fast reads and writes.
A Texture is a GPU-specific format that is different for OpenGL, DirectX, SDL, etc.
As it is now, the Image class represents both. This is bad.
Refactoring out the GPU-specific data and functions (re: all the OpenGL functions that are being used) into a separate Texture class would be much better.
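A minimal interface sketch of that split, under my own naming assumptions (none of this is existing code): Image keeps the blob, bpp, dimensions, and the disk/pixel operations; Texture keeps only the GPU resource and is produced from an Image.

```cpp
#include <cstdint>
#include <string>
#include <vector>

// CPU-side only: the blob, bits per pixel, dimensions, and image operations.
class Image
{
public:
    explicit Image(const std::string& filePath);          // read and decode from disk
    void writeToDisk(const std::string& filePath) const;  // serialize back out
    void desaturate();                                    // example pixel operation

    int width() const { return mWidth; }
    int height() const { return mHeight; }
    int bitsPerPixel() const { return mBitsPerPixel; }
    const std::vector<uint8_t>& pixels() const { return mPixels; }
    // Deliberately no OpenGL calls and no texture IDs in here.

private:
    int mWidth = 0;
    int mHeight = 0;
    int mBitsPerPixel = 0;
    std::vector<uint8_t> mPixels;
};

// GPU-side only: created from an Image, owns the backend resource.
class Texture
{
public:
    explicit Texture(const Image& image); // uploads image.pixels() to the GPU
    ~Texture();                           // releases the GPU resource

    uint32_t handle() const { return mHandle; }

private:
    uint32_t mHandle = 0; // e.g. a GLuint under OpenGL
};
```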