
Image vs Texture confusion #968

Open
cugone opened this issue Aug 2, 2021 · 8 comments


@cugone
Contributor

cugone commented Aug 2, 2021

There is a difference between an Image and a Texture.

An Image is just a blob of data representing the pixels, the bits per pixel, and the intended dimensions of the data, all of which can be freely read from and written to disk and/or packed into a binary format for fast reads and writes.

A Texture is a GPU-specific format that is different for OpenGL, DirectX, SDL, etc.

As it is now, the Image class represents both. This is bad.

Refactoring out the GPU-specific data and functions (i.e., all the OpenGL functions currently being used) into a separate Texture class would be much better.
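A minimal sketch of the proposed split might look like the following. All names here are illustrative assumptions, not the actual NAS2D API:

```cpp
#include <cstdint>
#include <vector>

// Image: pure pixel data, no GPU state. Freely serializable to/from disk.
struct Image
{
    int width = 0;
    int height = 0;
    int bitsPerPixel = 0;
    std::vector<uint8_t> pixels; // raw blob: width * height * (bitsPerPixel / 8) bytes
};

// Texture: wraps an API-specific handle (e.g. a GLuint for OpenGL).
// Created by a Renderer from an Image; owns no pixel data itself.
class Texture
{
public:
    explicit Texture(unsigned int apiHandle) : mHandle{apiHandle} {}

    unsigned int handle() const { return mHandle; }

private:
    unsigned int mHandle;
};
```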

@DanRStevens
Collaborator

I've discussed this with Leeor on a few occasions. They really should be two separate classes. However, splitting them does create certain API challenges. The proposed Texture class would be specific to a particular Renderer subtype, so it can't be part of the generic base class interface. The base class interface would need to continue using Image for generality. But then, how do you map from a generic Image instance to a Renderer subtype specific Texture class?

We never had a solution to that problem, so never attempted to tackle this design change.

@cugone
Contributor Author

cugone commented Aug 2, 2021

But then, how do you map from a generic Image instance to a Renderer subtype specific Texture class?

You can generalize the interface via a Texture interface and various CreateTexture functions.

You use the Image class to generate the blob (if loading from a file; memory textures only use an array of color data) and ask the Renderer to create the Texture for you:

See my Renderer::Create1DTexture and Renderer::Create2DTexture.

After validating and cleaning up the filepath, it constructs an instance of the Image class that loads the data from disk and parses it into its various parts (bpp, dimensions, and data), which are then fed into the DirectX-specific process to create a texture. (I have a texture database on the back end because DirectX texture IDs are not simple integers.)

Memory textures (if not being stored for later use) don't need this information so they can be created with just an array and dimensions. (See: the various CreateDefault*Texture functions for examples of that).
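The factory approach described above could be sketched like this. The names (Image, Texture, Renderer::createTexture, etc.) are assumptions for illustration, not the actual API of either project:

```cpp
#include <cstdint>
#include <memory>
#include <vector>

struct Image { int width; int height; std::vector<uint8_t> pixels; };

class Texture
{
public:
    virtual ~Texture() = default;
};

class Renderer
{
public:
    virtual ~Renderer() = default;
    // Each backend returns its own Texture subtype behind the base pointer.
    virtual std::unique_ptr<Texture> createTexture(const Image& image) = 0;
};

class TextureOpenGL : public Texture
{
public:
    explicit TextureOpenGL(unsigned int glId) : mGlId{glId} {}
    unsigned int glId() const { return mGlId; }
private:
    unsigned int mGlId;
};

class RendererOpenGL : public Renderer
{
public:
    std::unique_ptr<Texture> createTexture(const Image&) override
    {
        // Real code would upload the pixel blob via glGenTextures/glTexImage2D
        // here; a fake incrementing id stands in for the GL handle.
        return std::make_unique<TextureOpenGL>(++mNextId);
    }
private:
    unsigned int mNextId = 0;
};
```

Callers only ever see `std::unique_ptr<Texture>`, so the generic Renderer interface survives; the cost is that the concrete subtype is hidden behind the base pointer, which is exactly the type-checking problem discussed below.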

@DanRStevens
Collaborator

This has problems. You can create a "generic" Texture class, but it won't necessarily have good usage characteristics. The Texture class is necessarily tied to a Renderer subclass. If you were passing Texture objects around, you would need to pass the derived class that matches the Renderer. If you passed any other subclass around, it wouldn't be valid for that Renderer. This would imply the Renderer should have method signatures that only accept the specific matching Texture subclass. But then that breaks having a common base class Renderer interface.

Example:

class Texture {};
class TextureOpenGL : public Texture {};
class TextureOther : public Texture {};

class Renderer {};
class RendererOpenGL : public Renderer {};
class RendererOther : public Renderer {};

// ...

// Assume base class method: Renderer::draw(Point, const Texture&)
rendererOpenGL.draw(position, textureOther); // Wrong pairing, but compiles (no compile-time check possible)
rendererOther.draw(position, textureOpenGL); // Wrong pairing, but compiles (no compile-time check possible)

It would seem we can't have both Renderer subclass specific Texture subclasses, while also maintaining a generic Renderer base class, and still be able to compile time check that we are passing the right parameter types around.
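One common fallback, sketched here as an assumption rather than anything either project does, is to give up the compile-time check and verify the subtype at run time inside each Renderer:

```cpp
#include <stdexcept>

class Texture { public: virtual ~Texture() = default; };
class TextureOpenGL : public Texture {};
class TextureOther : public Texture {};

struct Point { int x; int y; };

class Renderer
{
public:
    virtual ~Renderer() = default;
    virtual void draw(Point position, const Texture& texture) = 0;
};

class RendererOpenGL : public Renderer
{
public:
    void draw(Point, const Texture& texture) override
    {
        // The mismatch from the example above is only caught here, at run time.
        if (!dynamic_cast<const TextureOpenGL*>(&texture))
        {
            throw std::invalid_argument("Texture was not created by this Renderer");
        }
        // ... the actual OpenGL draw call would go here ...
    }
};
```

This preserves the generic base class interface but trades a compile error for a run-time failure, which is precisely the trade-off the comment above is pointing at.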

@cugone
Contributor Author

cugone commented Aug 3, 2021

I'll have to look for the specific code somewhere in the 11,000+ source files, but Unreal Engine manages to do it just fine. :P

@DanRStevens
Collaborator

One possibility might be to abandon the Renderer base interface. I suspect if you checked another project, you'd effectively find a solution such as that.

@DanRStevens
Collaborator

To be clear, we would need buy-in from Leeor before considering such a change. I don't think that will be readily coming. I mention it because most projects probably assume just a single type of renderer. Having a single renderer would simplify the problem, and avoid the above-mentioned issues with splitting Image and Texture.

@cugone
Contributor Author

cugone commented Aug 4, 2021

To be clear, we would need buy-in from Leeor before considering such a change. I don't think that will be readily coming. I mention it because most projects probably assume just a single type of renderer. Having a single renderer would simplify the problem, and avoid the above-mentioned issues with splitting Image and Texture.

Mine is a single-type renderer and splits it up. :>

@ldicker83
Member

ldicker83 commented Aug 12, 2021

I've thought about this and I don't really have a good solution. Gonna try to keep this as short as possible so bear with me.

The idea behind Image was to provide a high level interface so that the user can instantiate an Image class and use it in a Renderer without worrying about internal formats and what the Renderer expects. Basically: I want to load this Image file and I want it to be displayed on the screen at Point(x, y). Very simple. No thoughts given to how the hardware handles it, how to optimize memory for it, etc.

This worked well in the early code that relied almost exclusively on SDL 1.2 and software-based rendering (in fact, Renderer used to be called a blitter... SDL_Blitter). We eventually added the OpenGL renderer (OGL_Blitter at the time) because some of the effects I wanted to use (image rotation, tinting, blending, etc.) were very slow in software mode. The SDL renderer was left in for a time as a sort of 'safe fallback'. Eventually the idea of a software renderer was dumped in favor of an OpenGL-only renderer, since virtually all hardware that users had would have some form of an OpenGL implementation.

The Image class went through a number of changes for handling the OpenGL texture IDs that were generated from the raw image data. Originally it stored a field internally that would map to a texture ID (seems that's still the case). This was later changed to the Renderer handling the mapping of Images to TextureIDs. This was eventually moved back to the Image class.


So here we are, literally 13 years later. And the Image class does two things.

Ultimately I want the interface to have the concept of an Image class which represents the image data itself and also allows for a number of image operations (e.g., changing color channels, desaturation, pixel plotting, reading from/writing to disk, etc.). These are all things that make sense for an Image.

I would still like the Renderer class to take the Image as part of its interface. However, the Image will need to be converted to an appropriate Texture that the Renderer can handle appropriately.

I don't think Texture subtypes tied to the Renderer class are necessary -- I think the Texture class can handle this on its own, or at least provide the fields needed for a Renderer to make whatever decisions it needs to make (e.g., a void pointer or whatever that an OpenGL renderer or a DirectX renderer or a Vulkan renderer can make use of). Not sure that's the best way to handle it.
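The opaque-handle idea could be sketched as follows. This is an assumption for illustration, not a settled design: a single concrete Texture class holds backend-agnostic fields plus a void pointer that only the active renderer interprets.

```cpp
// Texture as a single concrete class, not a per-renderer hierarchy.
class Texture
{
public:
    Texture(int width, int height, void* apiData)
        : mWidth{width}, mHeight{height}, mApiData{apiData} {}

    int width() const { return mWidth; }
    int height() const { return mHeight; }

    // An OpenGL renderer might stash a GLuint behind this pointer; a DirectX
    // renderer a resource pointer; a Vulkan renderer a VkImage wrapper.
    void* apiData() const { return mApiData; }

private:
    int mWidth;
    int mHeight;
    void* mApiData;
};
```

The obvious cost is that the void pointer erases type safety entirely, so a renderer handed a Texture created by a different backend would misinterpret the handle with no diagnostic at all.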

Then there's the question of what object is responsible for handling Image->Texture mappings. That's the real challenge here because I really want this to be transparent to the user. The user should only really be worrying about the high-level concept of an Image. How a Renderer needs image data to be represented should be irrelevant to the user.
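One way to keep that mapping transparent, sketched here purely as an assumption, is for the Renderer to own an internal cache keyed on the Image instance, lazily creating the backend texture on first use so callers only ever pass Image:

```cpp
#include <unordered_map>

struct Image { int width; int height; };

class RendererOpenGL
{
public:
    unsigned int textureIdFor(const Image& image)
    {
        // Return the cached texture id if this Image was already uploaded.
        auto it = mTextureCache.find(&image);
        if (it != mTextureCache.end()) { return it->second; }

        // Real code would upload the pixel data via glTexImage2D here;
        // a fake incrementing id stands in for the GL handle.
        const auto newId = ++mNextId;
        mTextureCache.emplace(&image, newId);
        return newId;
    }

private:
    std::unordered_map<const Image*, unsigned int> mTextureCache;
    unsigned int mNextId = 0;
};
```

Keying on the Image's address assumes Images outlive their cached textures and are not copied or moved, so a real implementation would also need eviction on Image destruction (or a key based on filename or content hash instead).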
