I’m starting to work on a physics system where there are a number of objects in the world that get rendered. The current approach I’m considering is pseudo-OOP in C, where every object has data that’s used for rendering and data that’s not used for rendering.
Quick example:
double Object_update(struct Object *this, double now) {
    double delta = now - this->last_update;
    this->position[0] += this->velocity[0] * delta;
    this->position[1] += this->velocity[1] * delta;
    this->position[2] += this->velocity[2] * delta;
    // Check for collisions etc.
    this->last_update = now;
    return delta;
}
struct Object; // forward declaration so Class can reference it

struct Class {
    size_t size;
    const struct Class *super;
    double (*update)(struct Object *, double);
} Object = {
};

struct Object {
    const struct Class *class;
    double last_update;
    float position[3], rotation[3][3], velocity[3], mass;
    // Others might be added later, e.g., bounding_box, angular_velocity, force, torque, etc.
};
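For reference, my intent is that dispatch goes through the object’s class pointer, so ‘subclasses’ can override update. A compilable sketch of what I mean (anything not shown above, like ObjectClass, is just a placeholder name):

```c
#include <stddef.h>

struct Object;

struct Class {
    size_t size;
    const struct Class *super;
    double (*update)(struct Object *, double);
};

struct Object {
    const struct Class *class;
    double last_update;
    float position[3], velocity[3];
};

static double Object_update(struct Object *this, double now) {
    double delta = now - this->last_update;
    for (int i = 0; i < 3; i++)
        this->position[i] += this->velocity[i] * (float)delta;
    this->last_update = now;
    return delta;
}

static const struct Class ObjectClass = {
    sizeof(struct Object), NULL, Object_update
};

// Update every object through its class pointer ("virtual" dispatch).
double update_all(struct Object **objects, size_t n, double now) {
    double last = 0.0;
    for (size_t i = 0; i < n; i++)
        last = objects[i]->class->update(objects[i], now);
    return last;
}
```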
So here is the problem. If these objects have their position and rotation embedded in them, then how could I efficiently pass those (and not the other properties) to the GPU for rendering?
I’ve considered storing these specifically in a parallel array in GPU-accessible memory, but that could get complicated when I run into the age-old problem of keeping said array contiguous, i.e., removing gaps when objects are destroyed, etc. And it gets even more complicated when I add ‘subclasses’ with additional data that’s relevant for rendering, such as texture IDs or meshes, meaning the elements are no longer homogeneous in size.
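The compaction step I’m picturing is the usual swap-with-last trick: on destroy, move the final element into the vacated slot and fix up the owning object’s index. A rough sketch of that part alone (all the names here are mine, and the array is plain memory rather than real GPU-accessible memory):

```c
#include <stddef.h>

struct GpuData {
    float position[3];
    float rotation[3][3];
};

struct Pool {
    struct GpuData data[1024]; // would live in GPU-accessible memory
    size_t owner[1024];        // index back to the owning Object
    size_t count;
};

// Remove slot i by plugging the hole with the last element. Returns the
// owner of the moved element (so the caller can re-point its index at i),
// or (size_t)-1 if the last slot itself was removed and no fix-up is needed.
size_t pool_remove(struct Pool *p, size_t i) {
    size_t last = --p->count;
    if (i == last)
        return (size_t)-1;
    p->data[i] = p->data[last];
    p->owner[i] = p->owner[last];
    return p->owner[i];
}
```

This keeps the array gap-free in O(1) per removal, at the cost of the extra owner bookkeeping; it doesn’t address the heterogeneous-size problem subclasses introduce.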
How can I best maintain GPU-accessible data in my Object along with properties that don’t need to be GPU accessible?
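One split I’ve been sketching (purely illustrative names, not settled design): the object keeps only an index into a contiguous array of render-relevant data, while the CPU-only fields stay in the object itself:

```c
#include <stddef.h>

struct RenderData {            // only what the GPU needs, tightly packed
    float position[3];
    float rotation[3][3];
};

struct Entity {                // CPU-side bookkeeping stays here
    size_t render_index;       // where this entity's RenderData lives
    double last_update;
    float velocity[3], mass;
};

struct RenderData render_data[1024]; // would be GPU-accessible memory
size_t render_count = 0;

// Allocate the next render slot for a new entity.
size_t render_alloc(void) {
    return render_count++;
}

// An entity reaches its hot data through its index.
struct RenderData *entity_render_data(struct Entity *e) {
    return &render_data[e->render_index];
}
```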
I could loop over them in the render loop and call setVertexBytes with each object’s position/rotation data, but that would incur one draw call per object, which isn’t efficient for a large number of objects.
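The alternative I’d presumably be comparing against is gathering the render-relevant fields into one contiguous instance buffer each frame and issuing a single instanced draw from it. A CPU-side sketch of just the gather step (the actual draw call is omitted, and the struct and function names are mine):

```c
#include <stddef.h>
#include <string.h>

struct Instance {              // per-instance data handed to the GPU
    float position[3];
    float rotation[3][3];
};

struct Body {                  // stand-in for the full Object
    float position[3];
    float rotation[3][3];
    float velocity[3];         // CPU-only, deliberately not copied
};

// Copy only the render-relevant fields of each object into out.
// Returns the byte count to upload to the GPU in one go.
size_t gather_instances(const struct Body *objs, size_t n,
                        struct Instance *out) {
    for (size_t i = 0; i < n; i++) {
        memcpy(out[i].position, objs[i].position, sizeof out[i].position);
        memcpy(out[i].rotation, objs[i].rotation, sizeof out[i].rotation);
    }
    return n * sizeof(struct Instance);
}
```

That trades one per-frame copy pass for a single draw call, but it’s exactly the duplication of data I was hoping to avoid by keeping a persistent GPU-side array.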