At the earliest place, immediately after driverdata is set.
(Doing it in SDL_render.c, after creation, would be too late, because there are renderers that already use or change those values in their CreateRenderer() function).
This means the allocator's caller doesn't need to use SDL_OutOfMemory directly
if the allocation fails.
This applies to the usual allocators: SDL_malloc, SDL_calloc, SDL_realloc
(all of these regardless of whether the app supplied a custom allocator or we're
using system malloc() or an internal copy of dlmalloc under the hood),
SDL_aligned_alloc, SDL_small_alloc, SDL_strdup, SDL_asprintf, SDL_wcsdup...
probably others. If it returns something you can pass to SDL_free, it should
work.
The caller might still need to use SDL_OutOfMemory if something that wasn't
SDL allocated the memory: operator new in C++ code, Objective-C's alloc
message, win32 GlobalAlloc, etc.
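A minimal sketch of the resulting calling convention (illustrative, not SDL source):

```c
#include <stdlib.h>
#include <SDL3/SDL.h>

static int make_buffers(char **sdl_buf, void **raw_buf)
{
    *sdl_buf = (char *) SDL_malloc(1024);
    if (!*sdl_buf) {
        return -1;               /* error already set by SDL_malloc */
    }

    *raw_buf = malloc(1024);     /* not an SDL allocator... */
    if (!*raw_buf) {
        SDL_free(*sdl_buf);
        SDL_OutOfMemory();       /* ...so report the failure ourselves */
        return -1;
    }
    return 0;
}
```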
Fixes #8642.
This uses the same `SDL_VerbNoun` format as the rest of SDL3, and also makes
a stronger effort to invalidate cached state in the backend, so cooperation
improves with apps that use low-level rendering APIs directly.
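For example, assuming the renamed entry point is SDL_FlushRenderer, an app mixing SDL's render API with direct GL calls would look something like this (the GL side is illustrative):

```c
#include <SDL3/SDL.h>
#include <SDL3/SDL_opengl.h>

/* App-side assumptions for the sketch: */
extern GLuint my_texture_id;
extern void my_draw_custom_gl(void);

static void frame(SDL_Renderer *renderer)
{
    SDL_SetRenderDrawColor(renderer, 0, 0, 0, 255);
    SDL_RenderClear(renderer);

    /* Flush SDL's batched commands and invalidate its cached GL state
       before touching the GL context directly. */
    SDL_FlushRenderer(renderer);

    glBindTexture(GL_TEXTURE_2D, my_texture_id);
    my_draw_custom_gl();

    SDL_RenderPresent(renderer);
}
```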
Fixes #367.
Otherwise, a massive line might generate gigabytes' worth of points to render,
which the backend would simply throw away anyhow.
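The idea, sketched (illustrative, not SDL's actual code): clip the segment to the bounds first, then generate points only for the surviving span. A standard Liang-Barsky clip does the job:

```c
#include <SDL3/SDL.h>

/* Clip the segment (x0,y0)-(x1,y1) to the given bounds in place.
   Returns SDL_FALSE if the segment lies entirely outside. */
static SDL_bool clip_segment(float *x0, float *y0, float *x1, float *y1,
                             float xmin, float ymin, float xmax, float ymax)
{
    float t0 = 0.0f, t1 = 1.0f;
    const float dx = *x1 - *x0, dy = *y1 - *y0;
    const float p[4] = { -dx, dx, -dy, dy };
    const float q[4] = { *x0 - xmin, xmax - *x0, *y0 - ymin, ymax - *y0 };

    for (int i = 0; i < 4; i++) {
        if (p[i] == 0.0f) {
            if (q[i] < 0.0f) {
                return SDL_FALSE;          /* parallel to and outside an edge */
            }
        } else {
            const float r = q[i] / p[i];
            if (p[i] < 0.0f) {             /* entering the clip region */
                if (r > t1) return SDL_FALSE;
                if (r > t0) t0 = r;
            } else {                       /* leaving the clip region */
                if (r < t0) return SDL_FALSE;
                if (r < t1) t1 = r;
            }
        }
    }
    *x1 = *x0 + t1 * dx; *y1 = *y0 + t1 * dy;
    *x0 = *x0 + t0 * dx; *y0 = *y0 + t0 * dy;
    return SDL_TRUE;
}
```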
Fixes #8113.
(cherry picked from commit 4339647d90)
The following objects now have user-modifiable properties (see the sketch after this list):
* SDL_AudioStream
* SDL_Gamepad
* SDL_Joystick
* SDL_RWops
* SDL_Renderer
* SDL_Sensor
* SDL_Surface
* SDL_Texture
* SDL_Window
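A minimal sketch of attaching app data to one of these objects, assuming the accessor spellings below (SDL_GetWindowProperties plus the pointer-property getter/setter):

```c
#include <SDL3/SDL.h>

static void tag_window(SDL_Window *window, void *app_state)
{
    SDL_PropertiesID props = SDL_GetWindowProperties(window);
    SDL_SetPointerProperty(props, "myapp.state", app_state);
}

static void *get_tag(SDL_Window *window)
{
    return SDL_GetPointerProperty(SDL_GetWindowProperties(window),
                                  "myapp.state", NULL);
}
```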
Also switched the D3D11 and D3D12 renderers to use real NV12 textures for NV12 data.
The combination of these two changes allows us to implement zero-copy video decode and playback for D3D11 in testffmpeg without any access to the renderer internals.
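A hedged sketch of the zero-copy idea: wrap a decoder-owned ID3D11Texture2D in an SDL_Texture instead of copying pixels. The property names are assumptions based on SDL3's texture-creation properties:

```c
#include <SDL3/SDL.h>

static SDL_Texture *wrap_nv12(SDL_Renderer *renderer, void *d3d11_tex,
                              int w, int h)
{
    SDL_PropertiesID props = SDL_CreateProperties();
    SDL_Texture *texture;

    SDL_SetNumberProperty(props, SDL_PROP_TEXTURE_CREATE_FORMAT_NUMBER,
                          SDL_PIXELFORMAT_NV12);
    SDL_SetNumberProperty(props, SDL_PROP_TEXTURE_CREATE_WIDTH_NUMBER, w);
    SDL_SetNumberProperty(props, SDL_PROP_TEXTURE_CREATE_HEIGHT_NUMBER, h);
    SDL_SetPointerProperty(props, SDL_PROP_TEXTURE_CREATE_D3D11_TEXTURE_POINTER,
                           d3d11_tex);
    texture = SDL_CreateTextureWithProperties(renderer, props);
    SDL_DestroyProperties(props);
    return texture;   /* renders straight from the decoder's texture */
}
```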
Battle for Wesnoth apparently relies on being able to disable rendering
of UI elements by setting the clip rectangle to be empty.
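A sketch of the restored behavior, assuming SDL3's SDL_SetRenderClipRect:

```c
#include <SDL3/SDL.h>

static void hide_ui(SDL_Renderer *renderer, SDL_bool hidden)
{
    if (hidden) {
        const SDL_Rect empty = { 0, 0, 0, 0 };
        SDL_SetRenderClipRect(renderer, &empty);  /* everything clipped out */
    } else {
        SDL_SetRenderClipRect(renderer, NULL);    /* clipping disabled */
    }
}
```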
Resolves: https://github.com/libsdl-org/SDL/issues/6896
Fixes: 00f05dcf "render: only enable clipping when the rectangle is valid"
Signed-off-by: Simon McVittie <smcv@collabora.com>
On some systems, such as an Intel MacBook Pro with an AMD card, asking for the default device will always return the AMD GPU.
This is not an issue in 99% of cases, where the renderer context is there to provide the maximum performance level, as in a game.
However, for a video application using the GPU for one quad and one texture, using the discrete GPU leads to significant power consumption (4 to 8 W), heat increase, and fan noise.
With this patch, I successfully amended ffplay to use only the integrated GPU (i.e. the Intel one) instead of the discrete GPU (i.e. the AMD one).
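A sketch of how an app would opt in, assuming this change is exposed through the SDL_HINT_RENDER_METAL_PREFER_LOW_POWER_DEVICE hint:

```c
#include <SDL3/SDL.h>

/* Ask the Metal renderer for the integrated (low-power) GPU.
   Must be called before the renderer is created. */
static void prefer_integrated_gpu(void)
{
    SDL_SetHint(SDL_HINT_RENDER_METAL_PREFER_LOW_POWER_DEVICE, "1");
}
```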
Also, for some reason ID3D11DeviceContext_OMGetRenderTargets() was failing on the second read-pixels call in the "testautomation --filter render_testViewport" test.
We already know the target view, so just use that.
The destination rectangle passed to SDL_BlitSurface() and SDL_BlitSurfaceScaled() is non-const and is filled in with the final destination rectangle after clipping; it is now documented as such.
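A minimal sketch of reading back the clipped rectangle (illustrative):

```c
#include <SDL3/SDL.h>

static void blit_and_report(SDL_Surface *src, SDL_Surface *dst)
{
    SDL_Rect dstrect = { -10, -10, src->w, src->h };  /* partly off-surface */
    if (SDL_BlitSurface(src, NULL, dst, &dstrect) == 0) {
        /* dstrect now holds the final, clipped destination */
        SDL_Log("blit landed at %d,%d (%dx%d)",
                dstrect.x, dstrect.y, dstrect.w, dstrect.h);
    }
}
```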
Fixes https://github.com/libsdl-org/SDL/issues/7911