Everything else has a default constructor that does the straightforward
thing of initializing most members to a default value, except for the
size.
We explicitly initialize the size (and the other members, for consistency)
to prevent potential uninitialized reads, particularly given the large
surface area over which this struct is used.
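As a rough illustration only (the struct and member names below are hypothetical, not the actual code), explicit default member initializers look like this:
```
#include <cstdint>

// Hypothetical sketch: explicit default member initializers so a
// default-constructed instance never exposes an uninitialized size.
struct ImageInfo {
    std::uint32_t width = 0;
    std::uint32_t height = 0;
    std::uint32_t depth = 1;
    std::uint32_t num_samples = 1;
};
```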
On the texture cache we handle multisampled images by keeping their real
size in samples (e.g. 1920x1080 with 4 samples is 3840x2160).
This works nicely for size matches and other comparisons, but the guest
size calculation did not take this into account, and the size was being
multiplied (again) by the number of samples per dimension.
For example, a 3840x2160 texture cache image had its width and height
multiplied by 2, resulting in a much larger texture than intended.
Fix this issue.
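A minimal sketch of the double scaling, with hypothetical names and a simplified (pitch-free) size calculation:
```
#include <cstdint>

// The cached extent is already expressed in samples (1920x1080 with 4
// samples is stored as 3840x2160), so the guest size must not be scaled
// by the sample counts a second time.
struct Extent2D {
    std::uint32_t width;
    std::uint32_t height;
};

// Buggy: multiplies an extent that is already in samples.
std::uint64_t GuestSizeBytesBuggy(Extent2D size_in_samples, std::uint32_t samples_x,
                                  std::uint32_t samples_y, std::uint32_t bytes_per_pixel) {
    return std::uint64_t{size_in_samples.width} * samples_x *
           size_in_samples.height * samples_y * bytes_per_pixel;
}

// Fixed: the extent already accounts for the samples.
std::uint64_t GuestSizeBytes(Extent2D size_in_samples, std::uint32_t bytes_per_pixel) {
    return std::uint64_t{size_in_samples.width} * size_in_samples.height * bytes_per_pixel;
}
```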
- Fixes a performance regression on cooking-related titles that appeared
  when an unrelated bug was fixed.
Images used as render targets were not being "prepared", causing
desynchronization in the texture cache. Needs #6669 to avoid
performance regressions on certain cooking titles.
- Fixes black shadows on Age of Calamity.
Removes common_sizes.h in favor of having `_KiB`, `_MiB`, `_GiB`, etc.
user-defined literals within literals.h.
To keep the global namespace clean, users will have to use:
```
using namespace Common::Literals;
```
to access these literals.
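As a rough sketch of how such user-defined literals are typically declared and used (the exact definitions in literals.h may differ):
```
#include <cstdint>

// Hedged sketch; the real definitions live in literals.h.
namespace Common::Literals {
constexpr std::uint64_t operator""_KiB(unsigned long long value) {
    return value * 1024;
}
constexpr std::uint64_t operator""_MiB(unsigned long long value) {
    return value * 1024 * 1024;
}
} // namespace Common::Literals

// Callers opt in explicitly, keeping the global namespace clean:
using namespace Common::Literals;
constexpr std::uint64_t STREAM_BUFFER_SIZE = 256_MiB;
```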
Users may want to fall back to the CPU ASTC texture decoder due to hangs
and crashes that may be caused by keeping the GPU under compute heavy
loads for extended periods of time. This is especially the case in games
such as Astral Chain, which make extensive use of ASTC textures.
* Misaligned u64 read in a LOG_DEBUG -> replaced with memcpy.
* Huge shift exponent in the stride calculation for a linear buffer; the result was unused -> skipped.
* Large shift in the buffer cache if word == 0 -> skip checking for set bits.
None of these was critical, so this should not change any behavior.
At least under the assumption that the last one relied on masking behavior, which always yields continuous_bits = 0.
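For illustration only, hedged sketches of two of the patterns above (the function names are hypothetical):
```
#include <bit>
#include <cstdint>
#include <cstring>

// Misaligned u64 read: memcpy instead of dereferencing a possibly
// unaligned pointer; well-defined for any alignment.
std::uint64_t ReadUnalignedU64(const std::uint8_t* ptr) {
    std::uint64_t value;
    std::memcpy(&value, ptr, sizeof(value));
    return value;
}

// Skip empty words before deriving a shift amount from a bit scan;
// shifting a 64-bit value by 64 would be undefined behavior.
std::uint64_t MaskBelowFirstSetBit(std::uint64_t word) {
    if (word == 0) {
        return 0; // no set bits, so continuous_bits is simply 0
    }
    const int first_set = std::countr_zero(word);
    return word >> first_set; // shift is now guaranteed to be < 64
}
```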
This line can only ever be reached if src is null, so dereferencing it
here is a logic bug that slipped through.
Instead, we dereference dst, which is guaranteed to be valid.
Fixes implicit sign conversions occurring with usages of std::reduce and
relocates the call into its own utility function to reduce verbosity a
little bit.
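A hedged sketch of the pattern, with an illustrative function name: spelling out the accumulator type avoids the implicit sign conversion an int literal 0 would introduce, and factoring the call out keeps call sites terse.
```
#include <cstddef>
#include <numeric>
#include <vector>

// Illustrative utility: the explicit std::size_t init keeps the
// accumulation unsigned end to end.
std::size_t TotalSize(const std::vector<std::size_t>& sizes) {
    return std::reduce(sizes.begin(), sizes.end(), std::size_t{0});
}
```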
In order to force the BGRA8 conversion on Nvidia using OpenGL, we need to forbid texture copies and views with other formats.
This commit also adds a boolean for this, as the restriction is needed only for the OpenGL API; Vulkan must remain unchanged.
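A rough sketch of the idea (the enum values, flag, and function name are illustrative, not the actual code):
```
// Gate view/copy compatibility on a backend flag: on OpenGL, BGRA8 images
// must not alias other formats so the forced BGRA8 conversion stays intact;
// Vulkan keeps the regular rules.
enum class PixelFormat { B8G8R8A8_UNORM, A8B8G8R8_UNORM };

bool IsViewCompatible(PixelFormat lhs, PixelFormat rhs, bool broken_views) {
    if (lhs == rhs) {
        return true;
    }
    if (broken_views &&
        (lhs == PixelFormat::B8G8R8A8_UNORM || rhs == PixelFormat::B8G8R8A8_UNORM)) {
        return false;
    }
    return true; // placeholder for the usual compatibility table lookup
}
```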