- 18 Sep, 2011 3 commits
  - Andreas Schiffler authored
  - Andreas Schiffler authored
  - Andreas Schiffler authored
- 16 Sep, 2011 4 commits
  - Andreas Schiffler authored
  - Andreas Schiffler authored
  - Ryan C. Gordon authored
  - Ryan C. Gordon authored
    This was a Google Summer of Code 2011 project, sponsored by the Haiku project. We thank Nathan and the other Haiku developers for their support!
- 15 Sep, 2011 1 commit
  - Andreas Schiffler authored
- 10 Sep, 2011 1 commit
  - Markus Kauppila authored
    from the log if the fuzzer isn't used.
- 12 Sep, 2011 3 commits
  - Ryan C. Gordon authored
    --HG-- extra : rebase_source : 420916ed06d79e2d3c1d50e5fb40314ac7d94d85
  - Andreas Schiffler authored
  - Andreas Schiffler authored
    Fix regression introduced by the added parameter check in SDL_EnclosePoints. Add a special case to speed things up when no result is requested.
- 11 Sep, 2011 3 commits
  - Ryan C. Gordon authored
    --HG-- extra : rebase_source : ab97ecaafc5a22451ea1bf8d4740380cf56f2f98
  - Ryan C. Gordon authored
  - Ryan C. Gordon authored
    Original email...

    Date: Sat, 10 Sep 2011 13:01:20 -0500
    From: Patrick Baggett
    To: SDL Development List <sdl@lists.libsdl.org>
    Subject: Re: [SDL] SDL_memcpyMMX uses SSE instructions

    In SDL_blit_copy.c, the function SDL_memcpyMMX() actually uses SSE
    instructions. It is called in this context:

        #ifdef __MMX__
            if (SDL_HasMMX() &&
                !((uintptr_t) src & 7) && !(srcskip & 7) &&
                !((uintptr_t) dst & 7) && !(dstskip & 7)) {
                while (h--) {
                    SDL_memcpyMMX(dst, src, w);
                    src += srcskip;
                    dst += dstskip;
                }
                _mm_empty();
                return;
            }
        #endif

    This implies that the minimum CPU features are just MMX. There is a
    separate SDL_memcpySSE() function.

    The SDL_memcpyMMX() function does:

        #ifdef __SSE__
            _mm_prefetch(src, _MM_HINT_NTA);
        #endif

    ...which tests at compile time if SSE intrinsics are available, not at
    run time. It generates the PREFETCHNTA instruction. It also uses the
    _mm_stream_pi() intrinsic, which generates the MOVNTQ instruction.

    If you replace the "MMX" code with:

        __m64* d64 = (__m64*)dst;
        __m64* s64 = (__m64*)src;

        for (i = len / 64; i--;) {
            d64[0] = s64[0];
            d64[1] = s64[1];
            d64[2] = s64[2];
            d64[3] = s64[3];
            d64[4] = s64[4];
            d64[5] = s64[5];
            d64[6] = s64[6];
            d64[7] = s64[7];
            d64 += 8;
            s64 += 8;
        }

    then MSVC generates the correct movq instructions. GCC (4.5.0) seems to
    think that using 2x movl is still better, but then again, GCC isn't
    actually that good at optimizing intrinsics, as I've found. At least the
    code won't crash on my P2 though. :)

    Also, there is no requirement for MMX accesses to be aligned to the 8th
    byte. I think the author assumed that SSE's 16-byte alignment
    requirement must retroactively mean that MMX requires 8-byte alignment.

    Attached is the full patch.

    Patrick
- 09 Sep, 2011 5 commits
  - Ryan C. Gordon authored
    Fixes Buildbot.
  - Ryan C. Gordon authored
  - Ryan C. Gordon authored
  - Ryan C. Gordon authored
  - Ryan C. Gordon authored
    It was only needed for Mac OS X 10.0 through 10.2, so it seems silly to keep it around for SDL 1.3. I'll leave it in the 1.2 branch for now, though.
- 08 Sep, 2011 1 commit
  - Andreas Schiffler authored
- 07 Sep, 2011 2 commits
  - Ryan C. Gordon authored
  - Andreas Schiffler authored
- 05 Sep, 2011 3 commits
  - Andreas Schiffler authored
  - Andreas Schiffler authored
  - Andreas Schiffler authored
- 04 Sep, 2011 1 commit
  - Andreas Schiffler authored
- 02 Sep, 2011 3 commits
  - Ryan C. Gordon authored
    Thanks to Mattias Holm for the patch!
  - Ryan C. Gordon authored
    Thanks to Mattias Holm for the patch!
  - Ryan C. Gordon authored
- 01 Sep, 2011 4 commits
  - Ryan C. Gordon authored
  - Ryan C. Gordon authored
    Fixes Bugzilla #1297. Thanks to jon @ rafkind for the patch!
  - Ryan C. Gordon authored
  - Ryan C. Gordon authored
    Otherwise, these leak memory and maybe operating system handles.
- 30 Aug, 2011 1 commit
  - Ryan C. Gordon authored
- 29 Aug, 2011 2 commits
  - Ryan C. Gordon authored
- 28 Aug, 2011 3 commits
  - Markus Kauppila authored
  - Markus Kauppila authored
  - Markus Kauppila authored