1. 18 Sep, 2011 3 commits
  2. 16 Sep, 2011 4 commits
  3. 15 Sep, 2011 1 commit
  4. 12 Sep, 2011 3 commits
  5. 11 Sep, 2011 3 commits
    • More work on cleaning out compiler warnings. · d7cb5c41
      Ryan C. Gordon authored
    • Fixed Win64 builds with MinGW. · 0cbf1928
      Ryan C. Gordon authored
    • Some MMX fixes from Patrick Baggett. · ad214ecb
      Ryan C. Gordon authored
      Original email...
      
      Date: Sat, 10 Sep 2011 13:01:20 -0500
      From: Patrick Baggett
      To: SDL Development List <sdl@lists.libsdl.org>
      Subject: Re: [SDL] SDL_memcpyMMX uses SSE instructions
      
       In SDL_blit_copy.c, the function SDL_memcpyMMX() actually uses SSE
       instructions.
      
      It is called in this context:
      
      #ifdef __MMX__
          if (SDL_HasMMX() &&
              !((uintptr_t) src & 7) && !(srcskip & 7) &&
              !((uintptr_t) dst & 7) && !(dstskip & 7)) {
              while (h--) {
                  SDL_memcpyMMX(dst, src, w);
                  src += srcskip;
                  dst += dstskip;
              }
              _mm_empty();
              return;
          }
      #endif
      
      This implies that the minimum CPU features are just MMX. There is a
      separate SDL_memcpySSE() function.
      
      
      The SDL_memcpyMMX() function does:
      
      #ifdef __SSE__
              _mm_prefetch(src, _MM_HINT_NTA);
      #endif
      
       ...which tests at compile time whether SSE intrinsics are available, not at
       run time. It generates the PREFETCHNTA instruction. It also uses the
       _mm_stream_pi() intrinsic, which generates the MOVNTQ instruction.
      
      If you replace the "MMX" code with:
      
       __m64* d64 = (__m64*)dst;
       __m64* s64 = (__m64*)src;
       for (i = len / 64; i--;) {
           d64[0] = s64[0];
           d64[1] = s64[1];
           d64[2] = s64[2];
           d64[3] = s64[3];
           d64[4] = s64[4];
           d64[5] = s64[5];
           d64[6] = s64[6];
           d64[7] = s64[7];
           d64 += 8;
           s64 += 8;
       }
      
      Then MSVC generates the correct movq instructions. GCC (4.5.0) seems to
      think that using 2x movl is still better, but then again, GCC isn't actually
      that good at optimizing intrinsics as I've found. At least the code won't
      crash on my P2 though. :)
      
       Also, there is no requirement for MMX operands to be 8-byte aligned. I
       think the author assumed that SSE's 16-byte alignment requirement must
       retroactively mean that MMX requires 8-byte alignment. Attached is the
       full patch.
      
      Patrick
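      As an aside on the compile-time versus run-time point above, a minimal
      sketch of the distinction might look like the following. This is an
      illustration only, not SDL source or the attached patch, and
      maybe_prefetch() is a hypothetical name: the #ifdef only says the
      compiler is allowed to emit the instruction, while SDL_HasSSE() reports
      whether the CPU actually being run on can execute it.

       #include "SDL_cpuinfo.h"        /* SDL_HasSSE() */
       #ifdef __SSE__
       #include <xmmintrin.h>          /* _mm_prefetch(), _MM_HINT_NTA */
       #endif

       static void maybe_prefetch(const void *p)
       {
       #ifdef __SSE__                  /* compiler can emit SSE instructions */
           if (SDL_HasSSE()) {         /* and this CPU can execute them */
               _mm_prefetch((const char *) p, _MM_HINT_NTA);
           }
       #else
           (void) p;
       #endif
       }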
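      And for reference, a self-contained sketch of the pure-MMX copy loop the
      email describes might look like this. It is an illustration, not the
      attached patch: copy_row_mmx() and the memcpy() handling of the tail
      bytes are assumptions, and the caller is still expected to issue
      _mm_empty() after the blit loop, as in the dispatch code quoted above.

       #include <string.h>             /* memcpy() for the tail bytes */
       #include <mmintrin.h>           /* __m64 */

       /* Copy len bytes using only 64-bit MMX moves (MOVQ under MSVC,
          per the email), 64 bytes per iteration. */
       static void copy_row_mmx(void *dst, const void *src, size_t len)
       {
           __m64 *d64 = (__m64 *) dst;
           const __m64 *s64 = (const __m64 *) src;
           size_t i;

           for (i = len / 64; i--;) {
               d64[0] = s64[0];
               d64[1] = s64[1];
               d64[2] = s64[2];
               d64[3] = s64[3];
               d64[4] = s64[4];
               d64[5] = s64[5];
               d64[6] = s64[6];
               d64[7] = s64[7];
               d64 += 8;
               s64 += 8;
           }

           /* len need not be a multiple of 64; copy whatever remains. */
           memcpy(d64, s64, len & 63);
       }

      Note also the email's last point: MOVQ has no alignment requirement, so
      the 8-byte alignment tests in the dispatch code quoted above are stricter
      than MMX itself needs.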
  6. 10 Sep, 2011 1 commit
  7. 09 Sep, 2011 5 commits
  8. 08 Sep, 2011 1 commit
  9. 07 Sep, 2011 2 commits
  10. 05 Sep, 2011 3 commits
  11. 04 Sep, 2011 1 commit
  12. 02 Sep, 2011 3 commits
  13. 01 Sep, 2011 4 commits
  14. 30 Aug, 2011 1 commit
  15. 29 Aug, 2011 2 commits
  16. 28 Aug, 2011 3 commits