UNSPEC_PALIGNR optimizations and clean-ups on x86.

This patch is a follow-up to Hongtao's fix for PR target/105854.  That
fix is perfectly correct, but the thing that caught my eye was why
the compiler was generating a shift by zero at all.  Digging deeper,
it turns out that we can easily optimize __builtin_ia32_palignr for
alignments of 0 and 64, which may be simplified to moves of the
lowpart and highpart respectively.
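
To illustrate (this example is mine, not code from the patch): the
SSSE3 intrinsic _mm_alignr_pi8 expands to __builtin_ia32_palignr with
its byte count scaled to bits, so the two boundary cases below (the
function names are hypothetical) each reduce to a 64-bit move.

#include <tmmintrin.h>

/* Bits 0..63 of the concatenation hi:lo, i.e. lo.  */
__m64
align_lo (__m64 hi, __m64 lo)
{
  return _mm_alignr_pi8 (hi, lo, 0);
}

/* Bits 64..127 of the concatenation hi:lo, i.e. hi.  */
__m64
align_hi (__m64 hi, __m64 lo)
{
  return _mm_alignr_pi8 (hi, lo, 8);
}

With this patch, compiling the above with -mssse3 should no longer
emit a palignr at all, just a single move (and only if the source and
destination registers differ).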

After adding optimizations to simplify the 64-bit DImode palignr, I
started to add the corresponding optimizations for vpalignr (i.e.
128-bit).  The first oddity is that sse.md uses TImode and a special
SSESCALARMODE iterator, rather than V1TImode, and indeed the comment
above SSESCALARMODE hints that this should be "dropped in favor of
VIMAX_AVX2_AVX512BW".  Hence this patch includes the migration of
<ssse3_avx2>_palignr<mode> to use VIMAX_AVX2_AVX512BW, basically
using V1TImode instead of TImode for 128-bit palignr.
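
As a sketch of the 128-bit analogue (again not code from the patch;
the function name is made up): _mm_alignr_epi8 expands to
__builtin_ia32_palignr128, which after this change goes through the
V1TImode pattern, treating each xmm operand as a single 128-bit
element rather than a scalar TImode value.

#include <tmmintrin.h>

/* Bytes 4..19 of the 256-bit concatenation hi:lo.  */
__m128i
concat_window (__m128i hi, __m128i lo)
{
  return _mm_alignr_epi8 (hi, lo, 4);
}

The migration is a clean-up, so code generation for a case like this
one should be unchanged.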

This patch has been tested on x86_64-pc-linux-gnu with make bootstrap
and make -k check, both with and without --target_board=unix{-m32},
with no new failures.  Ok for mainline?

2022-07-05  Roger Sayle  <roger@nextmovesoftware.com>
	    Hongtao Liu  <hongtao.liu@intel.com>

gcc/ChangeLog
	* config/i386/i386-builtin.def (__builtin_ia32_palignr128): Change
	CODE_FOR_ssse3_palignrti to CODE_FOR_ssse3_palignrv1ti.
	* config/i386/i386-expand.cc (expand_vec_perm_palignr): Use V1TImode
	and gen_ssse3_palignrv1ti instead of TImode.
	* config/i386/sse.md (SSESCALARMODE): Delete.
	(define_mode_attr ssse3_avx2): Handle V1TImode instead of TImode.
	(<ssse3_avx2>_palignr<mode>): Use VIMAX_AVX2_AVX512BW as a mode
	iterator instead of SSESCALARMODE.
	(ssse3_palignrdi): Optimize cases where operands[3] is 0 or 64,
	using a single move instruction (if required).

gcc/testsuite/ChangeLog
	* gcc.target/i386/ssse3-palignr-2.c: New test case.