Three links bundled together as an EtherChannel will yield 3Gbps of aggregate bandwidth, but no single session can exceed the speed of one member link (1Gbps). EtherChannel is not like Point-to-Point Protocol (Multilink PPP), which can take one session and run it across multiple links at the same time (if configured to do so), or like ATM inverse multiplexing, which can do pretty much the same thing.
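To see why one session is capped at a single link's speed: the switch hashes header fields of each frame to pick a member link, so every frame of the same conversation lands on the same physical link. A rough sketch (the hash function here is made up for illustration, not Cisco's actual algorithm):

```python
# Hypothetical illustration of EtherChannel link selection: one physical
# link is chosen per conversation by hashing frame header fields, so a
# single session is pinned to one 1Gbps member link.

def choose_link(src_mac: str, dst_mac: str, num_links: int = 3) -> int:
    """Pick a member link index by XORing the low bytes of the two MACs."""
    low_src = int(src_mac.split(":")[-1], 16)
    low_dst = int(dst_mac.split(":")[-1], 16)
    return (low_src ^ low_dst) % num_links

# The same MAC pair always hashes to the same link, so one session can
# never use more than one member link's worth of bandwidth.
first = choose_link("00:11:22:33:44:55", "aa:bb:cc:dd:ee:01")
again = choose_link("00:11:22:33:44:55", "aa:bb:cc:dd:ee:01")
assert first == again
```

Different conversations (different MAC pairs) can land on different links, which is how the bundle gets to 3Gbps in aggregate.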
Each link should operate at full duplex; there's no good reason I can think of to run a switch-to-switch connection at half duplex.
Make sure all three Gigabit links are configured identically (that is, all the same speed and duplex; all the same access VLAN or all VLAN trunk ports).
Regarding question 2:
A single Gigabit uplink will be blocking coming out of that 3550 only if you send more than 1Gbps of traffic into the pipe. That would be one really fast server with a Gig NIC, or ten (10) fast workstations with 10/100 NICs running at 100 Full Duplex.
You can jam only so many concurrent bits per second into a Gig pipe at any given time. If it's a 24-port 10/100 3550, this won't be as bad a problem (unless you're using it for a heavily accessed server farm, in which case it could be an issue). If it's a 48-port switch, congestion could be noticeable or frequent.
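The back-of-the-envelope math behind that (assumed port counts and speeds, not measurements from any particular network):

```python
# Worst-case oversubscription of a single Gigabit uplink on a 48-port
# 10/100 switch, with every access port busy at 100Mbps full duplex.
ports = 48
port_speed_mbps = 100      # 10/100 ports running at 100 full duplex
uplink_mbps = 1000         # one Gigabit uplink

worst_case_offered_mbps = ports * port_speed_mbps  # 4800 Mbps
oversub_ratio = worst_case_offered_mbps / uplink_mbps
print(oversub_ratio)  # 4.8, i.e. a 4.8:1 oversubscription ratio
```

In practice all 48 ports rarely run flat out at once, but the higher that ratio, the more often the uplink will be the bottleneck.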
I concur with everything that konigl said, with the following clarifications:
1) The 3550 appears to support load distribution based on MAC address only. This means that one or two links in the bundle will likely be more heavily utilized than the others for traffic LEAVING the 3550. For traffic coming back, the OTHER switch decides which links to use. If the other switch supports load distribution by source or destination IP address, or by TCP/UDP port numbers, then the return traffic is more likely to be distributed evenly across the links.
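A quick sketch of why hashing on more header fields spreads flows better. The hash functions are illustrative, not the switch's real algorithm; the scenario is many client flows reaching one server through a router, so every frame carries the same source (router) and destination (server) MACs:

```python
# Compare MAC-only load distribution against a hash that also mixes in
# TCP/UDP port numbers. Hashes are illustrative, not Cisco's algorithm.
from collections import Counter

NUM_LINKS = 3

def mac_only(src_low: int, dst_low: int) -> int:
    return (src_low ^ dst_low) % NUM_LINKS

def with_l4_ports(src_low: int, dst_low: int, sport: int, dport: int) -> int:
    return (src_low ^ dst_low ^ sport ^ dport) % NUM_LINKS

# Routed traffic to one server: every frame has the router's source MAC
# and the server's destination MAC, so the MAC-only hash never varies.
router_mac_low, server_mac_low = 0x01, 0x02

mac_choices = Counter(
    mac_only(router_mac_low, server_mac_low) for _ in range(1000))
port_choices = Counter(
    with_l4_ports(router_mac_low, server_mac_low, 49152 + i, 80)
    for i in range(1000))

print(len(mac_choices))   # 1  -> all 1000 flows pile onto one link
print(len(port_choices))  # 3  -> flows spread across all three links
```

This is the same point as above: the switch doing the transmitting decides the distribution, and the fewer distinct inputs its hash sees, the more skewed the bundle utilization will be.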
The 3550 has many clients and is connected to another 3550 which is in turn connected to the 4006 (which, unlike your scenario, is also channeling) which is connected to many servers.
The traffic (typically requests) flowing to the servers may not be spread evenly across all links. But requests are typically small, so it should not be noticeable.
The traffic flowing from the servers back to the clients could be spread more evenly across the links if the 4006 were to support dst-port load distribution.
But this depends on your switch and software.
2) It will be a 'blocking architecture' only if the amount of traffic destined for a link exceeds that link's bandwidth. Having 3Gb between the 3550s but only 1Gb to the rest of your network may not buy you much if the bulk of your traffic flows between the 3550s and the 4006.