It does make sense, because there is no room for ZFS to stripe the data across the existing vdevs. On the other hand, if you add 2 new vdevs, then newly written data should be striped across the new vdevs.
I guess it pays to monitor pool usage and plan for a capacity increase before the pool is completely full.
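For example, something along these lines should do it (a sketch only; the pool name tank and the cXtYdZ device names are hypothetical placeholders, not from this thread):

    # Add two new mirror vdevs in one step; ZFS stripes newly
    # written data across them. Existing data stays on the old
    # vdevs until it is rewritten.
    zpool add tank mirror c2t0d0 c2t1d0 mirror c3t0d0 c3t1d0

    # Verify the new layout
    zpool status tank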
I read through the ZFS admin guide and am still not clear on disk failure.
Let's say I have a raidz pool with 6 drives, configured as follows:
    NAME        STATE   READ WRITE CKSUM
    rzpool      ONLINE     0     0     0
      raidz-0   ONLINE     0     0     0
        disk1   ONLINE     0     0     0
        ...
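For reference, a pool laid out like that could have been created with something like this (a sketch; disk1 through disk6 stand in for the actual device names):

    # Create a pool with one 6-disk raidz vdev; a raidz vdev
    # can survive the loss of any single disk in that vdev
    zpool create rzpool raidz disk1 disk2 disk3 disk4 disk5 disk6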