When dealing with long string fields, you also have to consider basic block size. This might sound old-fashioned, but plenty of clients run on older hardware and need to account for resource usage. Keys and indexes matter too, since they may be composed of multiple fields, and adding big text fields to them makes IO handling much less efficient. Honestly, using NVARCHAR for data that is purely ASCII just wastes IO resources: every character takes two bytes instead of one.
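A quick sketch of that per-character cost, assuming SQL Server-style semantics where VARCHAR stores one byte per ASCII character and NVARCHAR stores two (UCS-2/UTF-16); the sample value is made up for illustration:

```python
text = "PRODUCT-12345"  # a typical ASCII-only code field

# VARCHAR-like storage: 1 byte per character
varchar_bytes = len(text.encode("latin-1"))

# NVARCHAR-like storage: 2 bytes per character (UTF-16)
nvarchar_bytes = len(text.encode("utf-16-le"))

print(varchar_bytes, nvarchar_bytes)  # 13 vs 26
```

For a pure-ASCII code field, NVARCHAR literally doubles the bytes read and written for the same content.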
We typically reserve NVARCHAR for open-ended text fields (descriptions, etc.). For code fields, it depends on your specific circumstances: NVARCHAR supports Unicode without problems, but it isn't necessary if the source system is older and only produces ASCII strings. On the other hand, you'll almost certainly want NVARCHAR for data from countries whose languages use non-ASCII characters, such as Russia, China, or the Czech Republic (Czech uses Latin letters with diacritics).
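One cheap way to check whether a given value actually needs Unicode storage is to try encoding it as ASCII; `needs_nvarchar` is a hypothetical helper of mine, not something from the original post:

```python
def needs_nvarchar(value: str) -> bool:
    """Return True if the value contains characters outside ASCII."""
    try:
        value.encode("ascii")
        return False
    except UnicodeEncodeError:
        return True

print(needs_nvarchar("Prague"))           # plain ASCII -> False
print(needs_nvarchar("Česká republika"))  # 'Č' and 'á' are non-ASCII -> True
print(needs_nvarchar("Россия"))           # Cyrillic -> True
```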
What we usually do is pre-define a few standard string lengths (10, 20, 50, 100, 250, 1000 and 4000) and assign each field to a bucket based on its current maximum length. If that length is close to the bucket's limit, we move it up a level.
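The bucketing rule above can be sketched like this. The "close to the limit" threshold (here, 80% of the bucket) is my assumption; the post doesn't give an exact cutoff:

```python
STANDARD_LENGTHS = [10, 20, 50, 100, 250, 1000, 4000]

def pick_length(max_observed: int, headroom: float = 0.8) -> int:
    """Pick the standard column length for an observed max field length."""
    for i, limit in enumerate(STANDARD_LENGTHS):
        if max_observed <= limit:
            # If the observed max is near this bucket's limit,
            # move up a level (when a bigger bucket exists).
            if max_observed > limit * headroom and i + 1 < len(STANDARD_LENGTHS):
                return STANDARD_LENGTHS[i + 1]
            return limit
    return STANDARD_LENGTHS[-1]  # cap at the largest standard length

print(pick_length(35))  # fits comfortably in 50
print(pick_length(48))  # 48 > 80% of 50, so bump to 100
```

Leaving headroom like this avoids a schema change the first time a source value grows by a character or two.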