In my opinion, expertise in computer languages and tooling can be tested in better ways than sites like JavaBlackBelt do.
Exam questions in JavaBlackBelt and comparable systems tend to be
- multiple choice
- answerable by searching the documentation or online forums
- answerable by copy-paste-compilation
- booby-trapped ("mined") with tiny syntax errors
How I would test knowledge instead:
- present a UML schema of a system
- let the answer be a region of grid cells within the image or a set of related entities/connections in the architecture
- ask where the system is least efficient, contains an anti-pattern, is overly complex or restrictive, or some other question requiring insight that cannot be found by searching online
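The grading mechanism for such a diagram question could be sketched as follows. This is only an illustration under assumptions of my own: the grid-cell names, the Jaccard-overlap scoring, and the 0.8 pass threshold are all hypothetical choices, not part of any existing exam system.

```python
# Hypothetical sketch: grading a "select the problem region" question by
# comparing the candidate's chosen grid cells against a reference region.
# Cell labels, Jaccard scoring, and the 0.8 threshold are assumptions.

def jaccard(selected, reference):
    """Overlap score between two sets of grid cells (1.0 = identical)."""
    if not selected and not reference:
        return 1.0
    return len(selected & reference) / len(selected | reference)

def grade(selected, reference, threshold=0.8):
    """Pass if the selected region mostly matches the reference region."""
    return jaccard(selected, reference) >= threshold

# Reference answer: the cells covering the inefficient part of the diagram
reference = {"B2", "B3", "C2", "C3"}

print(grade({"B2", "B3", "C2", "C3"}, reference))  # exact match -> True
print(grade({"B2", "C2"}, reference))              # half overlap -> False
```

Scoring by overlap rather than exact match lets a candidate be slightly imprecise with the mouse while still demonstrating they found the right part of the architecture.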
The qualities such a test should probe:
- do you know your ultimate goal when you automate something?
- are you familiar with idioms, patterns and anti-patterns? (not reinventing the wheel)
- can you forecast program usage, user issues and refactoring steps?
- do you have an eye for bottlenecks in system configurations?
- do you think small or think big: using minimal resources (mobile app) versus building without limits (cloud service)?
- how do you go about building solutions (human + software) that CAN guarantee outcomes, no matter how imperfect the underlying resources?
- are you a human symbolic logic parser? :)
- can you cut the meaningless parts out of design/requirements docs?
- do you have the mindset for fearless and natural paradigm shifts?
- have you got a feel for when to break the rules?