That is what I did say: chips use more amps when overclocked.
Voltage control is for undervolting, which I also said.
Basic electronics: the voltage is set by the thing providing power, and the amount of current drawn is set by the thing using the power (up to the limit of the thing providing it). If you draw too many amps the voltage can drop, but that isn't really a way of controlling the voltage. Certainly on a Switch you wouldn't want to then try to increase the voltage to counteract that.
I'm not convinced you know a whole lot about "basic electronics" based on what you've said, not that it matters. You seem to be under the impression that overclocking is achieved by some kind of current control, that the current is actively "set" somehow (at least that's what it seems like), which is not at all how it works. An overclocked chip can possibly (not necessarily; manufacturers often leave a lot of headroom on the table) require more current, but only by proxy. I'll post a more detailed explanation of how power delivery works, without getting too far into the engineering side of things, to keep it understandable for any future visitors to this thread.
The "amount of amps" isn't "set" by anything in a typical CPU/GPU/SoC arrangement. Power delivery is controlled entirely via voltage regulation, because that's the simplest way to achieve the desired result, and this applies to nearly all of the consumer electronics you could think of, including the Switch. You can't "increase the current" without increasing the voltage; a conductor will pass as much current as its resistance permits, that's just physics. The actual, real-life current flow is load-dependent: either there is a load or there isn't. In peak demand conditions, when the circuit draws as much as it possibly can, passing more current *requires* increasing the voltage, and consumer electronics do that automatically, without your input, within parameters specified by the manufacturer. Conversely, if a chip is idling and very little current flows, you can afford to drop the voltage, and consumer electronics do that as well. The chip's consumption directly correlates with the load, but it is not actively "controlled"; voltage is.

A processor is just a bunch of transistors formed into logic gates and etched into silicon. Either they're actively opening/closing, and consuming electricity to do so, or they're sitting idle, in which case there's very little load and the current flow drops, *regardless* of operating voltage. Current cannot flow within a circuit if nothing in it actively demands it. Anyone can test this by hooking up any electronic device to a Kill A Watt and observing its power consumption in relation to the workload the device is performing.
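To put numbers on the load-dependence point above, here's a toy sketch modelling the workload as an effective resistance on a fixed rail. The values are purely illustrative, not real chip figures:

```python
# The supply fixes the voltage; the load's effective resistance
# determines how much current actually flows (Ohm's law, I = V / R).
# Resistance figures below are made up for illustration.

def current_draw(voltage_v: float, load_resistance_ohm: float) -> float:
    """Ohm's law: I = V / R."""
    return voltage_v / load_resistance_ohm

# Same 1.0 V rail, two different workloads:
idle_load_ohm = 10.0   # few transistors switching -> high effective resistance
peak_load_ohm = 0.1    # heavy workload -> low effective resistance

print(current_draw(1.0, idle_load_ohm))  # 0.1 A at idle
print(current_draw(1.0, peak_load_ohm))  # 10.0 A under load
```

Same voltage, a hundred times the current, and the supply "set" nothing; the load did.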
In practice, power delivery is handled by a voltage regulator module, or VRM. A basic VRM consists of a MOSFET (let's keep it simple and call it a transistor, an electronic on/off switch), an inductor to stabilise current (a coil of wire, usually encased in a ceramic package), a capacitor (nowadays solid state, though you still see the occasional electrolytic) to stabilise voltage, and some kind of voltage regulating controller. The first component dictates the voltage in the circuit by rapidly switching, the next two provide smoothing, because computer chips don't like current or voltage spikes, and the last uses built-in logic to calculate the switching behaviour required, as requested by the processor in question. You have two active components, the transistor and the controller, and two passive ones, the inductor and the capacitor. By changing what fraction of the time the transistor is switched on (the duty cycle), the circuit can output higher or lower voltage, and thus supply more or less power to the chip, based on demand. In layman's terms, if you supply a VRM with an input voltage of 5V, but the switching is such that the VRM is only in the "on" state 1/5th of the time, the output of that VRM will equal 1V, and the maximum current will depend on the resistance of the circuit - I = V/R, current equals voltage divided by resistance.
As far as current control goes, the only thing you can usually observe in a setup like this is overcurrent protection. It effectively functions as a fuse which trips when current grossly exceeds the spec, but it's usually located closer to the input, not at the VRM itself. The reason is that it's kind of irrelevant *where* your overcurrent protection sits - draw is draw. Meanwhile, for the purposes of maintaining precision (avoiding noise, voltage drop, etc.), the VRM is located close to the chip. It's in the name - Voltage Regulator Module. It's the voltage that is actively controlled; higher or lower headroom (not flow) for current is the consequence of regulating said voltage. Voltage control is not just for "undervolting" - that's one possible use case. Oftentimes you'll find that a chip operates exactly the same at a lower voltage, so there's no point in supplying a higher one than needed; it's a matter of fine-tuning. More importantly, voltage control is there for the circuit to operate at all - without it you would have a static voltage, which is incredibly wasteful and far from optimal since, as I've mentioned before, you're aiming for the lowest possible voltage required to operate at any given moment.
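Overcurrent protection as described here is essentially just a threshold check, not active regulation. A toy model (the 20A limit is a made-up figure, not any real device's spec):

```python
# Toy model of overcurrent protection: trip when measured draw
# grossly exceeds the spec, wherever in the circuit it's measured.
# The limit below is an arbitrary example value.

OCP_LIMIT_A = 20.0

def ocp_tripped(measured_current_a: float) -> bool:
    """Fuse-like behaviour: cut power past the limit, otherwise do nothing."""
    return measured_current_a > OCP_LIMIT_A

print(ocp_tripped(15.0))  # False - within spec, OCP stays out of the way
print(ocp_tripped(25.0))  # True - gross over-draw, protection trips
```

Note that between 0A and the limit it does nothing at all; there's no "setting" of current anywhere in that range.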
The reason for this setup is simple, and it's remained unchanged for god knows how long. A typical CPU/GPU/SoC works within very tight voltage parameters, usually somewhere between 1V and 2V, closer to 1V. Even a change of a few millivolts can drastically alter how the circuit operates, because we're talking about billions of transistors on a very small silicon die; it's all highly sensitive (hence the smoothing). That's obviously not the supply voltage, which is much higher, so you need some kind of circuit to take whatever supply you've got and buck it down to the desired level, with extreme precision and the ability to adjust dynamically. The VRM does just that. By supplying a higher voltage you can overcome resistance more easily, and as a consequence more current *can* flow and more power can be delivered - but that doesn't necessarily mean it will; it depends on whether it's needed. In this regard, the Switch is no different. Its power delivery is, fundamentally, built in the same manner as what you'd see on a desktop computer or any other similar consumer device.
Long post, so tl;dr - an overclocked chip will likely have a higher peak power consumption than a chip running at stock frequency; it's doing more work, or the same amount of work in a smaller amount of time. This may or may not mean higher current - that depends on the voltage it's running at and the workload it's performing. 15A at 1V equals 15W, yet 12.5A at 1.2V *still* equals 15W. Higher voltage, lower current, the exact same amount of power - so clearly this is a matter of demand versus what can be supplied more than anything else, aforementioned waste aside. It is likely that overall consumption will be higher, but that's a statement on power, not on current. A blanket statement like "it consumes more amps" is misleading; amperage is not a measurement of power, it's a measurement of current. One can't make a claim like that without explaining the circumstances or understanding the circuit, because it's not always true.
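The tl;dr arithmetic, spelled out - P = V × I, so different voltage/current pairs can deliver identical power:

```python
# Electrical power is voltage times current: P = V * I.
# The two cases from the text: same 15 W, different amp figures.

def power_watts(voltage_v: float, current_a: float) -> float:
    """P = V * I."""
    return voltage_v * current_a

print(power_watts(1.0, 15.0))   # 15.0 W
print(power_watts(1.2, 12.5))   # 15.0 W - higher voltage, lower current
```

Which is exactly why "more amps" on its own tells you nothing about consumption without knowing the voltage.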