Game Design, Programming and running a one-man games business…

Solar Farm: 3rd site visit during construction

In case you didn’t know, I run a small energy company and am building a small solar farm with the money I made from selling video games. Here is the company website: http://www.positechenergy.com

We made another trip to the site yesterday. It’s a 350-mile round trip, and part of it was in the rain. This was the first time we visited in bad weather, but to be honest it wasn’t TOO bad. The site is basically the crest of a hill, so drainage is excellent, and it wasn’t too muddy. Despite this, they were still transporting solar panels on pallets using tracked vehicles, because…mud is still a thing.

I bought these solar panels about 8 months ago, in a fit of enthusiasm to push the project forwards. This turned out to be madness, as I then had to store 3,000 of them in a warehouse at huge expense. The panel prices did then rise…but then fell. I think overall it was a bad decision, but not catastrophic. Despite owning solar panels all this time, I had never seen them until this trip. Also, 60 tons of solar panels sounds like a lot, but does it look like a lot in person…?

The answer is YES. It does. It’s a lot of boxes, and that’s not even all of them; a lot were already fitted to the frames. It’s multiple big articulated lorries full of them. On the plus side, I am no longer getting monthly storage bills. On the negative side, I had to pay £700 to the warehouse to load them onto the truck. This sounded like a lot, but it is a lot of panels so… I guess it’s understandable.

The real excitement for this trip was to see panels actually on frames, to get an idea of what the finished project will look like. Our first impressions were that they were being fitted pretty quickly, and that the frames are really high. No danger of long grass obscuring the bottom of a panel! (A disaster when that happens, as it effectively shadows the whole panel, and indeed the whole string).

I posted that picture to give some idea of scale. That’s not a complete row of panels; we currently have a ‘gap’ awaiting the moving of the HV cable. Once that is moved we can fill it in and have longer rows. You can just about see that a few rows are now double-panelled, and on some they have just done the bottom row. The bottom row goes on first, then they come back with a sort of frame to stand on so they can access the top row and fit those. There was a team of 4 people working together to mark out the locations, attach the fixtures, and then place the panels on the rails. While they do this, fresh panels are delivered on pallets to points along each row, ready to be fitted.

At this point, the panels are attached, but there is no wiring. A separate team of people (electricians this time) are connecting the panels together to form strings which eventually get wired to the inverters at the end of some rows.

That’s one of the inverters already mounted. It’s a 100kW inverter, so about 25x the power of the kind you have if you get panels on your house. The box to the right is the emergency cut-out switch. All that metal stuff underneath is just a bracket to attach a LOT of DC cabling to the inverter, which will run at head height along the length of the panels. In cases where the inverter is connected to panels on another row, there is underground armored cable & ducting to bring all the DC cabling together. Further underground ducts will connect each inverter to the substation using AC cable, but that work has not started yet. The substation design is holding everything up!

A picture to show the scale of each solar table next to a mere human. As someone with a small ground-mount array of 10 panels in his garden, I find it surprising how high up these are, and how tall the top edge of the upper panel is. They are fitted two rows high, in portrait mode (some farms are landscape), and each panel is ‘split’ into two, hence the white line in the middle. They are effectively 2 panels each, and have 2 connectors on the back of each one. Each panel outputs 410 watts.

and lastly…

When people repeat stupid oil-company propaganda nonsense about losing farming land to solar panels, I’ll be tempted to reply with this. The grass is still very appealing to the sheep, and there’s plenty of room between and under the panels for them to graze. Frankly, on a day like yesterday the sheep probably thought the shelter was awesome. Technically the sheep are supposed to be out of the way during construction, mostly for their safety, but they got in somehow, and it turns out getting them out again isn’t easy. They seem to co-exist with the construction site pretty happily :D.

There will likely be a delay of a few weeks before I go back, unless something exciting happens. There are a lot of panels to attach, and then a LOT of wiring to be done, which is still a manual procedure. There is no clever automation or robotics that can do this yet. Actual humans have to walk to each panel, grab a cable, plug it in, then probably cable tie the cabling nicely out of the way, so it stays there in thunderstorms for 25 years. Maybe one day Teslabots will do this, but not this year.

The real hold up on this project has been the grid connection. First it was planning, then grid connection. I could write a whole epic space opera about how much grief it has been. I HOPE that we are now zeroing in on final agreement as to how everything will work, and thus we can a) start building the earthing mesh and connections for the substation and b) get a date for the DNO to move that overhead cable so we can fill-in the final mounting frames and tables and have everything connected.

The project is still a cause of daily worry and stress, because it involves literally dozens of people talking to each other in email threads that are contradictory, out of synch and confusing. Hopefully things get much better soon.

Meanwhile, a reminder of how important it is that we do projects like this.

Solar farm: 2nd site visit (July 2023)

A week after the first visit to the solar farm, we decided to head back. It’s a LONG drive, but a lot had changed, and the next week looks like bad weather, so I thought I’d grab the opportunity to take another look.

There is now a LOT of metal in the field! When you see a completed solar farm, it looks like just rows and rows of glass and plastic panels, but what is mostly hidden from you is the amount of metal support needed to keep it all in place. As I’ve mentioned before, you need to be sure that everything stays firmly in the ground for 25 years minimum, so there are no half measures.

With the frames in place now for most of one of the 3 zones, we get a true feel for how large each of the groups of panels (called ‘tables’) is. They are much bigger than the small array I have in my garden :D. I can just about touch the top of the main support posts if I’m on tip-toes.

Now that the frames are in, you get a much clearer picture of how undulating the land is. This looks like a few long rows of panels, but it’s actually not the full width. We won’t be able to finish any of the rows of panels until the overhead high-voltage cable has been buried, because you can’t be piling tall metal posts in that close to an 11,000 volt cable. Hopefully we get an actual date for that very soon!

In a perfect world, the field for this farm would be completely flat, so the layout of the panels would be very uniform, with one inverter mounted at the end of each row, and each group of DC cables from the tables running along the top, at head height (so we don’t electrocute any sheep!) and they would plug right into the end-row inverters. Then a single trench would run along all the inverters taking their AC cables underground (armored, with sand laid on top, then topsoil, so totally buried and safe) to the substation.

Because this field is NOT flat, you need to wire the tables in groups that will get the same level of sunshine at all times. Because of the hill, at sunset some panels in one region might be getting more sun than others. This means that each row is NOT a single inverter; instead, varied clumps of panels are wired to each inverter. That means SOME cabling in that gap between the rows you see above. That will be DC cabling, but it will also need to be buried safely underground. It’s an annoying extra complication.

I was very pleased with the progress in the 7 days between site visits. I’m still very nervous about the timing of getting the substation built. We cannot do this until the earthing is buried, and we cannot bury the earthing until the design is complete. We HAVE now paid the connection cost to the grid in full (ouch), but still have neither confirmation that it’s been received, nor a set date for the earthing. I’ll have to try chasing that up soon, which I hate doing.

Solar farm building is 45% planning permission, 45% chasing people to do stuff, and 10% paperwork and spreadsheets. It’s a horribly stressful business, and it’s unusual for it to fall ultimately on an individual who owns a company. I suspect most people making the financial decisions for solar farms are accountants at big multinationals, for whom it’s not *their money*. I do not recommend this path as easy money or a quiet life!

In 4 days’ time panels will start to arrive and get fitted, which will make the site look much more finished and like a real solar farm. We already have the inverters (now back to Solis instead of Huawei. Don’t ask) ready to fit. BTW, the inverters are BIG. I had it in my mind that they MUST be big, yet in pictures they look like domestic ones. In person, no. Each pair is on a pallet and is the size of a chest freezer.

I’ll probably not be at the site next week, but will go back again after that to view the panels and inverters for myself. I like visiting the site and would go there twice a week if it was local, but 8 hours of driving in a day is a bit much, even if I can do about 80% of that on autopilot in my car. How anybody drives a manual car with gears and without cruise control or autopilot for that long each day is beyond me! (I used to do that, but that was in my twenties).

BTW, just in case you think I am nuts for doing this, and that I should just relax instead, here is a reminder why I am doing it.

First site visit to the in-construction solar farm!

Today we drove 8 hours (4 hours from my house to site, 4 hours back) to visit the solar farm for the first time since we actually started work, and only the second time ever. We visited the morning after we got planning permission, but that was about 9 months ago now, which is crazy but true. At last, stuff is actually happening, and I wanted to see it for myself!

Amusingly, one of the benefits of visiting the site while it’s being built is that there are two signs that say ‘site traffic’ which you can follow. It’s REALLY hard to find otherwise. It’s so tricky that even with the postcode you can go the wrong way. Last time we blundered around for ages looking for the right field, but luckily this time we could just follow the signs, even the amusingly amateur ‘solar’ sign to make it clear we are at the right field :D.

Apparently the gate just next to this had to be widened to allow some of the bigger trucks to get into the site. We then have all the excitement of our new road! We built this road, and it will be there as a permanent access road to the finished site. It’s not exactly a tarmacked motorway, but it’s actually not too bad.

At the end of this road we have a temporary construction area, where a metal interlinked floor has been laid down (which took a whole day), so that HGVs can drive in, and reverse and get out again without destroying the field or getting stuck in mud. Apparently you can put 100 tons on each section of this stuff.

That green box is actually pretty cool: it’s a diesel generator plus kitchen plus office space, all in a snazzy prefab unit that you can just drop on site as a kind of instant construction-site HQ. There were plans on the walls showing the site layout, and the most important pre-construction hardware: a kettle. There are some other shipping containers used for secure storage for stuff like the inverters, when they show up; they are currently packed with sacks and sacks of panel-attachment fixings.

The rest of the site consists of lots and lots of rows of metal posts, and 2 tracked machines that basically repeatedly drop big heavy weights in a controlled way to bash metal posts very VERY firmly into the soil:

These are the main posts that form the chunkiest part of the frames. They are taller than they look in this picture, and pretty thick. There are also connecting pieces that will define the slope that the panels will rest on, then finally the rails that will connect them all together so that panels can be attached to them. At the moment, it’s just a matter of bashing the posts in. They are aiming to get 90 of them done per day, and we need a lot of them. I was told we have another 6 people joining the team on Monday, and a week later the panels will be on site being fitted. It’s going to move pretty quickly from here on. Also, they are in the middle of building the ‘stock fence’ which will be used to manage sheep so they can graze the other half of the field during construction. There is also a ‘deer fence’ that will form the entire perimeter of the site, and eventually some metal gates and a substation! Also CCTV masts.

That’s me trying to look like I do this all the time. Those two rows of cones define a zone of the field we cannot currently work on, because an 11,000 volt power cable is overhead. You really don’t want the pile-driver top to accidentally touch it! Soon (annoyingly, we are not sure when), that power line will be buried by the DNO in a trench around the exterior of the site, and re-emerge near the substation. Currently, people are working either to the east of the line or the west, but ignoring the middle bit until the cable is gone. It’s a logistical pain in the ass, but it’s what we have to do in order to be working now rather than waiting for the DNO. We have waited long enough, so it’s really time to get building now.

I think these are the connecting frame bits, rather than the posts, but I’m not 100% sure TBH. There is a LOT of metal on the site. There is a surprising amount of stuff required to build a solar farm that is not solar panels. The big problem you have is longevity. Sure, you can bang any old metal post in a hole and screw a solar panel to it, but the issue is ensuring that it’s going to stay solid and upright for 25 years (40 preferred), despite driving rain, baking heat, and the occasional incredibly strong wind. Plus sheep scratching up against them, and god knows what else. Everything is pretty industrial, because it has to be built to last.

So… In terms of how physically big it is… it’s actually pretty big. I half expected to visit the site and go ‘oh it’s kinda small really, a bit trivial now I see it’, but no. It’s going to be pretty awesome. The site looks impressive when you are there, even just as a bunch of cones and posts. When I go back and see all the posts in, and some of the frames, it’s going to be super awesome. With panels and a substation it will be hilarious.

I suspect everyone who does stuff like this is very nervous with their first project, and obviously almost everyone experiences imposter syndrome to some degree or another. Despite that, today’s site visit went really well. I am very happy with the progress, and it was good to meet the site manager in person for the first time and talk to him and the other people there. I had also forgotten that it’s a REALLY nice spot. On a sunny day, the views from the site are really nice. Most solar farms are in pretty flat, boring places, but this one is unusually hilly, and surrounded by other hills. I’m definitely excited to go back and see more!

What I learned from fixing a dumb bug in my graphics code

I’ve recently been on a bit of a mission to improve the speed at which my game Democracy 4 runs on the Intel Iris Xe graphics chip. For some background: Democracy 4 uses my own engine, and it’s a 2D game that uses a lot of text and vector graphics. The Iris Xe graphics chip is common on a lot of low-end laptops, especially laptops not intended for gaming. Nonetheless, it’s a popular chip, and almost all of the complaints I get regarding performance (and TBH there are not many) come from people who are unlucky enough to have this chip. In many cases, recommending a driver update fixes it, but not all.

Recently a fancy high-end laptop I own basically bricked itself during a bungled Windows 11 update. I was furious, but also determined to get something totally different, so I got a cheap laptop made partly from recycled materials. By random luck, it has this exact graphics chipset, which made the task of optimising code for that chip way easier.

If you are a coder working on real-time graphics stuff like games and you have never used a graphics profiler, you need to fix that right away. They are amazing things. You might be familiar with general-case profilers like VTune, but you really cannot beat a profiler made by the hardware vendor for your graphics card or chip. In this case, it’s the Intel Graphics Monitor, which launches separate apps to capture frame traces and then analyze them.

I’m not going to go through all the technical details of using the Intel tools suite, as that’s specific to their hardware, and the exact method of launching these programs and analyzing a frame of a game varies between Intel, AMD and Nvidia. They all provide programs that do basically the same thing, so I’ll talk about the bug I found in general terms, not tied to vendor or API, which I think is much more useful. The web is too full of hyper-specific code examples and too lacking in terms of general advice.

All frame capture programs let you look at a single frame of your game, and list every single draw call made in that frame, showing visually what’s drawn, what parameters were passed, and how long it took. You are probably aware that the complexity of the shader (if any), the number of primitives and the number of pixels rendered all combine in some way to determine how much GPU time is being spent on a specific draw call. A single tiny flat-shaded triangle is quick; a multi-render-target combined shader that fills the screen with 10,000 triangles is slow. We all know this.

The reason I’m writing this article is precisely because this was NOT the case, and discovering the cause therefore took a lot of time. More than 2 weeks in fact. I was following my familiar route of capturing a frame, noting that there were a bunch of draw calls I could collapse together, and doing this as I watched the frame rate climb. This was going fine until I basically hit a wall. I could not reduce the draw calls any more, and performance still sucked. Why?

Obviously my first conclusion was that the Iris Xe graphics chip REALLY sucks, and such is life. But I was doing 35-40 draw calls a frame. That’s nothing. The amount of overdraw was also low. Was it REALLY this bad? Can it be that a modern laptop would struggle with just 40 draw calls a frame? Luckily there was a way to see if this was true. I could simply run other games and see what they did.

One of the games I tested was Shadowhand. I chose this because it uses a different engine (GameMaker). I didn’t even code this game, but the beauty of graphics profilers is this: you do NOT NEED A DEBUG BUILD OR SOURCE CODE. You can use them on any game you like! So I did, and noticed Shadowhand sometimes had 600 draw calls at 60 frames a second. I was struggling with 35 draw calls at 40fps. What the hell?

One of the advanced-mode options in the Intel profiler is to split open every draw call so you see not only the draw calls, but every call to an OpenGL API that happens between them. This was very, very helpful. I’m not an OpenGL coder, I prefer DirectX, and the OpenGL code is legacy stuff coded by someone else. I immediately expect bad code, and do a lot of reading up on OpenGL syntax and so on. Eventually, just staring at this list of API calls makes me realize there is a ton of redundancy. Certain render states get set to a value, then reverted, then set again, then reverted, then a draw call is made. There seem to be a lot of unnecessary calls setting various blend modes. Could this be it?

Initially I thought that some inefficiency was arising from a function that set a source blend state, and then a destination blend state, as two different calls, when there is a perfectly good OpenGL API call that does both at once. I rewrote the code to do this, and was smug about having halved the number of blend-mode state calls. This made things a bit faster, but not enough. Crucially, the totally redundant set and reset calls were still scattered all over the place.
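For illustration, here is roughly what that change looks like. This is a minimal sketch, not my actual engine code: the wrapper function names are hypothetical, but glBlendFunc is the real OpenGL call that sets both factors at once.

```cpp
#include <GL/gl.h>   // or whatever GL loader/header your project already uses

// Hypothetical 'before': source and destination blend factors set by two
// separate wrapper calls, each of which ends up issuing its own state change.
static GLenum g_srcBlend = GL_SRC_ALPHA;
static GLenum g_dstBlend = GL_ONE_MINUS_SRC_ALPHA;

void SetSourceBlend(GLenum src)
{
    g_srcBlend = src;
    glBlendFunc(g_srcBlend, g_dstBlend);   // state change number one
}

void SetDestBlend(GLenum dst)
{
    g_dstBlend = dst;
    glBlendFunc(g_srcBlend, g_dstBlend);   // state change number two, often redundant
}

// 'After': a single wrapper that sets both factors with one API call.
void SetBlendMode(GLenum src, GLenum dst)
{
    glBlendFunc(src, dst);                 // one state change instead of two
}
```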

To understand why this matters, you need to understand that most graphics APIs are buffered command lists. When you make a draw call, it just gets put into a list of stuff to be done, and if you make multiple draws without changing states, sometimes the card gets to make some super-clever optimisations and batch things better for you. This is ‘lazy’ rendering, and very common, and a very good idea. However, when you change certain render states, graphics APIs cannot do this. They effectively have to ‘flush’ the current list of draw calls, and everything has to sit and wait until they are finished before proceeding. This is ‘stalling’ the graphics pipeline, and you don’t want to do it unless you have to. You REALLY don’t want to constantly flip back and forth between render states.
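One common defence against this back-and-forth (a general sketch of the idea, not necessarily how my engine or any particular driver handles it) is to cache the last state you set and simply skip the API call when nothing has actually changed:

```cpp
#include <GL/gl.h>   // or your project's GL loader header

// Remember the blend state we last sent to the API, and only issue a new
// glBlendFunc call when the requested state genuinely differs.
struct BlendState
{
    GLenum src = GL_SRC_ALPHA;
    GLenum dst = GL_ONE_MINUS_SRC_ALPHA;
};

static BlendState g_currentBlend;

void SetBlendCached(GLenum src, GLenum dst)
{
    if (src == g_currentBlend.src && dst == g_currentBlend.dst)
        return;                            // redundant request: no API call, no flush
    glBlendFunc(src, dst);
    g_currentBlend.src = src;
    g_currentBlend.dst = dst;
}
```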

Obviously I was doing exactly that. But how?

The answer is why I wrote this article, because it’s a general-case piece of wisdom every coder should have. It’s not even graphics related. Here is what happened:

I wrote some code ages ago that takes some data about a chunk of text, and processes it all into indexed vertices in a vertex buffer full of vector-rendered crisp text. It makes a note of all this stuff but does not render anything. You can make multiple calls to this AddText() function, without caring if this is the first, last or middle bit of text in this window. The only caveat is to remember to call DrawText() before the window is done, so that text doesn’t ‘spill through’ onto any later windows rendered above this one.

DrawText() goes through the existing list, and renders all that text in one huge efficient draw call. Clean, Fast, Optimised, Excellent code.

That’s how all my games work, even the DirectX ones, as it’s API-agnostic. However, there is a big, big problem in the actual implementation. The problem is this: DrawText() stores the current API render states, then sets them to be the ones needed for text rendering, then goes through the pending list of text and does the draw call, then resets all those render states back to how they were. Do you see the bug? I didn’t. Not for years!
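In rough code terms, the structure was something like this. It is a simplified sketch rather than the real engine code: the names are illustrative, and glPushAttrib/glPopAttrib stand in for however your engine actually saves and restores states.

```cpp
#include <GL/gl.h>
#include <vector>

// Illustrative only: a queued text vertex and the pending batch that AddText() fills in.
struct TextVertex { float x, y, u, v; };
static std::vector<TextVertex> g_pendingText;
static std::vector<unsigned short> g_pendingIndices;

void DrawText()
{
    // 1. Save whatever render states the caller had set
    glPushAttrib(GL_COLOR_BUFFER_BIT | GL_ENABLE_BIT);

    // 2. Switch to the states text rendering needs
    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);

    // 3. Draw everything AddText() queued up, in one call
    //    (vertex/texcoord pointer setup omitted for brevity)
    glDrawElements(GL_TRIANGLES, (GLsizei)g_pendingIndices.size(),
                   GL_UNSIGNED_SHORT, g_pendingIndices.data());
    g_pendingText.clear();
    g_pendingIndices.clear();

    // 4. Put everything back the way it was
    glPopAttrib();
}
```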

The problem didn’t exist until I spotted the odd bug in my code where I had rendered text, but forgotten to call DrawText() at the end of a window, so you saw text spill over into a pop-up dialog box now and then. This was an easy fix though, as I could just go through every window where I render some text and add a DrawText() call to the end of that window’s draw function. I even wrote it as a DRAWTEXT macro to make it a bit easier. I spammed this macro all over my code, and all of my bugs disappeared. Life was good.

Have you spotted it now?

The redundant render state changes eventually clued me in. Stupidly, the code for DrawText() didn’t make the simple, obvious check of whether or not there was even anything in the queue of text at all. If I had spammed this call at the end of a dialog box that had already drawn all its text, or even had none at all, then the function still went through all the motions of drawing some. It stored the current render states, set new ones, then did nothing…because the text queue was empty, then reset everything. And this happened LOTS of times each frame, creating a stupid number of stalls in the rendering pipeline in order to achieve NOTHING. It was fixed with a single line of code. (A simple .empty() check on a vector and some curly brackets… to return without doing anything).
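Using the same hypothetical sketch as above, the fix is just an early-out at the top of the function:

```cpp
void DrawText()
{
    if (g_pendingText.empty())
        return;   // nothing queued: no state saves, no state changes, no stall

    // ...store states, set text states, draw the batch, restore states as before...
}
```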

Three things conspired to make finding this bug hard. First: I previously owned no hardware I could reproduce it on. Second: it was something that didn’t even show up when looking at each draw call; it manifested as making every draw call slower. Third: it was not a bad API call, or use of the wrong function, or a syntax error, but a conceptual code-design fuck-up by me. My design of the text renderer was flawed, in a way that had zero side effects apart from redundant API calls.

What can be learned?

Macros and functions can be evil, because they hide a lot of sins. When we write an entire game as a massive long list of assembly instructions (do not do this), it becomes painfully obvious that we just typed a bazillion lines of code. When we hide code in a function, and then hide even the function call in a macro, we totally forget what’s in there. I managed to hide a lot of sins inside this:

DRAWTEXT

Whereas what it really should have been thought of as was this:

STORERENDERSTATESANDTHENSETTHEMTHENGOTHROUGHALISTTHENRESETEVERYTHINGBACK

This is an incredibly common problem that happens in large code bases, and it is made way worse when you have a lot of developers. Coder A writes a fast, streamlined function that does X. Coder B finds that the function needs to do Y and Z as well, and expands upon it. Coder A knows it’s a fast function, so he spams calls to it whenever he thinks he needs it, because it’s basically ‘free’ from a performance POV. Producer C then asks why the game is slow, and nobody knows.

As programmers, we are aware that some code is slow (saving a game state to disk) and some is fast (adding 2 variables together). What we forget is how fast or slow all those little functions we work on during development have become. I’ve only really worked on 3 massive games (Republic: The Revolution, an unshipped Xbox game, and The Movies), but my memory of large codebases is that they all suffer from this problem. You are busy working on your bit of the code. Someone else coded some stuff you now need to interface with. They tell you that function Y does this, and they get back to their job, and you get back to yours. They have no idea that you are calling function Y in a loop 30,000 times a frame. They KNOW it’s slow; why would anybody do that? But you don’t know. Why would you? It’s someone else’s code.

Using code you are not familiar with is like using machinery you are not familiar with. Most safety engineers would say it’s dangerous to just point somebody at the new amazing LaserLathe3000 and tell them to get on with it, but this is the default way in which programmers communicate.

Have you EVER seen an API spec that lists the average time each function call will take? I haven’t. Not even any supporting documentation that says ‘This is slow btw’. We have got so used to infinite RAM and compute that nobody cares. We really SHOULD care about this stuff. At the moment we use code like people use energy. Your lightbulb uses 5 watts, your oven probably 3,000 watts. Do you think like that? Do you imagine turning on 600 light bulbs when you switch the oven on? (You should!).

Anyway, we need better documentation of what functions actually do, what their side effects are, what CPU time they use up, and when and how to use them. An API spec that just lists variable types and a single line of description is just not good enough. I got tripped up by code I wrote myself. Imagine how many of the API calls we make are doing horrendously inefficient redundant work that we just don’t know about. We really need to get better at this stuff.

Footnote: Amusingly, this change got me to 50 FPS. It really bugged me that it was still not 60 FPS. Hilariously, I realised that just plugging my laptop into a mains charger bumped it to 60. Damn Intel and their stealth GPU-speed-throttling when on battery power. At least tell me when you do that!

Solar Farm Update: New connection plans

I’m going to try to get into the habit of more frequent updates on my ongoing solar-farm project, because stuff is starting to happen now. Here is a brief run-down of what’s happened since the last blog post on April 3rd 2023.

Well, we had a meeting about the farm progress on 12th May, and there was a lot to discuss. We finally had the details from the DNO about the earthing requirements for the farm, so our high-voltage engineer could look at those and then design the earthing layout for us.

There was then some bad news about the inverters. We were going to use Solis inverters, but the ones we wanted were no longer available, so the plan was now to switch to Huawei inverters instead. These are Chinese, and some people have views on using kit made by Huawei, but I don’t. I really don’t think there is a backdoor in them that will let the Chinese communist party turn off my solar farm, but I do think there are a lot of conspiracy theorists who probably do :D. This change to different inverters might seem like no big deal, but it’s actually hellishly complicated…

Changing inverters means you suddenly have a different number of strings, and the way the panels are wired up is suddenly different. You might assume (as I did) that the panels were in nice neat rows, and there was an inverter for each row, or each two rows. No! Apparently that’s not how it’s done, especially on undulating land. It’s actually a lot more complex than that: the positioning of the inverters, and which panels get attached to each inverter, is hellishly complicated, and done by some complex solar-farm management software.

Anyway, this was generally a good news meeting. Mounting frames were ordered, people booked in to do work.

There was then a LOT of emails back and forth about voltages and another meeting on 26th May. This is where stuff got super involved…

I had assumed that you always connected a solar farm at high voltage, and needed a transformer up to 11kV, at which point the DNO takes over. BUT NO. Suddenly the DNO says they have changed their internal regulations regarding what size inverters they let connect directly to them, and the limit had gone from 50kW to 200kW. Our inverters are about 110kW, so they were suddenly chill about us effectively giving them nine cables from nine 110kW inverters at low voltage (about 410V I think).

This is a BIG DEAL, because it vastly changes who pays for what, and what’s included. Suddenly we aren’t doing ANY high-voltage cabling or switchgear, and we won’t need a transformer at all. This saves a LOT of money in terms of connection offers. Apparently I will get a new connection offer tomorrow, and the old one will be cancelled.

In theory this is good, but I wouldn’t be surprised to find that the lower connection offer is offset by all the other materials and labor price increases that have happened since we started. Sure, it’s GOOD news to have a number actually come DOWN instead of up, but I’m not getting too excited by it.

It’s hard to convey just how complex a project like this is. The change to the offer would mean that we can move our (much smaller) substation closer to the one for the DNO. That means the site layout can be slightly smaller, which means less fencing (which would be cheaper!). However, it also totally screws up our earthing design, because we now have different stuff, in a different place, at a different voltage… arggghhh. The one bright spot is that apparently moving our substation will not be a planning issue. Thank Christ.

Anyway, the truly exciting news is that a ground survey was done 2 days ago, and the ‘setting out’ engineer will visit the site this week, I think Wednesday. That’s the very last step before people show up to build an access road and start building stuff with concrete bases, installing ground-mount frames and putting up the security fence. This is all getting very close!

The current plan is for setting out on the 31st, then the fencing, then the access road. 26th June is pencilled in as the day they start building the ground-mount frames. I will nip up there when that happens and take a look at it all, and take a stupid number of photos. It’s looking quite likely that we get energised this year, start generating power, and maybe (gasp) earn revenue this year!