Game Design, Programming and running a one-man games business…

Are all modern entertainment/global careers impossible now?

I’ve been thinking about this for a week or so. I had a bit of an epiphany about how hard it is to make a living from video games, and also as someone who was (briefly) a musician and who is well acquainted with a novelist. If you have ever tried to make a career out of making your own video games, music, writing or art, then I suspect you are aware that it is hyper-competitive and very hard to make a living. I suspect it’s going to get way, way worse, and it’s got absolutely nothing to do with AI. Just a matter of population, scale, choice overload and numbers.

First I have to burst a bubble. You are not special. You might *think* you are, and everyone ironically thinks they are both special and have an above-average IQ. To quote the crowd chanting in unison in a classic Monty Python film: “We are all individuals”. In reality you, or more specifically your tastes, are not individual. At least not to the point where it really matters, given the problem of choice overload.

When I was a kid, there were 3 TV channels. Just 3. I remember watching the first show ever broadcast on Channel 4 (it was Countdown). These days we have blasted way, way past ‘too many channels to count’, and we now have streaming and YouTube. The amount of content available for me to watch right now is staggering. The problem thus becomes not one of availability but discoverability. There MIGHT be a YouTube channel you would like 5% more than the one you are about to watch, but will you spend 8 hours scrolling to find it? As someone who sells games, I know you will not. In old language, being ‘above the fold’ was how you got noticed (the phrase comes from which news stories you could read on a newspaper folded to fit in a display stand). These days being on the front page of Steam is a huge deal. Being on page 2 is way worse. On page 199 you might as well not exist.

There is a limit then, to how much effort we will go to in order to find something that suits our tastes. Given 20 choices, we probably pick the most suitable. We do not go looking for 100 or 1,000 choices. This is just human nature and probably a survival instinct. Maybe buffalo #199 has more meat on it, but if we don’t decide to hunt one of the first 20 we find, we will lose the light and go hungry?

So given that we do not have hyper-individual tastes, and that choice-overload funnels us into one of the top 20 choices anyway, what are the implications? Well the implications are awesome for a tiny tiny tiny tiny proportion of content creators, and catastrophically terrible for everybody else trying to make a living. And frankly, I think there is no solution. Let me explain with some illustrative numbers.

The year is 1546 and lute-playing is the new hotness. There is no ability to record music, and no powered amplification. The lute can be heard in just one room, so maybe 50 people can attend a performance. A lute is an expensive instrument few could afford anyway. A roaming lute pop-star has to travel by horse or donkey and cannot cover that wide an area. Perhaps you are the best lute player in Somerset, widely renowned among the locals for your l33t skillz. You earn a reasonable living. There are rumors of even better lute players in Wales, Scotland, the Midlands, Kent and Sussex, but that’s many days’ ride away, so they will never come here. You have a decent middle-class income, and are ‘pretty good’ at the lute. Life is good.

The year is now 2025 and Taylor Swift is a huge pop star. Recorded music is available and ubiquitous, deliverable to almost everyone on the planet. That population has grown by 1,632%, but thanks to the power of amplification and global media delivery, Taylor Swift can perform all over the planet, often to crowds of 90,000 or more in one night. Even if there were no recordings, she can entertain >1,500 times as many people as the lute player. The potential ceiling for revenue from entertainment stardom is staggering compared to 1546.

But hold on, with a huge global population, surely the industry can now support way, way more people than it did back in the lute days? NO. In fact probably FEWER people as a percentage of the population. In 1546 our village needed our local lute player, because local was all there was. But in 2025 Sting can ‘entertain’ people with his lute playing on a global scale. People are still being entertained, but they have not taken advantage of the global growth in population to have a larger number of ‘entertainers’. They do not need them. There is already enough choice. Way, way too much choice. We used to just have Joe ‘the lute guy’. Now we have more than 20 lute players. Enough already.

So what does this mean as we extrapolate forwards to an even more connected world and an even more global culture (witness the rise of K-pop and Korean TV dramas like Squid Game, both relatively new, and the global rise of anime, again very new in global terms)? It means that the top 20 choices of anything can (and will) dominate the entire planet. That’s depressing enough as it is, but it gets worse. Because the population goes up, but the number of megastars we support doesn’t seem to change, the standards go up, and up, and up, until frankly you have to be a genetic abnormality, have serious obsessive mental health issues, or have a staggeringly lucky combination of the exact zeitgeist skills and looks and charisma, to even have a chance of being in that top 20.

Top athletes do not have much of a social life. How could they? Competition is extreme. Top models have basically never eaten a cake. Top musicians are absolutely oblivious to anything that is not a metronome or a practice schedule. Top artists have, for a very long time, been people who have a smorgasbord of issues you would not choose, but have the fortunate side effect of helping them create great art. Van Gogh was not a chill dude with work-life balance. This applies to entrepreneurs too. Elon Musk is clearly staggeringly brilliant and hard working, but also unimaginably stressed, distressed and in need of serious psychological help. Do we really think that is uncommon? Social media heralded a mass wave of ‘cancelling’ as people suddenly had access to the personal opinions and thoughts of celebrities who have a tenuous grip on reality, and the world outside their profession.

So my unfortunate and depressing conclusion is this: global population growth and the persistence of choice-overload are combining to ensure that the standard of work required to be successful in entertainment is so high that only people who dedicate every waking moment to it AND who have some sort of natural/genetic ability or mental health issue that helps them work can possibly, ever hope to succeed. And obviously as the standard at the top rockets up, the standards at all levels rise alongside it. Can you have work-life balance and a career in writing/art/music/indie games? Of course not.

My first released indie game, in 1997, was ‘Asteroid Miner’. It was on the front page of download.com, the biggest download site on the entire internet, for a week or so, because ‘Look! Someone made an asteroids game in color’. It’s 2025 now, and getting to the front page of Steam (just one of many games stores, let alone stores in general) is staggeringly, impossibly hard. And it will only get harder from now on. This is what it took in 1997:

Now the cheerful bit!

Do you want it though? If you have to become such a tortured soul, so in pain, so obsessed, so focused in order to ‘make it’, then is the price worth it? How many rock stars have drunk or drugged themselves to death? Do you want to be Kurt Cobain? Do you want to turn out like Michael Jackson or Elvis? Do you want to be Elon Musk? That level of fame and recognition is impossibly hard to deal with *even for people with perfect mental health*. Be aware that when you look at people who are hugely successful in the entertainment field you work in, these people are often ill, unhappy and stressed. That might be the price you have to pay. It probably is not a price worth paying. You can get a lot of happiness and fulfilment by having a normal career and making games/writing books/making music as a hobby. It’s probably a much more balanced and stable life.

And yes, I know that I have sold a ton of games and obviously done well, and I don’t want to come across as telling people to give up on their dreams. I’m not the world’s perfect exhibit of balanced mental health either. I’m an anxious, stressed, hyperactive workaholic who finds it almost impossible to relax. Not many people would choose those characteristics, even if it meant selling more games.

Coding a load-balanced multithreaded particle system

Background: I am coding a 2D space-based autobattler with ridiculous levels of effects called ‘Ridiculous Space Battles’. I code my own engine, for fun, in DirectX9 using C++ and Visual Studio.

Because I love coding, yesterday I found myself wondering how well the multithreading in my game was holding up. So I fired up the trusty Concurrency Visualizer in Visual Studio. I love this tool and have used it a lot on previous games. One of the biggest demands on the CPU for my game is particle processing. There are a LOT of explosions, and other effects, and a crazy number of particle emitters (thousands of emitters, hundreds of thousands of particles). Obviously this would normally be a code bottleneck, so I have a task-based generic multithreading engine that handles it. The main thread builds up a bunch of tasks, the threads grab the next task on the list, and the main thread will wait until the task list is empty. If there are still tasks in the queue, the main thread will do one as well. So how did things look?

Disastrous!

So what is going wrong here? I have a generic ‘marker’ set up to show the span of the main thread’s drawing of the game (GUI_Game::Draw()). Inside that, a bunch of unlabeled stuff happens, but I added spans to show when the multithreaded task called UPDATE_PARTICLE_LIST is called. There are two massive things going wrong here. Firstly, only two other threads are joining in to process the particles, and secondly, one of those updates seems to take 20x as long as the other. Worse still, it’s the one the main thread chose… so it’s a huge bottleneck. Technically this is still a speedup, but it’s marginal. How have I fucked up?

Some background on my rendering algorithm is needed: the game has 2 blend modes for particles. A ‘Burn’ mode, which saturates color and is used for fire, lasers, sparks etc, and a ‘Normal’ mode for smoke, debris etc. The particle effects are batched as much as possible, but I cannot mix those blend modes in a draw call. Also, some particles are below the action (the ships) and some above, to give a semi-3D look and make it look like the explosions engulf the ships. So particle emitters fall into one of 4 lists: NormalBelow, BurnBelow, NormalAbove, BurnAbove. This is all fine and works OK. In action, the game looks like this:

Because you can freeze frame and scroll around, everything has to be properly simulated, including particle effects currently offscreen. Anyway it all works, and I have FOUR particle emitter lists. So naturally, to load-balance everything, I gave one list to each thread and considered the job done.

BUT NO.

It turns out that those 4 groups are not equal in size. They are laughably unequal. The ‘BurnAbove’ list contains all of the fire and spark emitters on all of the pieces of all of the hulks from destroyed ships, plus sparks from fiery plasma torpedoes, expended drone explosions, and missed or intercepted missile explosions. That’s MOST of the particles. When I checked, about 95% of particles are ‘BurnAbove’. I had 4 multithreaded lists, but they were not remotely load-balanced.

Once I realized that, the solution was theoretically easy, but fiddly to implement and debug. I decided I would add a new load-balanced list system on top. I created 8 different lists, and when an emitter was created, it was added to the ‘next’ list (the next value cycling round through all 8) and told what list it was in. When it was deleted, it was removed from the appropriate list. Note that ‘deleted’ is a vague term. I delete no emitters; they get put into a reusable pool of dead emitters, which complicates matters a lot…

So in theory I now have a nice load-balanced set of 8 lists that contains every particle emitter that is currently live. The original 4 lists are still valid and used for rendering and blend-mode data, but this ‘parallel’ list system exists alongside them, purely to handle load-balancing. What this means is that a load-balanced list may contain particles from all 4 render groups, but this does not matter, as I am running update code on them, not rendering!

It didn’t work.

Crashes and bugs and corrupt data ahoy. I worked on it for ages, then watched a movie to try to forget it. Then this morning, after some digging, it was all fixed. What was actually going wrong was related to smoke plumes. Because there are a lot of smoke plumes, and they always reuse the same particle config data, they exist in a separate system, updated separately. I had forgotten this! What was happening was that my new load-balanced lists stupidly included these emitters when they should have been kept out. The emitters would expire and be deleted in the multithreaded code, then later accessed by the plume code. CRASH.

I worked it out this morning before breakfast! I was very pleased. You might be thinking: what about the ‘only 2 threads’ thing? LOL, I had hard-coded the game to use a maximum of 4 threads, probably as a debug test. Idiot. I just changed it to 10 and everything worked:

This is more like it. I wasted ages trying to get the dumb Concurrency Visualizer to show my custom thread names instead of ‘Worker Thread’, but apparently that’s the category. Not much help. FFS show us the thread names! (They work in the debugger.) But anyway, that image above is a snapshot inside a busy battle for GUI_Game::Draw(), showing how the UpdateParticles tasks get spread over 8 threads. I’m still not sure why that sixth thread misses out on a task, which gets nabbed by the main thread…

Anyway, the point is it works now, and in theory updating particles is up to 8x faster than with single threading. I do need to apply the multithreading to a lot more of the game code to get the best possible results. I am testing this on a fairly beefy CPU and GPU (Ryzen 9 5900X 12-core @ 3.7GHz and RTX 3080) at only 1920×1080 res. I want the game to look awesome at 5120-wide res or on a five-year-old cheap laptop, so plenty more to do.

If for some reason this tips you over the edge to wishlist the game, here is the link :D.

Solar farm: six months of proper generation plus REGOs.

If you follow this blog a lot, you will know that we actually started generating power on our solar farm in October 2024, but there was some downtime in November, and then more downtime in December due to an extreme storm damaging the site (we lost 10 panels, since replaced), so we didn’t have a straight six months of data until yesterday.

Also, because of the way solar is generated over the year in the UK, you want either Jan-June or July-Dec in order to extrapolate. The output of solar panels is basically a bell curve, peaking mid-year, so once you have six months of data, you can make an educated guess (but only a guess) about the whole year. Just how shallow or steep that curve is will depend on a few factors, but in general I think that improvements in panels have flattened that curve a bit. Lots of technical changes to panels mean that they are more tolerant of ‘partial shade’ than they used to be. Handy in a cloudy country!

So how much power have we generated in six months? Well, it’s… 766,739 kWh (766 MWh). So we can assume that if this is a typical year, that would mean 1,533 MWh in a year, or 1.5 gigawatt-hours. Enough to run the average TV for 1,700 years. Or enough to fill a large electric car from empty 19,000 times. In some ways this seems a lot, but how does it compare to what we expected?

Well, it depends… If I dig up the oldest email about the site’s initial plan, the assumed generation output was just 957 MWh, so this sounds amazing. That was later revised to 1,245 MWh, and then to 1,347 MWh. The final setup for the export meter seems to say that the expected output is 1,339 MWh. So the actual expectations are all over the place, but all below the current value. I THINK that we had a very, very good April, and that is skewing the results. If we hit 1,400 MWh in the year, that will be very pleasing. It is too early to be definitive about whether or not this means the farm makes a profit. I would be nervous of judging that without a full year, especially as a maintenance contract is still not signed.

In other news…

You may have read posts by me banging on about REGOs. A REGO is a Renewable Energy Guarantee of Origin certificate. When you generate a single MWh of power from a renewable source, you can tell the government regulator (Ofgem), and they give you a certificate. There is no subsidy, but you can sell them. Who buys them? Companies and energy retailers who want renewable energy. To be able to sell your power to customers as 100% renewable, you need to buy REGOs to cover all your energy. There is an open and competitive market for REGOs, and currently they are worth £10-£15 per MWh. That’s equivalent to 1 to 1.5 pence per unit of electricity, so not a lot from a consumer POV. Like most companies, we have our REGO sales ‘bundled’ in with the PPA (power purchase agreement) we have with the company who buys our power, in this case OVO Energy.

That’s the theory. In practice, this process is HELLISH. It genuinely feels like Ofgem have been told to make the process of getting accredited as impossible as they can. The amount of bureaucracy, inefficiency, radio silence and pickiness over the grammar and punctuation in every piece of text in every document required in order to qualify is beyond insane. I applied as SOON as was possible (your site has to already be generating, which is stupid as hell), and it still took SIX MONTHS to basically have a form processed and accepted. We even had to argue basic maths with them, such as arguing that 400+500 = 900, and not, as they claimed… 810.

Anyway, that insane process (which reminded me of the DNO legal process, which reminded me of the planning process… etc) is finally complete, although not without a formal complaint to Ofgem about it. I will not bore you with the details, but suffice it to say that obviously they are still arguing about it, and saying it’s not right, even though they have approved my site, approved my output data, and credited me with the certificates, which I have already sold. It’s insane beyond words.

And actually the entire process is a colossal waste of time, because there is already a network of organizations whose entire existence is simply reading meters, verifying that they are accurate, and reporting that data online. I am charged about £250 a year for this ‘service’ (the real cost is likely under £1; it’s just a meter reading). Ofgem could trivially connect to the databases of those independent companies, read everyone’s data automatically, and credit the REGOs without any human interaction whatsoever. And as for verifying that the data really is renewable, the people who 100% independently KNOW what equipment is connected are the DNO (distribution network operator). There are a handful of meter readers, and a handful of DNOs, and this would be easy… but NO! Let’s set up a torturous six-month-minimum process that involves websites that crash, tons of paperwork, complicated rules and processes, and a staggering waste of everybody’s time. Because that’s the UK energy industry. Are you surprised your bills are high?

So yeah… it’s been interesting.

I am HOPING that within a month, I will have stopped getting irritating emails from Ofgem, I will not get any more letters about lawyers and leases from one set of lawyers to another (I would love to jettison the entire profession into the sun at this point), and I will have a maintenance agreement signed… and then finally I will be able to basically forget about the farm, apart from enjoying checking the stats!

I do still LOVE the fact that I built it, and own it though. It’s awesome :D.

Optimizing load times

I recently watched a 2-hour documentary on the ZX Spectrum, which means little to people from the USA, but it was a really early computer here in the UK. I am so old I actually had the computer BEFORE that, the ZX81, from just a year earlier. The ZX81 was laughable by modern standards, and I expect the keyboard I am using has more processing power. It had an amazing 1kb of RAM (yes, kb, not MB), no storage, no color, no sound, and no monitor. You needed to tune your TV into it and use that as a black-and-white monitor. It’s this (terrible) computer I learned BASIC programming on.

Anyway, one of the features of ZX81/Spectrum days was loading a game from an audio cassette, instead of the alternative, which was copying the source code (line by line) from a gaming magazine and entering the ENTIRE SOURCE CODE of the game if you wanted to play it. Don’t forget: no storage, so if your parents then wanted to watch TV and made you turn it off, you had to type the source code in again tomorrow. I can now type very fast… but the documentary also reminded me of another horror of back then, which was the painfully slow process of loading a game.

These days games load… a bit quicker, but frankly not THAT much quicker, especially given the incredible speed of modern hard drives, and massively so when talking about SSDs. Everything is so fast now, from SSD to VRAM bandwidth to the CPU. Surely games should be able to load almost instantly… and yet they do not. So today I thought I’d stare at some profiling views of loading a large battle in Ridiculous Space Battles to see if I am doing anything dumb…

This is a screengrab from the AMD uProf profiler. My desktop PC has an AMD chip. I’ve started the game, gone to the ‘select mission’ screen, picked one, loaded the deployment screen, then clicked fight, let the game load, and then quit. These are the functions that seem to be taking up most of the time. Rather depressing to see my text engine at the top there… but it’s a red herring. This is code used to DISPLAY text, nothing to do with loading the actual game. So a better way to look at it is a flame graph:

I love flame graphs. They are so good at presenting visual information about where all the time is going, and also seeing the call-stack depth at various points. This shows everything I did inside WinMain() which is the whole app, but I can focus in on the bit I care about right now which is actual mission loading…

And now it’s at least relevant. It looks like there are basically 3 big things that happen during the ‘loading battle’ part of the game: ‘Loading the ships’, ‘Loading the background’ and ‘Preloading assets’. The GUI_LoadingBar code is given a big list of textures I know I’ll need in this battle, and it then loads them all in, periodically stopping to update a loading progress bar. Is there anything I can do here?

Well, ultimately, although it takes a bit of a call stack to get there, it does look like almost all of the delay here is inside some DirectX9 functions that load in data. I am very aware that DirectX9 had some super-slow functions in its ‘D3DX’ API, which I mostly replaced, but ultimately I am still using some of that code, specifically D3DXCreateTextureFromFileInMemoryEx…

Now I have already tried my best to make stuff fast: I make sure to first find the texture file (normally a DDS file, a format optimized for DirectX to use) on disk, and load the whole file into a single buffer in RAM before I even tell DirectX to do anything. Not only that, but I have my own ‘pak’ file format, which crunches all of the data together and loads it in one go, which is presumably faster due to fewer Windows file-system and antivirus slowdowns. However, I’m currently not using that system… so I’ll swap to it (it’s a 1.8GB pak file with all the graphics in) and see what difference it makes…

Wowzers. It makes almost no difference. I won’t even bore you with the graph.

And at this point I start to question how accurate these timings are, so I stick some actual timers in the code. In a test run, the complete run of GUI_Game::Activate() takes 3,831ms and the background initialise is just 0.0099ms. This is nonsense! I switched from instruction-based to time-based sampling in uProf. That doesn’t give me a flame graph, but it does also flag up that the D3DX PNG-reading code is taking a while. The only PNG of significance is the background graphic, which my timers suggest is insignificant, but I think this is because it was loaded in the previous screen. I deliberately free textures between screens, but it’s likely still in RAM… I’ll add timers to the code that loads that file.

Whoah, that was cool. I can now put that into Excel and pick the slowest loaders…

Loaded [data/gfx/\backgrounds\composite3.png] in 73.0598
Loaded [data/gfx/\scanlines.bmp] in 20.0463
Loaded [data/gfx/\planets\planet6s.dds] in 11.8662
Loaded [data/gfx/\ships\expanse\expanse_stormblade_frigate_damaged.dds] in 10.7132
Loaded [data/gfx/\ships\ascendency\g6battleship.dds] in 9.3622
Loaded [data/gfx/\ships\ascendency\g5frigate.dds] in 6.9765

OMGZ. So yup, that PNG file is super slow, and my BMP is super slow too. The obvious attempted fix is to convert that PNG to DDS and see if it then loads faster. It’s likely larger on disk, but requires virtually no CPU to process compared to PNG, so here goes… That swaps a 2MB PNG for a 16MB (!!!!) DDS file, but is it faster?

NO

It’s 208ms compared with 73ms earlier. But frankly this is not an accurate test, as some of this stuff may be cached. Also, when I compare PNGs of the same size, I’m noticing vast differences in how long they take to load:

Loaded [data/gfx/\backgrounds\composite11.png] in 113.9637
Loaded [data/gfx/\backgrounds\composite3.dds] in 208.7471
Loaded [data/gfx/\backgrounds\composite5.png] in 239.3122

So best to do a second run to check…

Loaded [data/gfx/\backgrounds\composite11.png] in 112.8554
Loaded [data/gfx/\backgrounds\composite3.dds] in 84.9467
Loaded [data/gfx/\backgrounds\composite5.png] in 108.4374

WAY too much variation here to be sure of what’s going on. To try to ensure my RAM is not flooded with data I’d otherwise be loading, I’ll load Battlefield 2042 to use up some RAM, then try again… Interestingly, it only takes up 6GB. Trying again anyway…

Loaded [data/gfx/\backgrounds\composite11.png] in 114.0210
Loaded [data/gfx/\backgrounds\composite3.dds] in 85.6767
Loaded [data/gfx/\backgrounds\composite5.png] in 105.8643

Well that IS actually getting a bit more consistent. I’ll do a hard reboot…

Loaded [data/gfx/\backgrounds\composite11.png] in 104.3017
Loaded [data/gfx/\backgrounds\composite3.dds] in 207.8332
Loaded [data/gfx/\backgrounds\composite5.png] in 141.2645

OK, so NO: a hard reboot is the best test, and swapping to DDS files for the huge background graphics is a FAIL. These are 2048 x 2048 images. At least I know that. The total GUI_Game::Activate() is 7,847ms. That PNG is only about 1-2% of this, and it makes me wonder if converting all the DDS files to PNG would in fact be the best solution to speed up load times? The only other option would be to speed up DDS processing somehow. Having done some reading, it IS possible to use multithreading here, but it looks like the actual file-access part of my code is not remotely the bottleneck, although I’ll split out my code from the DirectX code to check (and swap back to a PNG…)

Creating texture [data/gfx/\backgrounds\composite11.png]
PreLoad Code took 1.0205
D3DXCreateTextureFromFileInMemoryEx took 111.4467
PostLoad Code took 0.0001
Creating texture [data/gfx/\backgrounds\composite3.png]
PreLoad Code took 28.4150
D3DXCreateTextureFromFileInMemoryEx took 71.1481
PostLoad Code took 0.0001
Creating texture [data/gfx/\backgrounds\composite5.png]
PreLoad Code took 0.9654
D3DXCreateTextureFromFileInMemoryEx took 105.2158
PostLoad Code took 0.0001

Yeah… so it’s all the DirectX code that is the slowdown here. Grok suggests writing my own D3DXCreateTextureFromFileInMemoryEx function, which sounds possible but annoying.

OK… mad though it sounds, I’ve done that. Let’s try again!

Creating texture [data/gfx/\backgrounds\composite11.png]
PreLoad Code took 0.8327
D3DXCreateTextureFromFileInMemoryEx took 103.4365
PostLoad Code took 0.0001
Creating texture [data/gfx/\backgrounds\composite3.png]
PreLoad Code took 0.6053
D3DXCreateTextureFromFileInMemoryEx took 73.9393
PostLoad Code took 0.0002
Creating texture [data/gfx/\backgrounds\composite5.png]
PreLoad Code took 0.9069
D3DXCreateTextureFromFileInMemoryEx took 105.0180
PostLoad Code took 0.0001

Am I just wasting my life? At least I now have the source code to the DDS loader, because it is MY code, bwahahaha. So I can try to get line-level profiling of this stuff now… I’ll try the Visual Studio CPU profiler:

Thanks Microsoft. But there may be more…

The Visual Studio flame graph is saying that the raw reading of the file from disk IS actually a major component of all this, and so is a memcpy I do somewhere… Actually, it’s inside the fast DDS loader, so the flame graph is confusing. The DDS loader loops, doing a memcpy call for each line of data. This is very bad. With a big file, there will be 2,048 calls to memcpy just to read it in. Surely we can improve on that? And yet it’s clear that’s what D3DXCreateTextureFromFileInMemoryEx is doing, as seen earlier. Hmmmm. And now people have come to visit and I have to stop work at this vital cliffhanger…

Visiting the solar farm, 8 months after energization

Because we happened to be (vaguely) in the same part of the country, we decided to pay a quick visit to the solar farm. It’s been energized for about 8 months now, although there have been 2 periods of downtime for work since then, so we still do not yet have a nice clean 6 months of data to extrapolate from. Also, I had my drone with me to take ‘finished farm’ pictures :D.

The situation with the farm is that it is 99% finished. There is some tree planting to do (one of the planning constraints), which will have to wait until later in the year, and it also has a problem regarding shutdown. When the site loses power (due to a grid outage), it does NOT come right back online automatically, which is frustrating. It should, and it’s back to negotiations between the construction company and the DNO as to why this doesn’t work yet, and fixing it.

From my point of view, there are also two other things that are still *not done* yet: getting a maintenance contract in place (we are still waiting for quotes from fire-suppression-system inspectors), and getting Ofgem to finally accept that this is indeed a solar farm. That last point is especially irritating, but 8 months after switching on, I finally think we are close to the endgame on that one. The bureaucracy is insane. Why they need to know how many panels are on each string of each inverter is beyond me. The DNO didn’t even care about this, and we connect this kit to THEIR network… As a reminder, this is so we get accredited to produce REGOs, which are certificates to prove a MWh of power was renewable. You can sell those certificates for about £10 each to companies who want to claim their power is 100% renewable.

Anyway…

It’s always pretty cool to see the site, and remember that I actually own it! I love my 10 home solar panels, so going to see the other 3,024 is pretty cool. I was surprised just how NOISY inverters are in summer. I assume this is active cooling, as we were there in the early afternoon in June. If you think your home inverter never makes a noise, that’s likely because it’s a 4kW one, and 100kW ones have way more juice flowing through them. I think I could hear the inverters from about 15 feet away.

Broadly, things were OK. I was VERY happy to see how clean the panels are, 8 months into energization and probably a year after mounting, so this bodes well for minimal cleaning costs. How grubby panels get really depends on circumstances. This is a livestock field, so crop dust is not constantly blowing near them, which probably helps. I did encounter a bunch of things that I had to complain to the construction company about. I guess it’s just like having builders come to work on your house, but 100x bigger in scale. I really hate that side of the project, but it comes with the territory. It was also good to meet up with the landowner, who is a great guy, very understanding, and a great ‘man on the ground’ who can tell me about any problems directly, without it being filtered through a third party.

One of the main reasons I wanted to take another look was to try to get better drone pictures, as last time the site was not 100% finished and my drone had software issues (DJI apps suck!). This time it worked, and I took some, as you see, but it was pretty windy. Being on a hilltop does not help, and I braved the ‘LAND DRONE IMMEDIATELY’ warnings as long as I could, but they are obviously not pro-level snaps :D. I also found one broken panel from when the site suffered storm damage, which shouldn’t really be left there. It was interesting to see a folded and broken solar panel though. You don’t see many of those.

Overall I’m happy; the site is generating nicely in summer. The end of this month is when I can do a proper financial analysis, as the output mirrors around midsummer, so 6 months of data gives me a great yearly prediction. I really want it to break even!