Date | Action | Type | Data
Nov. 27, 2023, 2:12 p.m. | Added | 35 | {"external_links": []}
Nov. 20, 2023, 2:02 p.m. | Added | 35 | {"external_links": []}
Nov. 13, 2023, 1:33 p.m. | Added | 35 | {"external_links": []}
Nov. 6, 2023, 1:30 p.m. | Added | 35 | {"external_links": []}
Aug. 14, 2023, 1:30 p.m. | Added | 35 | {"external_links": []}
Aug. 7, 2023, 1:31 p.m. | Added | 35 | {"external_links": []}
July 31, 2023, 1:33 p.m. | Added | 35 | {"external_links": []}
July 24, 2023, 1:35 p.m. | Added | 35 | {"external_links": []}
July 17, 2023, 1:33 p.m. | Added | 35 | {"external_links": []}
July 10, 2023, 1:25 p.m. | Added | 35 | {"external_links": []}
July 3, 2023, 1:26 p.m. | Added | 35 | {"external_links": []}
June 26, 2023, 1:25 p.m. | Added | 35 | {"external_links": []}
June 19, 2023, 1:26 p.m. | Added | 35 | {"external_links": []}
June 12, 2023, 1:29 p.m. | Added | 35 | {"external_links": []}
June 5, 2023, 1:32 p.m. | Added | 35 | {"external_links": []}
May 29, 2023, 1:27 p.m. | Added | 35 | {"external_links": []}
May 22, 2023, 1:28 p.m. | Added | 35 | {"external_links": []}
May 15, 2023, 1:31 p.m. | Added | 35 | {"external_links": []}
May 8, 2023, 1:36 p.m. | Added | 35 | {"external_links": []}
May 1, 2023, 1:27 p.m. | Added | 35 | {"external_links": []}
April 24, 2023, 1:34 p.m. | Added | 35 | {"external_links": []}
April 17, 2023, 1:29 p.m. | Added | 35 | {"external_links": []}
April 10, 2023, 1:24 p.m. | Added | 35 | {"external_links": []}
April 3, 2023, 1:26 p.m. | Added | 35 | {"external_links": []}
Jan. 28, 2023, 11:08 a.m. | Created | 43 | [{"model": "core.projectfund", "pk": 24745, "fields": {"project": 1932, "organisation": 2, "amount": 0, "start_date": "2020-09-30", "end_date": "2024-09-29", "raw_data": 39014}}]
Jan. 28, 2023, 10:51 a.m. | Added | 35 | {"external_links": []}
April 11, 2022, 3:45 a.m. | Created | 43 | [{"model": "core.projectfund", "pk": 16848, "fields": {"project": 1932, "organisation": 2, "amount": 0, "start_date": "2020-09-30", "end_date": "2024-09-29", "raw_data": 8503}}]
April 11, 2022, 3:45 a.m. | Created | 41 | [{"model": "core.projectorganisation", "pk": 63646, "fields": {"project": 1932, "organisation": 1377, "role": "LEAD_ORG"}}]
April 11, 2022, 3:45 a.m. | Created | 40 | [{"model": "core.projectperson", "pk": 39306, "fields": {"project": 1932, "person": 2815, "role": "STUDENT_PER"}}]
April 11, 2022, 3:45 a.m. | Created | 40 | [{"model": "core.projectperson", "pk": 39305, "fields": {"project": 1932, "person": 959, "role": "SUPER_PER"}}]
April 11, 2022, 1:47 a.m. | Updated | 35 | {"title": ["", "Applying reinforcement learning to grid-connected energy systems"], "description": ["", "\nMy PhD project focuses on applying reinforcement learning (RL) to building energy optimisation. The make-up of electrical grids is changing: there is an increasing number of energy systems that involve renewable energy generation, energy storage, smart controllable devices, electric vehicle charging and other recently improved technologies. As an example of the rapid rate of change, the global capacity of solar photovoltaic installations is estimated to have increased by over 700 percent between 2011 and 2019. The energy systems introduced by these changes raise complex control problems. If controlled well, these systems may effectively replace emission-intensive grid energy with local renewable energy and prevent demand peaks that would otherwise need to be covered by fossil-fuel generators. Controllers that are well adapted to these systems therefore have the potential to help mitigate climate change. Existing control methods, such as model predictive control, often lack the flexibility to fully capture the potential cost and emission savings enabled by these systems.\n\nIn this PhD project I aim to investigate the use of reinforcement learning (RL) in place of such conventional energy system controllers. RL is a general machine-learning-based control method that may provide more flexibility than other existing methods. Within the last ten years, the integration of deep neural networks into RL methods has allowed RL to surpass human-level performance for the first time on several tasks, including Atari games and the board game Go.\nBuilding on this work, and on other work applying RL to energy systems, this project aims to investigate how RL can best be used to improve energy efficiency in buildings.\n\nThe initial focus of the project is on a specific kind of residential energy system that combines solar photovoltaic panels with a home battery. Based on the findings from this specific case, more general solutions in the space will be investigated.\n\n"], "extra_text": ["", "\n\n\n\n"], "status": ["", "Active"]}
April 11, 2022, 1:47 a.m. | Added | 35 | {"external_links": [7191]}
April 11, 2022, 1:47 a.m. | Created | 35 | [{"model": "core.project", "pk": 1932, "fields": {"owner": null, "is_locked": false, "coped_id": "62e8df9e-8956-471c-8ba7-81386cfc5a25", "title": "", "description": "", "extra_text": "", "status": "", "start": null, "end": null, "raw_data": 8488, "created": "2022-04-11T01:32:54.159Z", "modified": "2022-04-11T01:32:54.159Z", "external_links": []}}]