
Supporting Brick v1.1.0 #10

Open
iamliamc opened this issue Sep 24, 2020 · 7 comments

Comments

@iamliamc

Hey Brick-server,

Really interesting project you've got going here. I'd love to learn more, and contribute!

So firstly, I have been able to pull brick-server, adjust my configuration, run docker-compose up, get a JWT token with docker exec -t brickserver /app/tools/get_jwt_token, update pytest.ini with said JWT, and run the tests successfully with pytest -c pytest.ini tests/remote.

The first thing I wanted to do was to make my own small Brick RDF graph, load it into the system, run some entity queries against it, and then write a custom database adapter for it to fetch real BMS data stored in another system I use.

I can get my test working with Brick v1.0.3 (see below), but trying to run the application with v1.1.0 gives me some errors that I haven't been able to address...

I tried setting the configuration.json to this:

    "brick": {
        "dbtype": "virtuoso",
        "host": "http://brickserver-virtuoso:8890/sparql",
        "brick_version": "1.1.0",
        "base_ns": "bldg",
        "base_graph": "brick-base-graph"
    },

But I keep getting this error:

aiosparql.client.SPARQLRequestFailed: 500, message='SPARQL Request Failed', url=URL('http://brickserver-virtuoso:8890/sparql'), explanation='Virtuoso HT404 Error Resource "http://brickschema.org/schema/1.1.0/Brick.ttl" not found

SPARQL query:
define sql:big-data-const 0 PREFIX rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>

prefix : <bldg>
prefix base: <bldg>
prefix brick: <https://brickschema.org/schema/1.1.0/Brick#>
prefix bf: <https://brickschema.org/schema/1.1.0/BrickFrame#>
prefix brick_tag: <https://brickschema.org/schema/1.1.0/BrickTag#>
prefix brick_use: <https://brickschema.org/schema/1.1.0/BrickUse#>
prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#>
prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
prefix owl: <http://www.w3.org/2002/07/owl#>
prefix foaf: <http://xmlns.com/foaf/0.1/>
prefix prov: <http://www.w3.org/ns/prov#>

LOAD <http://brickschema.org/schema/1.1.0/Brick.ttl> into <brick-base-graph>

But when I change my generation file to 1.0.3 it works. As seen here, this is the script I use to generate my Brick building graph .ttl:

from rdflib import RDF, RDFS, OWL, Namespace, Graph, Literal

# of some use: https://github.com/RDFLib/rdflib/tree/master/examples

g = Graph()

BLDG = Namespace("https://example.com/customer_1/building_1#")
g.bind("bldg", BLDG)

BRICK = Namespace("https://brickschema.org/schema/1.0.3/Brick#")
g.bind("brick", BRICK)

# # Should we use BRICK.Tag to get Spectral Timeseries Identifier in the system?
g.add((BRICK.TimeseriesUUID, RDF.type, OWL.Class))
# # We can make TimeseriesUUID a subclass of the more generic "Tag" class.
# # It is easy to change this later.
g.add((BRICK.TimeseriesUUID, RDFS.subClassOf, BRICK.Tag))

# (subject, predicate, object)
g.add((BLDG.BUILDING, RDF.type, BRICK.Building))

# Ability to add information to the nodes via a custom class...
g.add((BLDG.BUILDING, BRICK.TimeseriesUUID, Literal("UUID-1")))
print(f"The value BRICK.TimeseriesUUID on BLDG.BUILDING is {g.value(BLDG.BUILDING,BRICK.TimeseriesUUID)}")

g.add((BLDG.BLOCK_A, RDF.type, BRICK.Space))

g.add((BLDG.DISTRICT_HEATING, RDF.type, BRICK.Boiler))
g.add((BLDG.DISTRICT_HEATING, BRICK.feeds, BLDG.PRIMARY_HOT_WATER))

g.add((BLDG.PRIMARY_HOT_WATER, BRICK.isFedBy, BLDG.DISTRICT_HEATING))
g.add((BLDG.PRIMARY_HOT_WATER, RDF.type, BRICK.Water_Distribution))
g.add((BLDG.PRIMARY_COLD_WATER, RDF.type, BRICK.Water_Distribution))

g.add((BLDG.AHU_A, RDF.type, BRICK.Air_Handler_Unit))

g.add((BLDG["AHU_A.FAN_COIL"], RDF.type, BRICK.Fan_Coil_Unit))

g.add((BLDG["AHU_A.CALCULATED_SETPOINT"], RDF.type, BRICK.Air_Temperature_Setpoint))

g.add((BLDG["AHU_A.CALCULATED_SETPOINT.X1"], RDF.type, BRICK.Outside_Air_Temperature_Low_Reset_Setpoint))

g.add((BLDG["AHU_A.CALCULATED_SETPOINT.X2"], RDF.type, BRICK.Outside_Air_Temperature_High_Reset_Setpoint))
g.add((BLDG["AHU_A.CALCULATED_SETPOINT.Y1"], RDF.type, BRICK.Supply_Air_Temperature_Low_Reset_Setpoint))
g.add((BLDG["AHU_A.CALCULATED_SETPOINT.Y2"], RDF.type, BRICK.Supply_Air_Temperature_High_Reset_Setpoint))


for floor in range(9):
    g.add((BLDG[f"FLOOR_{floor}"], RDF.type, BRICK.Floor))
    g.add((BLDG.BUILDING, BRICK.hasPart, BLDG[f"FLOOR_{floor}"]))
    for sub_valve in range(3):
        g.add((BLDG[f"VAV_A_{floor}_{sub_valve}"], RDF.type, BRICK.Variable_Air_Volume_Box))
        g.add((BLDG[f"HVAC_Zone_A_{floor}_{sub_valve}"], RDF.type, BRICK.HVAC_Zone))
        g.add((BLDG[f"VAV_A_{floor}_{sub_valve}"], BRICK.feeds, BLDG[f"HVAC_Zone_A_{floor}_{sub_valve}"]))
        g.add((BLDG[f"VAV_A_{floor}_{sub_valve}.DPR"], RDF.type, BRICK.Damper))
        g.add((BLDG[f"VAV_A_{floor}_{sub_valve}.DPRPOS"], RDF.type, BRICK.Damper_Position_Setpoint))
        g.add((BLDG[f"VAV_A_{floor}_{sub_valve}.DPR"], BRICK.isControlledBy, BLDG[f"VAV_A_{floor}_{sub_valve}.DPRPOS"]))

g.add((BLDG.AHU_A, BRICK.feeds, BLDG.BLOCK_A))

g.add((BLDG["AHU_A.SUPPLY_TEMPERATURE"], RDF.type, BRICK.Supply_Air_Temperature_Sensor))
g.add((BLDG["AHU_A.CALCULATED_SETPOINT"], BRICK.isMeasuredBy, BLDG["AHU_A.SUPPLY_TEMPERATURE"]))

# use the same dotted identifiers as the setpoint entities declared above
g.add((BLDG["AHU_A.CALCULATED_SETPOINT"], BRICK.hasPoint, BLDG["AHU_A.CALCULATED_SETPOINT.X1"]))
g.add((BLDG["AHU_A.CALCULATED_SETPOINT"], BRICK.hasPoint, BLDG["AHU_A.CALCULATED_SETPOINT.X2"]))
g.add((BLDG["AHU_A.CALCULATED_SETPOINT"], BRICK.hasPoint, BLDG["AHU_A.CALCULATED_SETPOINT.Y1"]))
g.add((BLDG["AHU_A.CALCULATED_SETPOINT"], BRICK.hasPoint, BLDG["AHU_A.CALCULATED_SETPOINT.Y2"]))

# g.parse("../Brick.ttl", format="ttl")

# basic selection
sensors = g.query(
    """SELECT ?sensor WHERE {
    ?sensor rdf:type brick:Supply_Air_Temperature_Sensor
}"""
)

def display_subject_results(subject, results):
    for row in results:
        print(f"Subject {subject} -> Predicate {row.p} -> Object {row.o}")


display_subject_results("bldg:DISTRICT_HEATING", g.query("SELECT ?p ?o {bldg:DISTRICT_HEATING ?p ?o}"))

# building = g.query("""SELECT ?building WHERE {?building rdf:type brick:Building}""")
# boiler = g.query("""SELECT ?boiler WHERE {?boiler rdf:type brick:Boiler}""")

res = g.query(
            """
                SELECT ?s WHERE { 
                    ?s brick:isFedBy bldg:DISTRICT_HEATING . 
                    ?s rdf:type brick:Water_Distribution
                }
            """
        )


for row in res:
    print(row.s)

"""
We can "serialize" this model to a file if we want to load it into another program.
"""
with open("custom_brick_v103_sample_graph.ttl", "wb") as f:
    # the Turtle format strikes a balance between being compact and easy to read
    f.write(g.serialize(format="ttl"))
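As a quick sanity check (a minimal sketch, assuming rdflib is installed and the script above has been run), the serialized file can be parsed back in and the triple count compared:

from rdflib import Graph

check = Graph()
check.parse("custom_brick_v103_sample_graph.ttl", format="turtle")
# the triple count should match the in-memory graph built above
print(len(check))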

So for convenience I wrote another "test" to run a simple SPARQL query against this graph (one which is confirmed in the pure rdflib toy example in the generation code).

Basically my own version of this test.

import requests  # authorize_headers, ENTITY_BASE and QUERY_BASE are assumed to come from the brick-server remote test helpers

def test_load_ttl():
    with open('examples/data/custom_brick_v103_sample_graph.ttl', 'rb') as fp:
        headers = authorize_headers({
            'Content-Type': 'text/turtle',
        })
        resp = requests.post(ENTITY_BASE + '/upload', headers=headers, data=fp, allow_redirects=False)
        assert resp.status_code == 200

        qstr = """
            select ?s where {
              ?s a brick:Supply_Air_Temperature_Sensor
            }
        """

    headers = authorize_headers({
        'Content-Type': 'sparql-query'
    })
    resp2 = requests.post(QUERY_BASE + '/sparql', data=qstr, headers=headers)

    import pdb; pdb.set_trace()

(Pdb) resp2.json()
{'head': {'link': [], 'vars': ['s']}, 'results': {'distinct': False, 'ordered': True, 'bindings': [{'s': {'type': 'uri', 'value': 'https://example.com/customer_1/building_1#AHU_A.SUPPLY_TEMPERATURE'}}]}}
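For reference, the bindings in that response can be flattened into plain values in a couple of lines (a minimal sketch, assuming the SPARQL JSON results format shown above):

rows = resp2.json()["results"]["bindings"]
sensors = [binding["s"]["value"] for binding in rows]
# -> ['https://example.com/customer_1/building_1#AHU_A.SUPPLY_TEMPERATURE']
print(sensors)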

@jli113

jli113 commented Apr 24, 2021

Maybe it is time to support Brick 1.2.0.

@skewty

skewty commented Nov 3, 2021

Updated Docker images that link to current versions would likely solve this.
Relates to issue #23.

@Reapor-Yurnero
Collaborator

Going to update the entire codebase significantly in a large commit shortly.

@skewty

skewty commented Nov 8, 2021

Going to update the entire codebase significantly in a large commit shortly.

Anything I can help with to speed this along?

@mirkoperillo

I can get my test working with Brick v1.0.3 (see below), but trying to run the application with v1.1.0 gives me some errors that I haven't been able to address...

I tried setting the configuration.json to this:

"brick": {
    "dbtype": "virtuoso",
    "host": "http://brickserver-virtuoso:8890/sparql",
    "brick_version": "1.1.0",
    "base_ns": "bldg",
    "base_graph": "brick-base-graph"
},

I got the same error; the problem is the syntax of the brick_version configuration. The correct value is 1.1 instead of 1.1.0.
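In other words, the configuration.json block becomes (assuming the rest of your settings stay the same):

    "brick": {
        "dbtype": "virtuoso",
        "host": "http://brickserver-virtuoso:8890/sparql",
        "brick_version": "1.1",
        "base_ns": "bldg",
        "base_graph": "brick-base-graph"
    },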

@skewty

skewty commented Nov 24, 2021

@Reapor-Yurnero any progress on this? Your comment is now 3 weeks old.

@Reapor-Yurnero
Collaborator

Reapor-Yurnero commented Nov 24, 2021 via email
