Now we will run an IV analysis on the device data we uploaded in the previous notebook.
As before, make sure you have the following environment variables set or added to a .env
file:
GDSFACTORY_HUB_API_URL="https://{org}.gdsfactoryhub.com"
GDSFACTORY_HUB_QUERY_URL="https://query.{org}.gdsfactoryhub.com"
GDSFACTORY_HUB_KEY="<your-gdsfactoryplus-api-key>"
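If you keep these in a .env file, you can load them and fail fast on anything missing before creating the client. This is a minimal sketch assuming the python-dotenv package; if your shell already exports the variables, you can skip it:

import os

from dotenv import load_dotenv  # assumption: python-dotenv is installed

load_dotenv()  # reads a .env file from the working directory, if present

# Fail early if any required variable is missing.
for var in ("GDSFACTORY_HUB_API_URL", "GDSFACTORY_HUB_QUERY_URL", "GDSFACTORY_HUB_KEY"):
    assert os.environ.get(var), f"{var} is not set"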
import getpass

import gdsfactoryhub as gfh  # import name assumed from the gdsfactoryhub-sdk package; adjust if it differs

project_id = f"resistance-{getpass.getuser()}"
client = gfh.create_client_from_env(project_id=project_id)
api = client.api()
query = client.query()
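As a quick smoke test of the connection, you can count the device data records in the project (the same query pattern is used again further down in this notebook):

# List the device data records uploaded in the previous notebook.
device_data_records = query.device_data().execute().data
print(f"found {len(device_data_records)} device data records")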
We have a bunch of pre-defined analysis functions in the DoData SDK. An analysis function is a stand-alone Python module with a single run function inside (additional helper functions in the file are allowed, but only run will be executed). Let's have a look at the iv_resistance analysis.
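The actual module source is not reproduced here, but to make the structure concrete, here is a minimal sketch of what such a module could look like. The uv header and the run entry point follow the conventions described in this notebook; everything else is illustrative (it fits synthetic data instead of fetching the real trace through the SDK, and the return value is an assumption):

# /// script
# requires-python = ">=3.12"
# dependencies = [
#     "numpy",
# ]
# ///
import numpy as np


def _fit_slope(x: np.ndarray, y: np.ndarray) -> float:
    """Helper functions are allowed; only run() is executed by the server."""
    slope, _intercept = np.polyfit(x, y, 1)
    return float(slope)


def run(device_data_pkey: int, xname: str = "i", yname: str = "v", slopename: str = "slope") -> dict:
    # A real analysis module would fetch the measured trace for
    # `device_data_pkey` via the SDK; synthetic data keeps this sketch
    # self-contained. Plot-related kwargs (xlabel, ylabel) are omitted.
    x = np.linspace(-1e-3, 1e-3, 51)  # current sweep (A)
    y = 100.0 * x                     # voltage of an ideal 100 Ohm resistor (V)
    return {slopename: _fit_slope(x, y)}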
Let's run this one locally to see what it gives:
# `linear_fit` is one of the pre-defined analysis modules shipped with the SDK.
linear_fit.run(
device_data_pkey=device_data_pkey,
xname="i",
yname="v",
slopename="resistance",
xlabel="Current (A)",
ylabel="Voltage (V)",
)
This analysis function fits a straight line to an IV curve to obtain the resistance: since V = I·R for an ohmic device, the slope of the fitted line is R. The function ran fine locally, but it's always good to check whether it also runs on the server:
result = api.validate_function(
function_id="linear_fit",
target_model="device_data",
test_target_model_pk=device_data_pkey,
file=gfh.get_module_path(linear_fit),
test_kwargs={
"xname": "i",
"yname": "v",
"slopename": "resistance",
"xlabel": "Current (A)",
"ylabel": "Voltage (V)",
},
)
result.summary_plot()
Note
If this had failed on the server while succeeding locally, you probably need to update the 'uv comment' at the top of the file with the missing dependencies. Such a uv comment (inline script metadata, as standardized by PEP 723) lets uv run the script with all necessary dependencies installed automatically. It usually looks as follows:
# /// script
# requires-python = ">=3.12"
# dependencies = [
#     "gdsfactoryhub-sdk",
#     "matplotlib",
#     "numpy",
# ]
# ///
Once we've confirmed that the analysis function finishes successfully on the server, we can upload it:
with gfh.suppress_api_error(): # don't error out when function already exists in DoData.
result = api.upload_function(
function_id="linear_fit",
target_model="device_data",
file=gfh.get_module_path(linear_fit),
)
We can now trigger an analysis with the uploaded function:
result = api.run_analysis( # run_analysis waits for the analysis to finish
analysis_id=f"device_iv_resistance_{device_data_pkey}",
function_id="linear_fit",
target_model="device_data",
target_model_pk=device_data_pkey,
kwargs={
"xname": "i",
"yname": "v",
"slopename": "resistance",
"xlabel": "Current (A)",
"ylabel": "Voltage (V)",
},
)
result.summary_plot()
Now let's start the analysis for all devices, without waiting for each one to finish:
from tqdm import tqdm  # progress bar for the loop below

task_ids = []
dd_pks = [d["pk"] for d in query.device_data().execute().data]
for dd_pk in tqdm(dd_pks):
with gfh.suppress_api_error():
task_id = api.start_analysis( # start_analysis triggers the analysis task, but does not wait for it to finish.
analysis_id=f"device_iv_resistance_{dd_pk}",
function_id="linear_fit",
target_model="device_data",
target_model_pk=dd_pk,
kwargs={
"xname": "i",
"yname": "v",
"slopename": "resistance",
"xlabel": "Current (A)",
"ylabel": "Voltage (V)",
},
)
task_ids.append(task_id)
You can wait for the result of a started analysis as follows: