The dataset comprises two modalities: medical imaging data (6670 dimensions) and intestinal data (377 dimensions), with 39 samples in total. Positive samples are labeled as 1, and negative samples are labeled as -1.
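A rough sketch of the data layout described above; the array names, placeholder values, and the concatenation step are assumptions for illustration, not part of an official data loader:

```python
import numpy as np

n_samples = 39
imaging = np.zeros((n_samples, 6670))    # medical imaging features, 6670 dims per sample
intestinal = np.zeros((n_samples, 377))  # intestinal features, 377 dims per sample
labels = np.ones(n_samples)              # each label is 1 (positive) or -1 (negative)

# One simple way to use both modalities is feature concatenation:
features = np.concatenate([imaging, intestinal], axis=1)
print(features.shape)                    # (39, 7047)
```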
The dataset covers two distinct types of Chinese information extraction tasks, relation extraction and event extraction, spanning both sentence-level and discourse-level natural language texts.
Similar to the preliminary round, this dataset contains two modalities: medical imaging data (6670 dimensions) and intestinal data (377 dimensions), with a total of 39 samples. Positive samples are labeled as 1, and negative samples are labeled as -1.
This dataset includes two types of Chinese information extraction tasks: relation extraction and event extraction, covering both sentence-level and discourse-level natural language texts.
This dataset, used with TPU-MLIR, is derived from version 5.0 of the CASIA Face Image Database (CASIA-FaceV5) and contains 2500 color face images from 500 subjects. All face images were captured in a single session with a Logitech USB camera. The CASIA-FaceV5 volunteers include graduate students, workers, service staff, etc. All face images are 16-bit color BMP files with a resolution of 640 × 480. Typical intra-class variations include lighting, pose, expression, eyeglasses, imaging distance, etc.
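A minimal sketch of reading one such face image for preprocessing, assuming OpenCV is available in the participant's environment; the file path and the target input size are hypothetical, not mandated by the competition:

```python
import cv2

# Hypothetical path to one CASIA-FaceV5 image; images are 640 x 480 BMP files.
img = cv2.imread("casia_facev5/000/000_0.bmp")  # decoded to an 8-bit BGR array
if img is None:
    raise FileNotFoundError("image not found")
print(img.shape)                                # expected (480, 640, 3)

# Resize to a model input size; 112 x 112 is a common face-recognition choice.
face = cv2.resize(img, (112, 112))
```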
Learning Materials for TPU-MLIR: https://tpumlir.org/index.html
Open Source Repository for TPU-MLIR: https://github.com/sophgo/tpu-mlir
TPU-MLIR Learning Videos: https://space.bilibili.com/1829795304/channel/collectiondetail?sid=734875
TPU-MLIR Getting Started Guide: https://tpumlir.org/docs/quick_start/index.html
Participants are required to conduct model training and testing on the Momodel platform, and finally submit their results on the platform.
Participants need to perform model conversion and deployment on the SOPHON.NET platform, and ultimately submit their results on the platform.
The finals will be organized in an offline format, structured as a Hackathon.
Final Score = Accuracy * 50 + Recall * 30 + F1 Score * 20.
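A small worked example of this formula, assuming Accuracy, Recall, and F1 are expressed as fractions in [0, 1] (the exact scale is not specified here):

```python
# Hedged example of the final-score formula above.
def final_score(accuracy, recall, f1):
    return accuracy * 50 + recall * 30 + f1 * 20

print(final_score(0.90, 0.85, 0.87))  # 45.0 + 25.5 + 17.4 = 87.9
```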
For the systems evaluated on the test set, their SPO (Subject-Predicate-Object) outputs are compared exactly against the manually annotated SPO results, and the F1 score is used as the evaluation metric. Note: for complex 'O' value types in an SPO, every slot must match exactly for the SPO to be counted as correctly extracted. To handle entity aliases in some texts, a Baidu Knowledge Graph alias dictionary is used to assist the evaluation. The F1 score is calculated as F1 = (2 * P * R) / (P + R), where P = number of correctly predicted SPOs in all test sentences / number of predicted SPOs in all test sentences, and R = number of correctly predicted SPOs in all test sentences / number of manually annotated SPOs in all test sentences.
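A minimal sketch of this SPO-level evaluation, assuming each prediction and gold annotation is represented as a (subject, predicate, object) tuple; the alias-dictionary normalization and the handling of complex 'O' value slots are omitted here:

```python
def spo_f1(predicted, gold):
    predicted, gold = set(predicted), set(gold)
    correct = len(predicted & gold)                    # correctly predicted SPOs
    p = correct / len(predicted) if predicted else 0.0
    r = correct / len(gold) if gold else 0.0
    return 2 * p * r / (p + r) if (p + r) else 0.0

pred = [("Zhang San", "birthplace", "Beijing"), ("Zhang San", "occupation", "doctor")]
gold = [("Zhang San", "birthplace", "Beijing"), ("Zhang San", "nationality", "China")]
print(spo_f1(pred, gold))  # P = 1/2, R = 1/2, F1 = 0.5
```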
The evaluation metrics comprise two parts: Precision, measured by the comprehensive score (consistent with Preliminary Task 1), and Speed, measured by the time taken to process all data.
The evaluation metrics likewise comprise two parts: Precision, measured by the F1 score (consistent with Preliminary Task 2), and Speed, measured by the time taken to process all text data.
The evaluation metrics consist of two components: Precision, measured by the F1 score, and Speed, measured by the time taken to process all images.
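An illustrative sketch of the Speed component, measuring the total wall-clock time to process all inputs; the predict function and the input list are hypothetical placeholders for a participant's own model and data:

```python
import time

def timed_run(predict, inputs):
    start = time.perf_counter()
    outputs = [predict(x) for x in inputs]
    elapsed = time.perf_counter() - start   # time taken to process all data
    return outputs, elapsed
```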