FUITextRecognitionView
@MainActor
open class FUITextRecognitionView : UIView
The FUITextRecognitionView can be used to perform text recognition using the device camera. Video captured from the device camera is shown in the entire bounds of the view. Use captureMaskPath to define the sub-frame of the view in which text is recognized; if captureMaskPath is set to nil, text in the full frame is recognized. The rest of the frame is dimmed by overlayView. This default behavior can be changed by setting the view and layer properties of overlayView. All observations, that is, recognized text blocks and their corresponding bounding boxes, are returned to observationHandler. To hint to the user which texts are recognized, use showTexts to render the observations on top of the captured video.
Call startSession to start capturing video; this is normally called when the view appears. Since text recognition models are memory- and CPU-intensive, recognition does not run until the user has stabilized the device camera. After that, observationHandler is called with the observations for every video frame. observationHandler can be used to perform validation and filtering on the observations; for example, accept the observations as a credit card number only if they contain 16 numeric digits in total, and discard anything else. Once the filtered observations meet the desired criteria, call stopSession to stop capturing video. Make sure to also call stopSession when the view disappears.
When initializing FUITextRecognitionView, you can specify the style you want. The default is the singleField style, which provides a default-sized capturing box for capturing the information of a single field (e.g. a phone number or email address). The other is the multi-field style, which provides a large-sized capturing box for capturing the information of multiple fields (e.g. a business card).
Example Initialization and Configuration
let recognitionView = FUITextRecognitionView()
recognitionView.observationHandler = { observations in
    let filteredObservations = <#filter out unwanted text#>
    // show text over the captured video to give the user feedback on what is scanned
    recognitionView.showTexts(for: filteredObservations)
    let areObservationsValid = <#filteredObservations meets desired criteria#>
    if areObservationsValid {
        DispatchQueue.main.async {
            // place the captured text in a text field
            textField.text = filteredObservations.map { $0.value }.joined(separator: " ")
        }
        // on returning true, the session is automatically stopped and observationHandler is no longer called
        return true
    }
    return false
}
// start the session to capture frames from the back camera
recognitionView.startSession()
-
Provide custom requests to be performed on the captured video frames, which are cropped based on captureMaskPath.
let mlmodel = ...
let vnmodel = try? VNCoreMLModel(for: mlmodel)
let request = VNCoreMLRequest(model: vnmodel!, completionHandler: self.detectTextHandler)
recognitionView.requests = [request]
Use the completionHandler in VNCoreMLRequest to get the observations.
If no custom requests are set, the view uses the default text recognition requests. In this case, supply observationHandler to get the observations.
Declaration
Swift
@MainActor open var requests: [VNImageBasedRequest] { get set } -
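The custom-request flow above passes the results to the request's completion handler rather than to observationHandler. A minimal sketch of such a handler, assuming a hypothetical detectTextHandler as named in the example (the concrete observation type in request.results depends on the supplied Core ML model):

```swift
import Vision

// Sketch: completion handler for a custom VNCoreMLRequest.
func detectTextHandler(request: VNRequest, error: Error?) {
    guard error == nil, let results = request.results else { return }
    for observation in results {
        // e.g. bounding-box observations from a text-detection model
        print(observation)
    }
}
```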
Called when the default requests are used and no custom requests are set. This is called for every captured video frame after startSession has been called and the device camera is stable, until stopSession is called or true is returned from the handler.
Declaration
Swift
@MainActor open var observationHandler: (([SAPMLTextObservation]) -> Bool)? -
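As a sketch of the validation described in the overview, the handler below accepts the scan only once the observations together contain exactly 16 digits (e.g. a credit card number). It assumes SAPMLTextObservation exposes the recognized string as value, as in the example above; cardNumberField is a hypothetical text field.

```swift
recognitionView.observationHandler = { observations in
    // keep only observations that are purely numeric
    let numeric = observations.filter { !$0.value.isEmpty && $0.value.allSatisfy(\.isNumber) }
    recognitionView.showTexts(for: numeric)
    let digitCount = numeric.reduce(0) { $0 + $1.value.count }
    guard digitCount == 16 else { return false }   // keep scanning
    DispatchQueue.main.async {
        cardNumberField.text = numeric.map { $0.value }.joined()
    }
    return true   // returning true stops the session automatically
}
```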
Style of the recognition view.
See FUITextRecognitionView.Style for descriptions of the constants used to specify the style.
Declaration
Swift
@MainActor public private(set) var style: Style { get } -
Captured video inside the bezier path's bounds is used to detect text. The path is translated to the center of the view.
Set this property to nil if you want to hide the capturing box. As a result, all text in the recognition view will be detected.
Declaration
Swift
@MainActor open var captureMaskPath: UIBezierPath? { get set } -
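A minimal sketch of a custom mask, assuming the documented behavior that the path is translated to the center of the view, so only its size and shape matter:

```swift
// A wide, rounded capture box for scanning a single line of text
recognitionView.captureMaskPath = UIBezierPath(
    roundedRect: CGRect(x: 0, y: 0, width: 280, height: 60),
    cornerRadius: 8)

// Or hide the capture box entirely and recognize text in the whole frame:
// recognitionView.captureMaskPath = nil
```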
Change this to customize the quality level or bitrate of the captured video.
Declaration
Swift
@MainActor open var sessionPreset: AVCaptureSession.Preset { get set } -
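As a sketch, a lower preset can reduce the memory and CPU load noted in the overview while remaining adequate for text recognition:

```swift
// 720p is usually sufficient for recognizing text and is cheaper to process
recognitionView.sessionPreset = .hd1280x720
```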
Dims the complete view excluding centered captureMaskPath.
Declaration
Swift
@MainActor open var overlayView: OverlayView { get } -
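Per the overview, the default dimming can be changed through the view and layer properties of overlayView. A sketch, assuming the dim is driven by an ordinary view property (the exact property is an implementation detail of OverlayView):

```swift
// Soften the dimming outside the capture box
recognitionView.overlayView.backgroundColor = UIColor.black.withAlphaComponent(0.4)
```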
The action button positioned at the bottom of the recognition view. This action is hidden in the .singleField style by default.
Declaration
Swift
@MainActor public let action: FUIButton -
Creates a recognition view object with the given frame and style.
Declaration
Swift
@MainActor public required init(frame: CGRect, style: Style = .singleField) -
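A sketch of creating the multi-field variant described earlier; the case name .multiField is an assumption, since only .singleField appears in this reference:

```swift
// A large capture box suitable for scanning several fields at once,
// e.g. a business card (.multiField is an assumed case name)
let cardScanner = FUITextRecognitionView(frame: view.bounds, style: .multiField)
view.addSubview(cardScanner)
cardScanner.startSession()
```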
Supply a subset of observations to render their text over the captured video. Supply an empty array to clear any rendered text.
Declaration
Swift
@MainActor open func showTexts(for observations: [SAPMLTextObservation], with checkingResults: [NSTextCheckingResult]?, attributes: [NSAttributedString.Key : Any] = [:], scaleToFit: Bool = true)
Parameters
for
Subset of observations from observationHandler
with
Only observations which are part of the checking results are used to render the text
attributes
Attributes applied to the rendered text
scaleToFit
When set to true, appropriately chooses the font size to completely overlay the rendered text on top of the text in the captured video, ignoring any font size supplied in attributes -
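A sketch of supplying checking results from NSDataDetector inside an observationHandler, to highlight only observations that look like phone numbers. It assumes the result ranges are interpreted against the joined observation text:

```swift
recognitionView.observationHandler = { observations in
    let joined = observations.map { $0.value }.joined(separator: " ")
    let detector = try? NSDataDetector(
        types: NSTextCheckingResult.CheckingType.phoneNumber.rawValue)
    let matches = detector?.matches(
        in: joined, range: NSRange(joined.startIndex..., in: joined)) ?? []
    // only observations covered by a match are rendered
    recognitionView.showTexts(for: observations,
                              with: matches,
                              attributes: [.foregroundColor: UIColor.systemGreen])
    return false
}
```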
Start capturing the video from the device camera. However, requests are not performed until after the camera stabilizes.
Declaration
Swift
@MainActor public func startSession() -
Stop capturing the video from device camera.
Declaration
Swift
@MainActor public func stopSession() -
Dims the complete FUITextRecognitionView excluding the centered captureMaskPath. Since this is backed by a CAShapeLayer, strokeColor can be set to change the color of the captureMaskPath.
Declaration
Swift
@MainActor final public class OverlayView : UIView -
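A sketch of changing the capture-box outline color; per the note above, OverlayView is backed by a CAShapeLayer, so the cast is expected to succeed:

```swift
if let shapeLayer = recognitionView.overlayView.layer as? CAShapeLayer {
    shapeLayer.strokeColor = UIColor.systemYellow.cgColor
    shapeLayer.lineWidth = 2
}
```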
Constants for recognition view styles.
Declaration
Swift
public enum Style